00:00:00.001 Started by upstream project "autotest-nightly" build number 3919
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3294
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.008 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.009 The recommended git tool is: git
00:00:00.010 using credential 00000000-0000-0000-0000-000000000002
00:00:00.012 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.022 Fetching changes from the remote Git repository
00:00:00.023 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.034 Using shallow fetch with depth 1
00:00:00.034 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.034 > git --version # timeout=10
00:00:00.046 > git --version # 'git version 2.39.2'
00:00:00.046 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.061 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.061 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.177 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.187 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.198 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:02.198 > git config core.sparsecheckout # timeout=10
00:00:02.207 > git read-tree -mu HEAD # timeout=10
00:00:02.221 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:02.250 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:02.250 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:02.480 [Pipeline] Start of Pipeline
00:00:02.494 [Pipeline] library
00:00:02.495 Loading library shm_lib@master
00:00:02.496 Library shm_lib@master is cached. Copying from home.
00:00:02.516 [Pipeline] node
00:00:02.527 Running on VM-host-SM9 in /var/jenkins/workspace/ubuntu24-vg-autotest
00:00:02.529 [Pipeline] {
00:00:02.546 [Pipeline] catchError
00:00:02.549 [Pipeline] {
00:00:02.568 [Pipeline] wrap
00:00:02.578 [Pipeline] {
00:00:02.588 [Pipeline] stage
00:00:02.590 [Pipeline] { (Prologue)
00:00:02.607 [Pipeline] echo
00:00:02.608 Node: VM-host-SM9
00:00:02.612 [Pipeline] cleanWs
00:00:02.621 [WS-CLEANUP] Deleting project workspace...
00:00:02.621 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.626 [WS-CLEANUP] done
00:00:02.783 [Pipeline] setCustomBuildProperty
00:00:02.853 [Pipeline] httpRequest
00:00:02.872 [Pipeline] echo
00:00:02.873 Sorcerer 10.211.164.101 is alive
00:00:02.879 [Pipeline] httpRequest
00:00:02.882 HttpMethod: GET
00:00:02.883 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:02.883 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:02.884 Response Code: HTTP/1.1 200 OK
00:00:02.884 Success: Status code 200 is in the accepted range: 200,404
00:00:02.885 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:03.028 [Pipeline] sh
00:00:03.307 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:03.324 [Pipeline] httpRequest
00:00:03.341 [Pipeline] echo
00:00:03.342 Sorcerer 10.211.164.101 is alive
00:00:03.348 [Pipeline] httpRequest
00:00:03.352 HttpMethod: GET
00:00:03.352 URL: http://10.211.164.101/packages/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:00:03.352 Sending request to url: http://10.211.164.101/packages/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:00:03.354 Response Code: HTTP/1.1 200 OK
00:00:03.354 Success: Status code 200 is in the accepted range: 200,404
00:00:03.354 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:00:21.237 [Pipeline] sh
00:00:21.516 + tar --no-same-owner -xf spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:00:24.814 [Pipeline] sh
00:00:25.096 + git -C spdk log --oneline -n5
00:00:25.096 d005e023b raid: fix empty slot not updated in sb after resize
00:00:25.096 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:00:25.096 8ee2672c4 test/bdev: Add test for resized RAID with superblock
00:00:25.096 19f5787c8 raid: skip configured base bdevs in sb examine
00:00:25.096 3b9baa5f8 bdev/raid1: Support resize when increasing the size of base bdevs
00:00:25.115 [Pipeline] writeFile
00:00:25.131 [Pipeline] sh
00:00:25.413 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:25.425 [Pipeline] sh
00:00:25.706 + cat autorun-spdk.conf
00:00:25.706 SPDK_TEST_UNITTEST=1
00:00:25.706 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.706 SPDK_TEST_NVME=1
00:00:25.706 SPDK_TEST_BLOCKDEV=1
00:00:25.706 SPDK_RUN_ASAN=1
00:00:25.706 SPDK_RUN_UBSAN=1
00:00:25.706 SPDK_TEST_RAID5=1
00:00:25.706 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:25.714 RUN_NIGHTLY=1
00:00:25.715 [Pipeline] }
00:00:25.733 [Pipeline] // stage
00:00:25.748 [Pipeline] stage
00:00:25.750 [Pipeline] { (Run VM)
00:00:25.765 [Pipeline] sh
00:00:26.046 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:26.046 + echo 'Start stage prepare_nvme.sh'
00:00:26.046 Start stage prepare_nvme.sh
00:00:26.046 + [[ -n 0 ]]
00:00:26.046 + disk_prefix=ex0
00:00:26.046 + [[ -n /var/jenkins/workspace/ubuntu24-vg-autotest ]]
00:00:26.046 + [[ -e /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf ]]
00:00:26.046 + source /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf
00:00:26.046 ++ SPDK_TEST_UNITTEST=1
00:00:26.046 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:26.046 ++ SPDK_TEST_NVME=1
00:00:26.046 ++ SPDK_TEST_BLOCKDEV=1
00:00:26.046 ++ SPDK_RUN_ASAN=1
00:00:26.046 ++ SPDK_RUN_UBSAN=1
00:00:26.046 ++ SPDK_TEST_RAID5=1
00:00:26.046 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:26.046 ++ RUN_NIGHTLY=1
00:00:26.046 + cd /var/jenkins/workspace/ubuntu24-vg-autotest
00:00:26.046 + nvme_files=()
00:00:26.046 + declare -A nvme_files
00:00:26.046 + backend_dir=/var/lib/libvirt/images/backends
00:00:26.046 + nvme_files['nvme.img']=5G
00:00:26.046 + nvme_files['nvme-cmb.img']=5G
00:00:26.046 + nvme_files['nvme-multi0.img']=4G
00:00:26.046 + nvme_files['nvme-multi1.img']=4G
00:00:26.046 + nvme_files['nvme-multi2.img']=4G
00:00:26.046 + nvme_files['nvme-openstack.img']=8G
00:00:26.046 + nvme_files['nvme-zns.img']=5G
00:00:26.046 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:26.046 + (( SPDK_TEST_FTL == 1 ))
00:00:26.046 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:26.046 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:26.046 + for nvme in "${!nvme_files[@]}"
00:00:26.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:00:26.046 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:26.046 + for nvme in "${!nvme_files[@]}"
00:00:26.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:00:26.046 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:26.046 + for nvme in "${!nvme_files[@]}"
00:00:26.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:00:26.046 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:26.046 + for nvme in "${!nvme_files[@]}"
00:00:26.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:00:26.046 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:26.046 + for nvme in "${!nvme_files[@]}"
00:00:26.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:00:26.046 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:26.046 + for nvme in "${!nvme_files[@]}"
00:00:26.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:00:26.305 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:26.305 + for nvme in "${!nvme_files[@]}"
00:00:26.305 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:00:26.305 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:26.305 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:00:26.305 + echo 'End stage prepare_nvme.sh'
00:00:26.305 End stage prepare_nvme.sh
00:00:26.317 [Pipeline] sh
00:00:26.598 + DISTRO=ubuntu2404 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:26.598 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -H -a -v -f ubuntu2404
00:00:26.598
00:00:26.598 DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant
00:00:26.598 SPDK_DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk
00:00:26.598 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu24-vg-autotest
00:00:26.598 HELP=0
00:00:26.598 DRY_RUN=0
00:00:26.598 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,
00:00:26.598 NVME_DISKS_TYPE=nvme,
00:00:26.598 NVME_AUTO_CREATE=0
00:00:26.598 NVME_DISKS_NAMESPACES=,
00:00:26.598 NVME_CMB=,
00:00:26.598 NVME_PMR=,
00:00:26.598 NVME_ZNS=,
00:00:26.598 NVME_MS=,
00:00:26.598 NVME_FDP=,
00:00:26.598 SPDK_VAGRANT_DISTRO=ubuntu2404
00:00:26.598 SPDK_VAGRANT_VMCPU=10
00:00:26.598 SPDK_VAGRANT_VMRAM=12288
00:00:26.598 SPDK_VAGRANT_PROVIDER=libvirt
00:00:26.598 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:26.598 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:26.598 SPDK_OPENSTACK_NETWORK=0
00:00:26.598 VAGRANT_PACKAGE_BOX=0
00:00:26.598 VAGRANTFILE=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:26.598 FORCE_DISTRO=true
00:00:26.598 VAGRANT_BOX_VERSION=
00:00:26.598 EXTRA_VAGRANTFILES=
00:00:26.598 NIC_MODEL=e1000
00:00:26.598
00:00:26.598 mkdir: created directory '/var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt'
00:00:26.598 /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt /var/jenkins/workspace/ubuntu24-vg-autotest
00:00:29.130 Bringing machine 'default' up with 'libvirt' provider...
00:00:29.698 ==> default: Creating image (snapshot of base box volume).
00:00:29.698 ==> default: Creating domain with the following settings...
00:00:29.698 ==> default: -- Name: ubuntu2404-24.04-1720510786-2314_default_1721864665_09a87d4d313bf6506558
00:00:29.698 ==> default: -- Domain type: kvm
00:00:29.698 ==> default: -- Cpus: 10
00:00:29.698 ==> default: -- Feature: acpi
00:00:29.698 ==> default: -- Feature: apic
00:00:29.698 ==> default: -- Feature: pae
00:00:29.698 ==> default: -- Memory: 12288M
00:00:29.698 ==> default: -- Memory Backing: hugepages:
00:00:29.698 ==> default: -- Management MAC:
00:00:29.698 ==> default: -- Loader:
00:00:29.698 ==> default: -- Nvram:
00:00:29.698 ==> default: -- Base box: spdk/ubuntu2404
00:00:29.698 ==> default: -- Storage pool: default
00:00:29.698 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2404-24.04-1720510786-2314_default_1721864665_09a87d4d313bf6506558.img (20G)
00:00:29.698 ==> default: -- Volume Cache: default
00:00:29.698 ==> default: -- Kernel:
00:00:29.698 ==> default: -- Initrd:
00:00:29.698 ==> default: -- Graphics Type: vnc
00:00:29.698 ==> default: -- Graphics Port: -1
00:00:29.698 ==> default: -- Graphics IP: 127.0.0.1
00:00:29.698 ==> default: -- Graphics Password: Not defined
00:00:29.698 ==> default: -- Video Type: cirrus
00:00:29.698 ==> default: -- Video VRAM: 9216
00:00:29.698 ==> default: -- Sound Type:
00:00:29.698 ==> default: -- Keymap: en-us
00:00:29.698 ==> default: -- TPM Path:
00:00:29.698 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:29.698 ==> default: -- Command line args:
00:00:29.698 ==> default: -> value=-device,
00:00:29.698 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:29.698 ==> default: -> value=-drive,
00:00:29.698 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:00:29.698 ==> default: -> value=-device,
00:00:29.698 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.957 ==> default: Creating shared folders metadata...
00:00:29.957 ==> default: Starting domain.
00:00:31.336 ==> default: Waiting for domain to get an IP address...
00:00:41.311 ==> default: Waiting for SSH to become available...
00:00:42.249 ==> default: Configuring and enabling network interfaces...
00:00:47.519 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:00:52.788 ==> default: Mounting SSHFS shared folder...
00:00:54.166 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output => /home/vagrant/spdk_repo/output
00:00:54.166 ==> default: Checking Mount..
00:00:54.732 ==> default: Folder Successfully Mounted!
00:00:54.732 ==> default: Running provisioner: file...
00:00:54.991 default: ~/.gitconfig => .gitconfig
00:00:55.249
00:00:55.249 SUCCESS!
00:00:55.249
00:00:55.249 cd to /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt and type "vagrant ssh" to use.
00:00:55.249 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:00:55.249 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt" to destroy all trace of vm.
00:00:55.249
00:00:55.258 [Pipeline] }
00:00:55.274 [Pipeline] // stage
00:00:55.283 [Pipeline] dir
00:00:55.283 Running in /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt
00:00:55.285 [Pipeline] {
00:00:55.297 [Pipeline] catchError
00:00:55.299 [Pipeline] {
00:00:55.311 [Pipeline] sh
00:00:55.590 + vagrant ssh-config --host vagrant
00:00:55.590 + sed -ne /^Host/,$p
00:00:55.590 + tee ssh_conf
00:00:58.872 Host vagrant
00:00:58.872 HostName 192.168.121.123
00:00:58.872 User vagrant
00:00:58.872 Port 22
00:00:58.872 UserKnownHostsFile /dev/null
00:00:58.872 StrictHostKeyChecking no
00:00:58.872 PasswordAuthentication no
00:00:58.872 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2404/24.04-1720510786-2314/libvirt/ubuntu2404
00:00:58.872 IdentitiesOnly yes
00:00:58.872 LogLevel FATAL
00:00:58.872 ForwardAgent yes
00:00:58.872 ForwardX11 yes
00:00:58.872
00:00:58.884 [Pipeline] withEnv
00:00:58.887 [Pipeline] {
00:00:58.901 [Pipeline] sh
00:00:59.187 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:00:59.187 source /etc/os-release
00:00:59.187 [[ -e /image.version ]] && img=$(< /image.version)
00:00:59.187 # Minimal, systemd-like check.
00:00:59.187 if [[ -e /.dockerenv ]]; then
00:00:59.187 # Clear garbage from the node's name:
00:00:59.187 # agt-er_autotest_547-896 -> autotest_547-896
00:00:59.187 # $HOSTNAME is the actual container id
00:00:59.187 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:00:59.187 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:00:59.187 # We can assume this is a mount from a host where container is running,
00:00:59.187 # so fetch its hostname to easily identify the target swarm worker.
00:00:59.187 container="$(< /etc/hostname) ($agent)"
00:00:59.187 else
00:00:59.187 # Fallback
00:00:59.187 container=$agent
00:00:59.187 fi
00:00:59.187 fi
00:00:59.187 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:00:59.187
00:00:59.467 [Pipeline] }
00:00:59.486 [Pipeline] // withEnv
00:00:59.495 [Pipeline] setCustomBuildProperty
00:00:59.511 [Pipeline] stage
00:00:59.514 [Pipeline] { (Tests)
00:00:59.533 [Pipeline] sh
00:00:59.813 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:00.086 [Pipeline] sh
00:01:00.366 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:00.640 [Pipeline] timeout
00:01:00.641 Timeout set to expire in 1 hr 30 min
00:01:00.643 [Pipeline] {
00:01:00.659 [Pipeline] sh
00:01:00.939 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:01.507 HEAD is now at d005e023b raid: fix empty slot not updated in sb after resize
00:01:01.520 [Pipeline] sh
00:01:01.801 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:02.073 [Pipeline] sh
00:01:02.353 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:02.627 [Pipeline] sh
00:01:02.907 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu24-vg-autotest ./autoruner.sh spdk_repo
00:01:03.166 ++ readlink -f spdk_repo
00:01:03.166 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:03.166 + [[ -n /home/vagrant/spdk_repo ]]
00:01:03.166 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:03.166 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:03.166 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:03.166 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:03.166 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:03.166 + [[ ubuntu24-vg-autotest == pkgdep-* ]]
00:01:03.166 + cd /home/vagrant/spdk_repo
00:01:03.166 + source /etc/os-release
00:01:03.166 ++ PRETTY_NAME='Ubuntu 24.04 LTS'
00:01:03.166 ++ NAME=Ubuntu
00:01:03.166 ++ VERSION_ID=24.04
00:01:03.166 ++ VERSION='24.04 LTS (Noble Numbat)'
00:01:03.166 ++ VERSION_CODENAME=noble
00:01:03.166 ++ ID=ubuntu
00:01:03.166 ++ ID_LIKE=debian
00:01:03.166 ++ HOME_URL=https://www.ubuntu.com/
00:01:03.166 ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:03.166 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:03.166 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:03.166 ++ UBUNTU_CODENAME=noble
00:01:03.166 ++ LOGO=ubuntu-logo
00:01:03.166 + uname -a
00:01:03.166 Linux ubuntu2404-cloud-1720510786-2314 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:03.166 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:03.426 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:01:03.426 Hugepages
00:01:03.426 node hugesize free / total
00:01:03.426 node0 1048576kB 0 / 0
00:01:03.426 node0 2048kB 0 / 0
00:01:03.426
00:01:03.426 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:03.426 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:03.426 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:03.426 + rm -f /tmp/spdk-ld-path
00:01:03.426 + source autorun-spdk.conf
00:01:03.426 ++ SPDK_TEST_UNITTEST=1
00:01:03.426 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:03.426 ++ SPDK_TEST_NVME=1
00:01:03.426 ++ SPDK_TEST_BLOCKDEV=1
00:01:03.426 ++ SPDK_RUN_ASAN=1
00:01:03.426 ++ SPDK_RUN_UBSAN=1
00:01:03.426 ++ SPDK_TEST_RAID5=1
00:01:03.426 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:03.426 ++ RUN_NIGHTLY=1
00:01:03.426 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:03.426 + [[ -n '' ]]
00:01:03.426 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:03.426 + for M in /var/spdk/build-*-manifest.txt
00:01:03.426 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:03.426 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:03.685 + for M in /var/spdk/build-*-manifest.txt
00:01:03.685 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:03.685 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:03.685 ++ uname
00:01:03.685 + [[ Linux == \L\i\n\u\x ]]
00:01:03.685 + sudo dmesg -T
00:01:03.685 + sudo dmesg --clear
00:01:03.685 + dmesg_pid=2384
00:01:03.685 + sudo dmesg -Tw
00:01:03.685 + [[ Ubuntu == FreeBSD ]]
00:01:03.685 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:03.685 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:03.685 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:03.685 + [[ -x /usr/src/fio-static/fio ]]
00:01:03.685 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:03.685 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:03.685 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:03.685 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:03.685 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:03.685 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:03.685 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:03.685 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:03.685 Test configuration:
00:01:03.685 SPDK_TEST_UNITTEST=1
00:01:03.685 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:03.685 SPDK_TEST_NVME=1
00:01:03.685 SPDK_TEST_BLOCKDEV=1
00:01:03.685 SPDK_RUN_ASAN=1
00:01:03.685 SPDK_RUN_UBSAN=1
00:01:03.685 SPDK_TEST_RAID5=1
00:01:03.685 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:03.685 RUN_NIGHTLY=1
00:01:03.685 23:44:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:03.685 23:44:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:03.685 23:44:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:03.685 23:44:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:03.685 23:44:58 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:03.685 23:44:58 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:03.685 23:44:58 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:03.685 23:44:58 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:03.685 23:44:58 -- paths/export.sh@6 -- $ export PATH
00:01:03.685 23:44:58 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:03.685 23:44:58 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:03.685 23:44:58 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:03.685 23:44:58 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721864698.XXXXXX
00:01:03.685 23:44:58 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721864698.XVxBQY
00:01:03.685 23:44:58 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:03.685 23:44:58 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:03.685 23:44:58 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:03.685 23:44:58 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:03.685 23:44:58 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:03.685 23:44:58 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:03.685 23:44:58 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:03.685 23:44:58 -- common/autotest_common.sh@10 -- $ set +x
00:01:03.685 23:44:58 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:03.685 23:44:58 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:03.685 23:44:58 -- pm/common@17 -- $ local monitor
00:01:03.685 23:44:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:03.685 23:44:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:03.685 23:44:58 -- pm/common@25 -- $ sleep 1
00:01:03.685 23:44:58 -- pm/common@21 -- $ date +%s
00:01:03.685 23:44:58 -- pm/common@21 -- $ date +%s
00:01:03.685 23:44:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721864698
00:01:03.685 23:44:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721864698
00:01:03.685 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721864698_collect-vmstat.pm.log
00:01:03.685 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721864698_collect-cpu-load.pm.log
00:01:05.063 23:44:59 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:05.063 23:44:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:05.063 23:44:59 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:05.063 23:44:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:05.063 23:44:59 -- spdk/autobuild.sh@16 -- $ date -u
00:01:05.063 Wed Jul 24 23:44:59 UTC 2024
00:01:05.063 23:44:59 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:05.063 v24.09-pre-318-gd005e023b
00:01:05.063 23:44:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:05.063 23:44:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:05.063 23:44:59 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:05.063 23:44:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:05.063 23:44:59 -- common/autotest_common.sh@10 -- $ set +x
00:01:05.063 ************************************
00:01:05.063 START TEST asan
00:01:05.063 ************************************
00:01:05.063 using asan
00:01:05.063 23:44:59 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:01:05.063
00:01:05.063 real 0m0.000s
00:01:05.063 user 0m0.000s
00:01:05.063 sys 0m0.000s
00:01:05.063 23:44:59 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:05.063 ************************************
00:01:05.063 23:44:59 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:05.063 END TEST asan
00:01:05.063 ************************************
00:01:05.063 23:44:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:05.063 23:44:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:05.063 23:44:59 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:05.063 23:44:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:05.063 23:44:59 -- common/autotest_common.sh@10 -- $ set +x
00:01:05.063 ************************************
00:01:05.063 START TEST ubsan
00:01:05.063 ************************************
00:01:05.063 using ubsan
00:01:05.063 23:44:59 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:05.063
00:01:05.063 real 0m0.000s
00:01:05.063 user 0m0.000s
00:01:05.063 sys 0m0.000s
00:01:05.063 23:44:59 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:05.063 ************************************
00:01:05.063 END TEST ubsan
00:01:05.063 ************************************
00:01:05.063 23:44:59 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:05.063 23:44:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:05.063 23:44:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:05.063 23:44:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:05.063 23:44:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:05.063 23:44:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:05.063 23:44:59 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:01:05.063 23:44:59 -- spdk/autobuild.sh@58 -- $ unittest_build
00:01:05.063 23:44:59 -- common/autobuild_common.sh@423 -- $ run_test unittest_build _unittest_build
00:01:05.063 23:44:59 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:05.063 23:44:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:05.063 23:44:59 -- common/autotest_common.sh@10 -- $ set +x
00:01:05.063 ************************************
00:01:05.063 START TEST unittest_build
00:01:05.063 ************************************
00:01:05.063 23:44:59 unittest_build -- common/autotest_common.sh@1125 -- $ _unittest_build
00:01:05.063 23:44:59 unittest_build -- common/autobuild_common.sh@414 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --without-shared
00:01:05.063 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:05.063 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:05.631 Using 'verbs' RDMA provider
00:01:21.450 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:33.654 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:33.654 Creating mk/config.mk...done.
00:01:33.654 Creating mk/cc.flags.mk...done.
00:01:33.654 Type 'make' to build.
00:01:33.654 23:45:29 unittest_build -- common/autobuild_common.sh@415 -- $ make -j10
00:01:33.913 make[1]: Nothing to be done for 'all'.
00:01:48.790 The Meson build system
00:01:48.790 Version: 1.4.1
00:01:48.790 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:01:48.790 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:01:48.790 Build type: native build
00:01:48.790 Program cat found: YES (/usr/bin/cat)
00:01:48.790 Project name: DPDK
00:01:48.790 Project version: 24.03.0
00:01:48.790 C compiler for the host machine: cc (gcc 13.2.0 "cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0")
00:01:48.790 C linker for the host machine: cc ld.bfd 2.42
00:01:48.790 Host machine cpu family: x86_64
00:01:48.791 Host machine cpu: x86_64
00:01:48.791 Message: ## Building in Developer Mode ##
00:01:48.791 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:48.791 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:01:48.791 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:48.791 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3)
00:01:48.791 Program cat found: YES (/usr/bin/cat)
00:01:48.791 Compiler for C supports arguments -march=native: YES
00:01:48.791 Checking for size of "void *" : 8
00:01:48.791 Checking for size of "void *" : 8 (cached)
00:01:48.791 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:48.791 Library m found: YES
00:01:48.791 Library numa found: YES
00:01:48.791 Has header "numaif.h" : YES
00:01:48.791 Library fdt found: NO
00:01:48.791 Library execinfo found: NO
00:01:48.791 Has header "execinfo.h" : YES
00:01:48.791 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1
00:01:48.791 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:48.791 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:48.791 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:48.791 Run-time dependency openssl found: YES 3.0.13
00:01:48.791 Run-time dependency libpcap found: NO (tried pkgconfig)
00:01:48.791 Library pcap found: NO
00:01:48.791 Compiler for C supports arguments -Wcast-qual: YES
00:01:48.791 Compiler for C supports arguments -Wdeprecated: YES
00:01:48.791 Compiler for C supports arguments -Wformat: YES
00:01:48.791 Compiler for C supports arguments -Wformat-nonliteral: YES
00:01:48.791 Compiler for C supports arguments -Wformat-security: YES
00:01:48.791 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:48.791 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:48.791 Compiler for C supports arguments -Wnested-externs: YES
00:01:48.791 Compiler for C supports arguments -Wold-style-definition: YES
00:01:48.791 Compiler for C supports arguments -Wpointer-arith: YES
00:01:48.791 Compiler for C supports arguments -Wsign-compare: YES
00:01:48.791 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:48.791 Compiler for C supports arguments -Wundef: YES
00:01:48.791 Compiler for C supports arguments -Wwrite-strings: YES
00:01:48.791 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:48.791 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:48.791 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:48.791 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:48.791 Program objdump found: YES (/usr/bin/objdump)
00:01:48.791 Compiler for C supports arguments -mavx512f: YES
00:01:48.791 Checking if "AVX512 checking" compiles: YES
00:01:48.791 Fetching value of define "__SSE4_2__" : 1
00:01:48.791 Fetching value of define "__AES__" : 1
"__AES__" : 1 00:01:48.791 Fetching value of define "__AVX__" : 1 00:01:48.791 Fetching value of define "__AVX2__" : 1 00:01:48.791 Fetching value of define "__AVX512BW__" : (undefined) 00:01:48.791 Fetching value of define "__AVX512CD__" : (undefined) 00:01:48.791 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:48.791 Fetching value of define "__AVX512F__" : (undefined) 00:01:48.791 Fetching value of define "__AVX512VL__" : (undefined) 00:01:48.791 Fetching value of define "__PCLMUL__" : 1 00:01:48.791 Fetching value of define "__RDRND__" : 1 00:01:48.791 Fetching value of define "__RDSEED__" : 1 00:01:48.791 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:48.791 Fetching value of define "__znver1__" : (undefined) 00:01:48.791 Fetching value of define "__znver2__" : (undefined) 00:01:48.791 Fetching value of define "__znver3__" : (undefined) 00:01:48.791 Fetching value of define "__znver4__" : (undefined) 00:01:48.791 Library asan found: YES 00:01:48.791 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.791 Message: lib/log: Defining dependency "log" 00:01:48.791 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.791 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.791 Library rt found: YES 00:01:48.791 Checking for function "getentropy" : NO 00:01:48.791 Message: lib/eal: Defining dependency "eal" 00:01:48.791 Message: lib/ring: Defining dependency "ring" 00:01:48.791 Message: lib/rcu: Defining dependency "rcu" 00:01:48.791 Message: lib/mempool: Defining dependency "mempool" 00:01:48.791 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.791 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.791 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.791 Compiler for C supports arguments -mpclmul: YES 00:01:48.791 Compiler for C supports arguments -maes: YES 00:01:48.791 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.791 Compiler for C supports arguments -mavx512bw: YES 00:01:48.791 Compiler for C supports arguments -mavx512dq: YES 00:01:48.791 Compiler for C supports arguments -mavx512vl: YES 00:01:48.791 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.791 Compiler for C supports arguments -mavx2: YES 00:01:48.791 Compiler for C supports arguments -mavx: YES 00:01:48.791 Message: lib/net: Defining dependency "net" 00:01:48.791 Message: lib/meter: Defining dependency "meter" 00:01:48.791 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.791 Message: lib/pci: Defining dependency "pci" 00:01:48.791 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.791 Message: lib/hash: Defining dependency "hash" 00:01:48.791 Message: lib/timer: Defining dependency "timer" 00:01:48.791 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.791 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.791 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.791 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.791 Message: lib/power: Defining dependency "power" 00:01:48.791 Message: lib/reorder: Defining dependency "reorder" 00:01:48.791 Message: lib/security: Defining dependency "security" 00:01:48.791 Has header "linux/userfaultfd.h" : YES 00:01:48.791 Has header "linux/vduse.h" : YES 00:01:48.791 Message: lib/vhost: Defining dependency "vhost" 00:01:48.791 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:48.791 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:48.791 Message: 
00:01:48.791 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:48.791 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:48.791 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:48.791 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:48.791 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:48.791 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:48.791 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:48.791 Program doxygen found: YES (/usr/bin/doxygen)
00:01:48.791 Configuring doxy-api-html.conf using configuration
00:01:48.791 Configuring doxy-api-man.conf using configuration
00:01:48.791 Program mandb found: YES (/usr/bin/mandb)
00:01:48.791 Program sphinx-build found: NO
00:01:48.791 Configuring rte_build_config.h using configuration
00:01:48.791 Message:
00:01:48.791 =================
00:01:48.791 Applications Enabled
00:01:48.791 =================
00:01:48.791
00:01:48.791 apps:
00:01:48.791
00:01:48.791
00:01:48.791 Message:
00:01:48.791 =================
00:01:48.791 Libraries Enabled
00:01:48.791 =================
00:01:48.791
00:01:48.791 libs:
00:01:48.791 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:48.791 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:48.791 cryptodev, dmadev, power, reorder, security, vhost,
00:01:48.791
00:01:48.791 Message:
00:01:48.791 ===============
00:01:48.791 Drivers Enabled
00:01:48.791 ===============
00:01:48.791
00:01:48.791 common:
00:01:48.791
00:01:48.791 bus:
00:01:48.791 pci, vdev,
00:01:48.791 mempool:
00:01:48.791 ring,
00:01:48.791 dma:
00:01:48.791
00:01:48.791 net:
00:01:48.791
00:01:48.791 crypto:
00:01:48.791
00:01:48.791 compress:
00:01:48.791
00:01:48.791 vdpa:
00:01:48.791
00:01:48.791
00:01:48.791 Message:
00:01:48.791 =================
00:01:48.791 Content Skipped
00:01:48.791 =================
00:01:48.791
00:01:48.791 apps:
00:01:48.791 dumpcap: explicitly disabled via build config
00:01:48.791 graph: explicitly disabled via build config
00:01:48.791 pdump: explicitly disabled via build config
00:01:48.791 proc-info: explicitly disabled via build config
00:01:48.791 test-acl: explicitly disabled via build config
00:01:48.791 test-bbdev: explicitly disabled via build config
00:01:48.791 test-cmdline: explicitly disabled via build config
00:01:48.791 test-compress-perf: explicitly disabled via build config
00:01:48.791 test-crypto-perf: explicitly disabled via build config
00:01:48.791 test-dma-perf: explicitly disabled via build config
00:01:48.791 test-eventdev: explicitly disabled via build config
00:01:48.791 test-fib: explicitly disabled via build config
00:01:48.791 test-flow-perf: explicitly disabled via build config
00:01:48.791 test-gpudev: explicitly disabled via build config
00:01:48.791 test-mldev: explicitly disabled via build config
00:01:48.791 test-pipeline: explicitly disabled via build config
00:01:48.791 test-pmd: explicitly disabled via build config
00:01:48.791 test-regex: explicitly disabled via build config
00:01:48.791 test-sad: explicitly disabled via build config
00:01:48.791 test-security-perf: explicitly disabled via build config
00:01:48.791
00:01:48.791 libs:
00:01:48.791 argparse: explicitly disabled via build config
00:01:48.791 metrics: explicitly disabled via build config
00:01:48.791 acl: explicitly disabled via build config
00:01:48.791 bbdev: explicitly disabled via build config
00:01:48.791 bitratestats: explicitly disabled via build config
00:01:48.791 bpf: explicitly disabled via build config
00:01:48.791 cfgfile: explicitly disabled via build config
00:01:48.791 distributor: explicitly disabled via build config
00:01:48.791 efd: explicitly disabled via build config
00:01:48.791 eventdev: explicitly disabled via build config
00:01:48.791 dispatcher: explicitly disabled via build config
00:01:48.791 gpudev: explicitly disabled via build config
00:01:48.791 gro: explicitly disabled via build config
00:01:48.792 gso: explicitly disabled via build config
00:01:48.792 ip_frag: explicitly disabled via build config
00:01:48.792 jobstats: explicitly disabled via build config
00:01:48.792 latencystats: explicitly disabled via build config
00:01:48.792 lpm: explicitly disabled via build config
00:01:48.792 member: explicitly disabled via build config
00:01:48.792 pcapng: explicitly disabled via build config
00:01:48.792 rawdev: explicitly disabled via build config
00:01:48.792 regexdev: explicitly disabled via build config
00:01:48.792 mldev: explicitly disabled via build config
00:01:48.792 rib: explicitly disabled via build config
00:01:48.792 sched: explicitly disabled via build config
00:01:48.792 stack: explicitly disabled via build config
00:01:48.792 ipsec: explicitly disabled via build config
00:01:48.792 pdcp: explicitly disabled via build config
00:01:48.792 fib: explicitly disabled via build config
00:01:48.792 port: explicitly disabled via build config
00:01:48.792 pdump: explicitly disabled via build config
00:01:48.792 table: explicitly disabled via build config
00:01:48.792 pipeline: explicitly disabled via build config
00:01:48.792 graph: explicitly disabled via build config
00:01:48.792 node: explicitly disabled via build config
00:01:48.792
00:01:48.792 drivers:
00:01:48.792 common/cpt: not in enabled drivers build config
00:01:48.792 common/dpaax: not in enabled drivers build config
00:01:48.792 common/iavf: not in enabled drivers build config
00:01:48.792 common/idpf: not in enabled drivers build config
00:01:48.792 common/ionic: not in enabled drivers build config
00:01:48.792 common/mvep: not in enabled drivers build config
00:01:48.792 common/octeontx: not in enabled drivers build config
00:01:48.792 bus/auxiliary: not in enabled drivers build config
00:01:48.792 bus/cdx: not in enabled drivers build config
00:01:48.792 bus/dpaa: not in enabled drivers build config
00:01:48.792 bus/fslmc: not in enabled drivers build config
00:01:48.792 bus/ifpga: not in enabled drivers build config
00:01:48.792 bus/platform: not in enabled drivers build config
00:01:48.792 bus/uacce: not in enabled drivers build config
00:01:48.792 bus/vmbus: not in enabled drivers build config
00:01:48.792 common/cnxk: not in enabled drivers build config
00:01:48.792 common/mlx5: not in enabled drivers build config
00:01:48.792 common/nfp: not in enabled drivers build config
00:01:48.792 common/nitrox: not in enabled drivers build config
00:01:48.792 common/qat: not in enabled drivers build config
00:01:48.792 common/sfc_efx: not in enabled drivers build config
00:01:48.792 mempool/bucket: not in enabled drivers build config
00:01:48.792 mempool/cnxk: not in enabled drivers build config
00:01:48.792 mempool/dpaa: not in enabled drivers build config
00:01:48.792 mempool/dpaa2: not in enabled drivers build config
00:01:48.792 mempool/octeontx: not in enabled drivers build config
00:01:48.792 mempool/stack: not in enabled drivers build config
00:01:48.792 dma/cnxk: not in enabled drivers build config
00:01:48.792 dma/dpaa: not in enabled drivers build config
00:01:48.792 dma/dpaa2: not in enabled drivers build config
00:01:48.792 dma/hisilicon: not in enabled drivers build config
00:01:48.792 dma/idxd: not in enabled drivers build config
00:01:48.792 dma/ioat: not in enabled drivers build config
00:01:48.792 dma/skeleton: not in enabled drivers build config
00:01:48.792 net/af_packet: not in enabled drivers build config
00:01:48.792 net/af_xdp: not in enabled drivers build config
00:01:48.792 net/ark: not in enabled drivers build config
00:01:48.792 net/atlantic: not in enabled drivers build config
00:01:48.792 net/avp: not in enabled drivers build config
00:01:48.792 net/axgbe: not in enabled drivers build config
00:01:48.792 net/bnx2x: not in enabled drivers build config
00:01:48.792 net/bnxt: not in enabled drivers build config
00:01:48.792 net/bonding: not in enabled drivers build config
00:01:48.792 net/cnxk: not in enabled drivers build config
00:01:48.792 net/cpfl: not in enabled drivers build config
00:01:48.792 net/cxgbe: not in enabled drivers build config
00:01:48.792 net/dpaa: not in enabled drivers build config
00:01:48.792 net/dpaa2: not in enabled drivers build config
00:01:48.792 net/e1000: not in enabled drivers build config
00:01:48.792 net/ena: not in enabled drivers build config
00:01:48.792 net/enetc: not in enabled drivers build config
00:01:48.792 net/enetfec: not in enabled drivers build config
00:01:48.792 net/enic: not in enabled drivers build config
00:01:48.792 net/failsafe: not in enabled drivers build config
00:01:48.792 net/fm10k: not in enabled drivers build config
00:01:48.792 net/gve: not in enabled drivers build config
00:01:48.792 net/hinic: not in enabled drivers build config
00:01:48.792 net/hns3: not in enabled drivers build config
00:01:48.792 net/i40e: not in enabled drivers build config
00:01:48.792 net/iavf: not in enabled drivers build config
00:01:48.792 net/ice: not in enabled drivers build config
00:01:48.792 net/idpf: not in enabled drivers build config
00:01:48.792 net/igc: not in enabled drivers build config
00:01:48.792 net/ionic: not in enabled drivers build config
00:01:48.792 net/ipn3ke: not in enabled drivers build config
00:01:48.792 net/ixgbe: not in enabled drivers build config
00:01:48.792 net/mana: not in enabled drivers build config
00:01:48.792 net/memif: not in enabled drivers build config
00:01:48.792 net/mlx4: not in enabled drivers build config
00:01:48.792 net/mlx5: not in enabled drivers build config
00:01:48.792 net/mvneta: not in enabled drivers build config
00:01:48.792 net/mvpp2: not in enabled drivers build config
00:01:48.792 net/netvsc: not in enabled drivers build config
00:01:48.792 net/nfb: not in enabled drivers build config
00:01:48.792 net/nfp: not in enabled drivers build config
00:01:48.792 net/ngbe: not in enabled drivers build config
00:01:48.792 net/null: not in enabled drivers build config
00:01:48.792 net/octeontx: not in enabled drivers build config
00:01:48.792 net/octeon_ep: not in enabled drivers build config
00:01:48.792 net/pcap: not in enabled drivers build config
00:01:48.792 net/pfe: not in enabled drivers build config
00:01:48.792 net/qede: not in enabled drivers build config
00:01:48.792 net/ring: not in enabled drivers build config
00:01:48.792 net/sfc: not in enabled drivers build config
00:01:48.792 net/softnic: not in enabled drivers build config
00:01:48.792 net/tap: not in enabled drivers build config
00:01:48.792 net/thunderx: not in enabled drivers build config
00:01:48.792 net/txgbe: not in enabled drivers build config
00:01:48.792 net/vdev_netvsc: not in enabled drivers build config
00:01:48.792 net/vhost: not in enabled drivers build config
00:01:48.792 net/virtio: not in enabled drivers build config
00:01:48.792 net/vmxnet3: not in enabled drivers build config
00:01:48.792 raw/*: missing internal dependency, "rawdev"
00:01:48.792 crypto/armv8: not in enabled drivers build config
00:01:48.792 crypto/bcmfs: not in enabled drivers build config
00:01:48.792 crypto/caam_jr: not in enabled drivers build config
00:01:48.792 crypto/ccp: not in enabled drivers build config
00:01:48.792 crypto/cnxk: not in enabled drivers build config
00:01:48.792 crypto/dpaa_sec: not in enabled drivers build config
00:01:48.792 crypto/dpaa2_sec: not in enabled drivers build config
00:01:48.792 crypto/ipsec_mb: not in enabled drivers build config
00:01:48.792 crypto/mlx5: not in enabled drivers build config
00:01:48.792 crypto/mvsam: not in enabled drivers build config
00:01:48.792 crypto/nitrox: not in enabled drivers build config
00:01:48.792 crypto/null: not in enabled drivers build config
00:01:48.792 crypto/octeontx: not in enabled drivers build config
00:01:48.792 crypto/openssl: not in enabled drivers build config
00:01:48.792 crypto/scheduler: not in enabled drivers build config
00:01:48.792 crypto/uadk: not in enabled drivers build config
00:01:48.792 crypto/virtio: not in enabled drivers build config
00:01:48.792 compress/isal: not in enabled drivers build config
00:01:48.792 compress/mlx5: not in enabled drivers build config
00:01:48.792 compress/nitrox: not in enabled drivers build config
00:01:48.792 compress/octeontx: not in enabled drivers build config
00:01:48.792 compress/zlib: not in enabled drivers build config
00:01:48.792 regex/*: missing internal dependency, "regexdev"
00:01:48.792 ml/*: missing internal dependency, "mldev"
00:01:48.792 vdpa/ifc: not in enabled drivers build config
00:01:48.792 vdpa/mlx5: not in enabled drivers build config
00:01:48.792 vdpa/nfp: not in enabled drivers build config
00:01:48.792 vdpa/sfc: not in enabled drivers build config
00:01:48.792 event/*: missing internal dependency, "eventdev"
00:01:48.792 baseband/*: missing internal dependency, "bbdev"
00:01:48.792 gpu/*: missing internal dependency, "gpudev"
00:01:48.792
00:01:48.792
00:01:48.792 Build targets in project: 85
00:01:48.792
00:01:48.792 DPDK 24.03.0
00:01:48.792
00:01:48.792 User defined options
00:01:48.792 buildtype : debug
00:01:48.792 default_library : static
00:01:48.792 libdir : lib
00:01:48.792 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:48.792 b_sanitize : address
00:01:48.792 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:48.792 c_link_args :
00:01:48.792 cpu_instruction_set: native
00:01:48.792 disable_apps : test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump
00:01:48.792 disable_libs : mldev,jobstats,bpf,argparse,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec
00:01:48.792 enable_docs : false
00:01:48.792 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:48.792 enable_kmods : false
00:01:48.792 max_lcores : 128
00:01:48.792 tests : false
00:01:48.792
00:01:48.792 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja
00:01:48.792 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:01:48.792 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:48.792 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:48.792 [3/268] Linking static target lib/librte_log.a
00:01:48.792 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:48.792 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:48.792 [6/268] Linking static target lib/librte_kvargs.a
00:01:48.792 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.792 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:48.792 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:48.792 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:48.792 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:48.792 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:48.792 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:48.792 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:48.792 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:48.792 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:48.792 [17/268] Linking static target lib/librte_telemetry.a
00:01:48.792 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.051 [19/268] Linking target lib/librte_log.so.24.1
00:01:49.051 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:49.051 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:49.051 [22/268] Linking target lib/librte_kvargs.so.24.1
00:01:49.310 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:49.310 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:49.310 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:49.310 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:49.310 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:49.310 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:49.567 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:49.567 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.567 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:49.567 [32/268] Linking target lib/librte_telemetry.so.24.1
00:01:49.567 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:49.567 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:49.567 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:49.825 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:49.825 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:49.825 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:50.083 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:50.083 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:50.083 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:50.083 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:50.083 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:50.083 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:50.083 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:50.083 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:50.342 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:50.342 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:50.342 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:50.342 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:50.601 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:50.601 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:50.601 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:50.601 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:50.601 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:50.859 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:50.859 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:50.859 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:50.859 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:51.117 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:51.117 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:51.117 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:51.117 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:51.117 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:51.117 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:51.117 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:51.376 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:51.376 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:51.376 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:51.634 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:51.634 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:51.634 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:51.634 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:51.634 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:51.634 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:51.893 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:51.893 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.893 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.893 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:51.893 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:51.893 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:52.152 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.152 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.152 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.152 [85/268] Linking static target lib/librte_eal.a 00:01:52.410 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.410 [87/268] Linking static target lib/librte_ring.a 00:01:52.410 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.410 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.410 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.668 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.668 [92/268] Linking static target lib/librte_rcu.a 00:01:52.668 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.668 [94/268] Linking static target lib/librte_mempool.a 00:01:52.668 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.668 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.926 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.926 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.926 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:52.926 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:53.184 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.184 [102/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.442 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.442 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.442 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:53.442 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.442 [107/268] Linking static target lib/librte_mbuf.a 00:01:53.442 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:53.442 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:53.442 [110/268] Linking static target lib/librte_net.a 00:01:53.442 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:53.442 [112/268] Linking static target lib/librte_meter.a 00:01:53.701 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.701 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.959 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.959 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.959 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.959 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 
00:01:53.959 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:54.553 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:54.553 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:54.553 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:54.553 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:54.812 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:54.812 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:54.812 [126/268] Linking static target lib/librte_pci.a 00:01:54.812 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.070 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:55.070 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.070 [130/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.070 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:55.070 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.070 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:55.070 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.070 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:55.329 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:55.329 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:55.329 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:55.329 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:55.329 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:55.329 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:55.329 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:55.329 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:55.329 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.587 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:55.587 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:55.587 [147/268] Linking static target lib/librte_cmdline.a 00:01:55.846 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:55.846 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:55.846 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:56.104 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:56.105 [152/268] Linking static target lib/librte_timer.a 00:01:56.105 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:56.105 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:56.363 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:56.363 [156/268] Linking static target lib/librte_ethdev.a 00:01:56.363 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.363 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:56.363 
[159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:56.622 [160/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.622 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:56.622 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:56.622 [163/268] Linking static target lib/librte_compressdev.a 00:01:56.622 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:56.622 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:56.622 [166/268] Linking static target lib/librte_hash.a 00:01:56.881 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.139 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.139 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.139 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.139 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:57.139 [172/268] Linking static target lib/librte_dmadev.a 00:01:57.139 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.397 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:57.397 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.397 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.655 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.655 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:57.655 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:57.912 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:57.912 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:57.912 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.170 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.171 [184/268] Linking static target lib/librte_cryptodev.a 00:01:58.171 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.171 [186/268] Linking static target lib/librte_power.a 00:01:58.429 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.429 [188/268] Linking static target lib/librte_reorder.a 00:01:58.429 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:58.429 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.687 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:58.687 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:58.687 [193/268] Linking static target lib/librte_security.a 00:01:58.687 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.687 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.945 [196/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.945 [197/268] Linking target lib/librte_eal.so.24.1 00:01:58.945 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:59.202 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.203 [200/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:59.203 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.203 [202/268] Linking target lib/librte_ring.so.24.1 00:01:59.203 [203/268] Linking target lib/librte_meter.so.24.1 00:01:59.203 [204/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:59.461 [205/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:59.461 [206/268] Linking target lib/librte_rcu.so.24.1 00:01:59.461 [207/268] Linking target lib/librte_mempool.so.24.1 00:01:59.461 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.461 [209/268] Linking target lib/librte_pci.so.24.1 00:01:59.461 [210/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.461 [211/268] Linking target lib/librte_timer.so.24.1 00:01:59.461 [212/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:59.461 [213/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:59.461 [214/268] Linking target lib/librte_dmadev.so.24.1 00:01:59.461 [215/268] Linking target lib/librte_mbuf.so.24.1 00:01:59.461 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.461 [217/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:59.719 [218/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:59.719 [219/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:59.719 [220/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:59.719 [221/268] Linking target lib/librte_net.so.24.1 00:01:59.719 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.719 [223/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:59.719 [224/268] Linking target lib/librte_compressdev.so.24.1 00:01:59.719 [225/268] Linking target lib/librte_cryptodev.so.24.1 00:01:59.719 [226/268] Linking target lib/librte_reorder.so.24.1 00:01:59.976 [227/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:59.976 [228/268] Linking target lib/librte_cmdline.so.24.1 00:01:59.976 [229/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:59.976 [230/268] Linking target lib/librte_security.so.24.1 00:01:59.976 [231/268] Linking target lib/librte_hash.so.24.1 00:01:59.976 [232/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:59.976 [233/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:00.247 [234/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.247 [235/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:00.247 [236/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:00.509 [237/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:00.509 [238/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:00.509 [239/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:00.509 [240/268] Linking static target drivers/libtmp_rte_bus_vdev.a 
00:02:00.766 [241/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:00.766 [242/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:00.766 [243/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.766 [244/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.766 [245/268] Linking static target drivers/librte_bus_pci.a 00:02:00.766 [246/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.766 [247/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.766 [248/268] Linking static target drivers/librte_bus_vdev.a 00:02:01.024 [249/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:01.024 [250/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:01.024 [251/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.024 [252/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:01.282 [253/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:01.282 [254/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.282 [255/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.282 [256/268] Linking static target drivers/librte_mempool_ring.a 00:02:01.282 [257/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.282 [258/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:01.282 [259/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:02.220 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.479 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:02.479 [262/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:02.479 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:02.738 [264/268] Linking target lib/librte_power.so.24.1 00:02:06.034 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:06.293 [266/268] Linking static target lib/librte_vhost.a 00:02:08.197 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.197 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:08.197 INFO: autodetecting backend as ninja 00:02:08.197 INFO: calculating backend command to run: /var/spdk/dependencies/pip/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:09.144 CC lib/log/log.o 00:02:09.144 CC lib/ut_mock/mock.o 00:02:09.144 CC lib/log/log_flags.o 00:02:09.144 CC lib/log/log_deprecated.o 00:02:09.402 CC lib/ut/ut.o 00:02:09.402 LIB libspdk_log.a 00:02:09.402 LIB libspdk_ut.a 00:02:09.660 LIB libspdk_ut_mock.a 00:02:09.660 CC lib/ioat/ioat.o 00:02:09.660 CC lib/util/base64.o 00:02:09.660 CC lib/dma/dma.o 00:02:09.660 CC lib/util/bit_array.o 00:02:09.660 CXX lib/trace_parser/trace.o 00:02:09.660 CC lib/util/cpuset.o 00:02:09.660 CC lib/util/crc32.o 00:02:09.660 CC lib/util/crc32c.o 00:02:09.660 CC lib/util/crc16.o 00:02:09.918 CC lib/vfio_user/host/vfio_user_pci.o 00:02:09.918 CC lib/util/crc32_ieee.o 00:02:09.918 CC lib/util/crc64.o 00:02:09.918 CC lib/util/dif.o 00:02:09.918 LIB libspdk_dma.a 00:02:09.918 CC 
lib/vfio_user/host/vfio_user.o 00:02:09.918 CC lib/util/fd.o 00:02:09.918 CC lib/util/fd_group.o 00:02:10.176 CC lib/util/file.o 00:02:10.176 CC lib/util/hexlify.o 00:02:10.176 CC lib/util/iov.o 00:02:10.176 LIB libspdk_ioat.a 00:02:10.176 CC lib/util/math.o 00:02:10.176 CC lib/util/net.o 00:02:10.176 CC lib/util/pipe.o 00:02:10.176 CC lib/util/strerror_tls.o 00:02:10.176 CC lib/util/string.o 00:02:10.176 LIB libspdk_vfio_user.a 00:02:10.176 CC lib/util/uuid.o 00:02:10.434 CC lib/util/xor.o 00:02:10.434 CC lib/util/zipf.o 00:02:10.692 LIB libspdk_util.a 00:02:10.950 CC lib/json/json_parse.o 00:02:10.950 CC lib/json/json_util.o 00:02:10.950 CC lib/json/json_write.o 00:02:10.950 CC lib/rdma_provider/common.o 00:02:10.950 CC lib/vmd/vmd.o 00:02:10.950 CC lib/idxd/idxd.o 00:02:10.950 CC lib/conf/conf.o 00:02:10.950 CC lib/rdma_utils/rdma_utils.o 00:02:10.950 CC lib/env_dpdk/env.o 00:02:11.209 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:11.209 LIB libspdk_trace_parser.a 00:02:11.209 CC lib/vmd/led.o 00:02:11.209 CC lib/idxd/idxd_user.o 00:02:11.209 LIB libspdk_conf.a 00:02:11.209 CC lib/idxd/idxd_kernel.o 00:02:11.209 LIB libspdk_rdma_utils.a 00:02:11.209 CC lib/env_dpdk/memory.o 00:02:11.209 LIB libspdk_json.a 00:02:11.209 CC lib/env_dpdk/pci.o 00:02:11.209 CC lib/env_dpdk/init.o 00:02:11.475 CC lib/env_dpdk/threads.o 00:02:11.475 LIB libspdk_rdma_provider.a 00:02:11.475 CC lib/env_dpdk/pci_ioat.o 00:02:11.475 CC lib/env_dpdk/pci_virtio.o 00:02:11.475 CC lib/jsonrpc/jsonrpc_server.o 00:02:11.475 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:11.475 CC lib/env_dpdk/pci_vmd.o 00:02:11.734 CC lib/env_dpdk/pci_idxd.o 00:02:11.734 LIB libspdk_idxd.a 00:02:11.734 CC lib/env_dpdk/pci_event.o 00:02:11.734 CC lib/env_dpdk/sigbus_handler.o 00:02:11.734 CC lib/env_dpdk/pci_dpdk.o 00:02:11.734 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:11.734 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:11.734 LIB libspdk_vmd.a 00:02:11.734 CC lib/jsonrpc/jsonrpc_client.o 00:02:11.734 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:12.301 LIB libspdk_jsonrpc.a 00:02:12.301 CC lib/rpc/rpc.o 00:02:12.560 LIB libspdk_rpc.a 00:02:12.818 CC lib/keyring/keyring.o 00:02:12.818 CC lib/keyring/keyring_rpc.o 00:02:12.818 CC lib/notify/notify.o 00:02:12.818 CC lib/notify/notify_rpc.o 00:02:12.818 CC lib/trace/trace.o 00:02:12.818 CC lib/trace/trace_rpc.o 00:02:12.818 CC lib/trace/trace_flags.o 00:02:12.818 LIB libspdk_env_dpdk.a 00:02:13.077 LIB libspdk_notify.a 00:02:13.077 LIB libspdk_keyring.a 00:02:13.335 LIB libspdk_trace.a 00:02:13.593 CC lib/thread/thread.o 00:02:13.593 CC lib/thread/iobuf.o 00:02:13.593 CC lib/sock/sock.o 00:02:13.593 CC lib/sock/sock_rpc.o 00:02:14.160 LIB libspdk_sock.a 00:02:14.419 CC lib/nvme/nvme_ctrlr.o 00:02:14.419 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:14.419 CC lib/nvme/nvme_fabric.o 00:02:14.419 CC lib/nvme/nvme_ns_cmd.o 00:02:14.419 CC lib/nvme/nvme_ns.o 00:02:14.419 CC lib/nvme/nvme_qpair.o 00:02:14.419 CC lib/nvme/nvme.o 00:02:14.419 CC lib/nvme/nvme_pcie.o 00:02:14.419 CC lib/nvme/nvme_pcie_common.o 00:02:15.355 CC lib/nvme/nvme_quirks.o 00:02:15.355 CC lib/nvme/nvme_transport.o 00:02:15.355 CC lib/nvme/nvme_discovery.o 00:02:15.355 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:15.614 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:15.614 CC lib/nvme/nvme_tcp.o 00:02:15.614 CC lib/nvme/nvme_opal.o 00:02:15.614 LIB libspdk_thread.a 00:02:15.614 CC lib/nvme/nvme_io_msg.o 00:02:15.873 CC lib/nvme/nvme_poll_group.o 00:02:15.873 CC lib/nvme/nvme_zns.o 00:02:16.131 CC lib/nvme/nvme_stubs.o 00:02:16.131 CC lib/nvme/nvme_auth.o 
00:02:16.390 CC lib/nvme/nvme_cuse.o 00:02:16.390 CC lib/nvme/nvme_rdma.o 00:02:16.390 CC lib/accel/accel.o 00:02:16.390 CC lib/accel/accel_rpc.o 00:02:16.648 CC lib/accel/accel_sw.o 00:02:16.906 CC lib/blob/blobstore.o 00:02:16.906 CC lib/init/json_config.o 00:02:16.906 CC lib/virtio/virtio.o 00:02:17.164 CC lib/virtio/virtio_vhost_user.o 00:02:17.164 CC lib/init/subsystem.o 00:02:17.164 CC lib/init/subsystem_rpc.o 00:02:17.423 CC lib/init/rpc.o 00:02:17.423 CC lib/blob/request.o 00:02:17.423 CC lib/blob/zeroes.o 00:02:17.423 CC lib/blob/blob_bs_dev.o 00:02:17.423 CC lib/virtio/virtio_vfio_user.o 00:02:17.423 CC lib/virtio/virtio_pci.o 00:02:17.423 LIB libspdk_init.a 00:02:17.680 CC lib/event/app.o 00:02:17.680 CC lib/event/app_rpc.o 00:02:17.680 CC lib/event/reactor.o 00:02:17.680 CC lib/event/log_rpc.o 00:02:17.680 CC lib/event/scheduler_static.o 00:02:17.937 LIB libspdk_accel.a 00:02:17.937 LIB libspdk_virtio.a 00:02:17.937 CC lib/bdev/bdev_rpc.o 00:02:17.937 CC lib/bdev/bdev.o 00:02:17.937 CC lib/bdev/scsi_nvme.o 00:02:17.937 CC lib/bdev/bdev_zone.o 00:02:17.937 CC lib/bdev/part.o 00:02:18.195 LIB libspdk_nvme.a 00:02:18.453 LIB libspdk_event.a 00:02:20.989 LIB libspdk_blob.a 00:02:21.248 CC lib/blobfs/tree.o 00:02:21.248 CC lib/blobfs/blobfs.o 00:02:21.248 CC lib/lvol/lvol.o 00:02:21.506 LIB libspdk_bdev.a 00:02:21.764 CC lib/ublk/ublk.o 00:02:21.764 CC lib/ublk/ublk_rpc.o 00:02:21.764 CC lib/ftl/ftl_core.o 00:02:21.764 CC lib/ftl/ftl_layout.o 00:02:21.764 CC lib/ftl/ftl_init.o 00:02:21.764 CC lib/scsi/dev.o 00:02:21.764 CC lib/nvmf/ctrlr.o 00:02:21.764 CC lib/nbd/nbd.o 00:02:22.022 CC lib/ftl/ftl_debug.o 00:02:22.022 CC lib/ftl/ftl_io.o 00:02:22.022 CC lib/scsi/lun.o 00:02:22.022 CC lib/nvmf/ctrlr_discovery.o 00:02:22.281 CC lib/ftl/ftl_sb.o 00:02:22.281 CC lib/ftl/ftl_l2p.o 00:02:22.281 CC lib/ftl/ftl_l2p_flat.o 00:02:22.281 CC lib/nbd/nbd_rpc.o 00:02:22.281 CC lib/scsi/port.o 00:02:22.539 CC lib/ftl/ftl_nv_cache.o 00:02:22.539 CC lib/ftl/ftl_band.o 00:02:22.539 LIB libspdk_ublk.a 00:02:22.539 LIB libspdk_blobfs.a 00:02:22.539 CC lib/ftl/ftl_band_ops.o 00:02:22.539 LIB libspdk_lvol.a 00:02:22.539 CC lib/scsi/scsi.o 00:02:22.539 CC lib/nvmf/ctrlr_bdev.o 00:02:22.539 LIB libspdk_nbd.a 00:02:22.539 CC lib/scsi/scsi_bdev.o 00:02:22.539 CC lib/scsi/scsi_pr.o 00:02:22.539 CC lib/scsi/scsi_rpc.o 00:02:22.797 CC lib/nvmf/subsystem.o 00:02:22.797 CC lib/scsi/task.o 00:02:22.798 CC lib/ftl/ftl_writer.o 00:02:23.055 CC lib/ftl/ftl_rq.o 00:02:23.055 CC lib/ftl/ftl_reloc.o 00:02:23.055 CC lib/ftl/ftl_l2p_cache.o 00:02:23.055 CC lib/nvmf/nvmf.o 00:02:23.055 CC lib/ftl/ftl_p2l.o 00:02:23.055 CC lib/ftl/mngt/ftl_mngt.o 00:02:23.314 LIB libspdk_scsi.a 00:02:23.314 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:23.571 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:23.571 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:23.571 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:23.571 CC lib/iscsi/conn.o 00:02:23.829 CC lib/vhost/vhost.o 00:02:23.829 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:23.829 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:23.829 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:23.829 CC lib/vhost/vhost_rpc.o 00:02:24.087 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:24.087 CC lib/vhost/vhost_scsi.o 00:02:24.087 CC lib/vhost/vhost_blk.o 00:02:24.087 CC lib/nvmf/nvmf_rpc.o 00:02:24.087 CC lib/nvmf/transport.o 00:02:24.087 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:24.345 CC lib/nvmf/tcp.o 00:02:24.345 CC lib/iscsi/init_grp.o 00:02:24.345 CC lib/iscsi/iscsi.o 00:02:24.345 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:24.603 CC lib/vhost/rte_vhost_user.o 
00:02:24.603 CC lib/nvmf/stubs.o 00:02:24.603 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:24.603 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:24.861 CC lib/iscsi/md5.o 00:02:24.861 CC lib/iscsi/param.o 00:02:25.120 CC lib/iscsi/portal_grp.o 00:02:25.120 CC lib/iscsi/tgt_node.o 00:02:25.120 CC lib/ftl/utils/ftl_conf.o 00:02:25.120 CC lib/nvmf/mdns_server.o 00:02:25.120 CC lib/nvmf/rdma.o 00:02:25.379 CC lib/nvmf/auth.o 00:02:25.379 CC lib/iscsi/iscsi_subsystem.o 00:02:25.379 CC lib/ftl/utils/ftl_md.o 00:02:25.379 CC lib/iscsi/iscsi_rpc.o 00:02:25.638 CC lib/iscsi/task.o 00:02:25.638 CC lib/ftl/utils/ftl_mempool.o 00:02:25.897 LIB libspdk_vhost.a 00:02:25.897 CC lib/ftl/utils/ftl_bitmap.o 00:02:25.897 CC lib/ftl/utils/ftl_property.o 00:02:25.897 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:25.897 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:25.897 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:26.156 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:26.156 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:26.156 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:26.156 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:26.156 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:26.414 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:26.414 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:26.414 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:26.414 CC lib/ftl/base/ftl_base_dev.o 00:02:26.414 LIB libspdk_iscsi.a 00:02:26.414 CC lib/ftl/base/ftl_base_bdev.o 00:02:26.414 CC lib/ftl/ftl_trace.o 00:02:26.673 LIB libspdk_ftl.a 00:02:28.081 LIB libspdk_nvmf.a 00:02:28.339 CC module/env_dpdk/env_dpdk_rpc.o 00:02:28.598 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:28.598 CC module/sock/posix/posix.o 00:02:28.598 CC module/scheduler/gscheduler/gscheduler.o 00:02:28.598 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:28.598 CC module/accel/ioat/accel_ioat.o 00:02:28.598 CC module/accel/error/accel_error.o 00:02:28.598 CC module/keyring/file/keyring.o 00:02:28.598 CC module/blob/bdev/blob_bdev.o 00:02:28.598 CC module/keyring/linux/keyring.o 00:02:28.598 LIB libspdk_env_dpdk_rpc.a 00:02:28.598 CC module/accel/error/accel_error_rpc.o 00:02:28.598 CC module/keyring/linux/keyring_rpc.o 00:02:28.598 LIB libspdk_scheduler_dpdk_governor.a 00:02:28.598 CC module/keyring/file/keyring_rpc.o 00:02:28.598 LIB libspdk_scheduler_gscheduler.a 00:02:28.855 CC module/accel/ioat/accel_ioat_rpc.o 00:02:28.855 LIB libspdk_scheduler_dynamic.a 00:02:28.855 LIB libspdk_accel_error.a 00:02:28.855 LIB libspdk_keyring_linux.a 00:02:28.855 LIB libspdk_blob_bdev.a 00:02:28.855 LIB libspdk_keyring_file.a 00:02:28.855 LIB libspdk_accel_ioat.a 00:02:28.855 CC module/accel/dsa/accel_dsa.o 00:02:28.855 CC module/accel/dsa/accel_dsa_rpc.o 00:02:28.855 CC module/accel/iaa/accel_iaa_rpc.o 00:02:28.855 CC module/accel/iaa/accel_iaa.o 00:02:29.113 CC module/bdev/gpt/gpt.o 00:02:29.113 CC module/bdev/lvol/vbdev_lvol.o 00:02:29.113 CC module/blobfs/bdev/blobfs_bdev.o 00:02:29.113 CC module/bdev/delay/vbdev_delay.o 00:02:29.113 CC module/bdev/error/vbdev_error.o 00:02:29.113 LIB libspdk_accel_iaa.a 00:02:29.113 CC module/bdev/malloc/bdev_malloc.o 00:02:29.113 CC module/bdev/null/bdev_null.o 00:02:29.113 LIB libspdk_accel_dsa.a 00:02:29.113 CC module/bdev/null/bdev_null_rpc.o 00:02:29.371 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:29.371 CC module/bdev/gpt/vbdev_gpt.o 00:02:29.371 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:29.371 CC module/bdev/error/vbdev_error_rpc.o 00:02:29.371 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:29.371 LIB libspdk_sock_posix.a 00:02:29.371 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:02:29.629 LIB libspdk_bdev_null.a 00:02:29.629 LIB libspdk_blobfs_bdev.a 00:02:29.629 LIB libspdk_bdev_error.a 00:02:29.629 CC module/bdev/nvme/bdev_nvme.o 00:02:29.629 LIB libspdk_bdev_delay.a 00:02:29.629 LIB libspdk_bdev_malloc.a 00:02:29.629 LIB libspdk_bdev_gpt.a 00:02:29.629 CC module/bdev/passthru/vbdev_passthru.o 00:02:29.629 CC module/bdev/raid/bdev_raid.o 00:02:29.887 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:29.887 CC module/bdev/split/vbdev_split.o 00:02:29.887 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:29.887 CC module/bdev/aio/bdev_aio.o 00:02:29.887 CC module/bdev/ftl/bdev_ftl.o 00:02:29.887 LIB libspdk_bdev_lvol.a 00:02:29.887 CC module/bdev/iscsi/bdev_iscsi.o 00:02:29.887 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:29.887 CC module/bdev/aio/bdev_aio_rpc.o 00:02:30.145 CC module/bdev/split/vbdev_split_rpc.o 00:02:30.145 LIB libspdk_bdev_passthru.a 00:02:30.145 CC module/bdev/raid/bdev_raid_rpc.o 00:02:30.145 CC module/bdev/raid/bdev_raid_sb.o 00:02:30.145 CC module/bdev/raid/raid0.o 00:02:30.145 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:30.403 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:30.403 LIB libspdk_bdev_split.a 00:02:30.403 LIB libspdk_bdev_aio.a 00:02:30.403 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:30.403 LIB libspdk_bdev_iscsi.a 00:02:30.403 CC module/bdev/raid/raid1.o 00:02:30.403 LIB libspdk_bdev_zone_block.a 00:02:30.403 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:30.404 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:30.404 CC module/bdev/nvme/nvme_rpc.o 00:02:30.404 CC module/bdev/raid/concat.o 00:02:30.404 CC module/bdev/raid/raid5f.o 00:02:30.404 LIB libspdk_bdev_ftl.a 00:02:30.662 CC module/bdev/nvme/bdev_mdns_client.o 00:02:30.662 CC module/bdev/nvme/vbdev_opal.o 00:02:30.662 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:30.662 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:30.662 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:31.228 LIB libspdk_bdev_virtio.a 00:02:31.229 LIB libspdk_bdev_raid.a 00:02:32.606 LIB libspdk_bdev_nvme.a 00:02:33.173 CC module/event/subsystems/vmd/vmd.o 00:02:33.173 CC module/event/subsystems/iobuf/iobuf.o 00:02:33.173 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:33.173 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:33.173 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:33.173 CC module/event/subsystems/keyring/keyring.o 00:02:33.173 CC module/event/subsystems/sock/sock.o 00:02:33.173 CC module/event/subsystems/scheduler/scheduler.o 00:02:33.173 LIB libspdk_event_vhost_blk.a 00:02:33.173 LIB libspdk_event_keyring.a 00:02:33.173 LIB libspdk_event_sock.a 00:02:33.173 LIB libspdk_event_vmd.a 00:02:33.173 LIB libspdk_event_scheduler.a 00:02:33.173 LIB libspdk_event_iobuf.a 00:02:33.432 CC module/event/subsystems/accel/accel.o 00:02:33.690 LIB libspdk_event_accel.a 00:02:33.949 CC module/event/subsystems/bdev/bdev.o 00:02:34.208 LIB libspdk_event_bdev.a 00:02:34.466 CC module/event/subsystems/scsi/scsi.o 00:02:34.466 CC module/event/subsystems/ublk/ublk.o 00:02:34.466 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:34.466 CC module/event/subsystems/nbd/nbd.o 00:02:34.466 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:34.724 LIB libspdk_event_ublk.a 00:02:34.724 LIB libspdk_event_nbd.a 00:02:34.724 LIB libspdk_event_scsi.a 00:02:34.725 LIB libspdk_event_nvmf.a 00:02:34.983 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:34.983 CC module/event/subsystems/iscsi/iscsi.o 00:02:34.983 LIB libspdk_event_vhost_scsi.a 00:02:34.983 LIB libspdk_event_iscsi.a 
00:02:35.241 CC test/rpc_client/rpc_client_test.o 00:02:35.241 TEST_HEADER include/spdk/accel.h 00:02:35.241 TEST_HEADER include/spdk/accel_module.h 00:02:35.241 TEST_HEADER include/spdk/assert.h 00:02:35.241 TEST_HEADER include/spdk/barrier.h 00:02:35.241 CXX app/trace/trace.o 00:02:35.241 TEST_HEADER include/spdk/base64.h 00:02:35.241 TEST_HEADER include/spdk/bdev.h 00:02:35.241 TEST_HEADER include/spdk/bdev_module.h 00:02:35.241 TEST_HEADER include/spdk/bdev_zone.h 00:02:35.241 TEST_HEADER include/spdk/bit_array.h 00:02:35.241 TEST_HEADER include/spdk/bit_pool.h 00:02:35.241 TEST_HEADER include/spdk/blob.h 00:02:35.241 TEST_HEADER include/spdk/blob_bdev.h 00:02:35.499 TEST_HEADER include/spdk/blobfs.h 00:02:35.499 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:35.499 TEST_HEADER include/spdk/conf.h 00:02:35.499 TEST_HEADER include/spdk/config.h 00:02:35.499 TEST_HEADER include/spdk/cpuset.h 00:02:35.499 TEST_HEADER include/spdk/crc16.h 00:02:35.499 TEST_HEADER include/spdk/crc32.h 00:02:35.499 TEST_HEADER include/spdk/crc64.h 00:02:35.499 TEST_HEADER include/spdk/dif.h 00:02:35.499 TEST_HEADER include/spdk/dma.h 00:02:35.499 TEST_HEADER include/spdk/endian.h 00:02:35.499 TEST_HEADER include/spdk/env.h 00:02:35.499 TEST_HEADER include/spdk/env_dpdk.h 00:02:35.499 TEST_HEADER include/spdk/event.h 00:02:35.499 TEST_HEADER include/spdk/fd.h 00:02:35.499 TEST_HEADER include/spdk/fd_group.h 00:02:35.499 TEST_HEADER include/spdk/file.h 00:02:35.499 TEST_HEADER include/spdk/ftl.h 00:02:35.499 TEST_HEADER include/spdk/gpt_spec.h 00:02:35.499 TEST_HEADER include/spdk/hexlify.h 00:02:35.499 TEST_HEADER include/spdk/histogram_data.h 00:02:35.499 CC examples/ioat/perf/perf.o 00:02:35.499 CC examples/util/zipf/zipf.o 00:02:35.499 TEST_HEADER include/spdk/idxd.h 00:02:35.499 TEST_HEADER include/spdk/idxd_spec.h 00:02:35.499 CC test/thread/poller_perf/poller_perf.o 00:02:35.499 TEST_HEADER include/spdk/init.h 00:02:35.499 TEST_HEADER include/spdk/ioat.h 00:02:35.499 TEST_HEADER include/spdk/ioat_spec.h 00:02:35.499 TEST_HEADER include/spdk/iscsi_spec.h 00:02:35.499 TEST_HEADER include/spdk/json.h 00:02:35.499 TEST_HEADER include/spdk/jsonrpc.h 00:02:35.499 TEST_HEADER include/spdk/keyring.h 00:02:35.499 TEST_HEADER include/spdk/keyring_module.h 00:02:35.499 TEST_HEADER include/spdk/likely.h 00:02:35.499 TEST_HEADER include/spdk/log.h 00:02:35.499 TEST_HEADER include/spdk/lvol.h 00:02:35.499 TEST_HEADER include/spdk/memory.h 00:02:35.499 TEST_HEADER include/spdk/mmio.h 00:02:35.499 TEST_HEADER include/spdk/nbd.h 00:02:35.499 TEST_HEADER include/spdk/net.h 00:02:35.499 TEST_HEADER include/spdk/notify.h 00:02:35.499 TEST_HEADER include/spdk/nvme.h 00:02:35.499 TEST_HEADER include/spdk/nvme_intel.h 00:02:35.500 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:35.500 CC test/dma/test_dma/test_dma.o 00:02:35.500 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:35.500 TEST_HEADER include/spdk/nvme_spec.h 00:02:35.500 TEST_HEADER include/spdk/nvme_zns.h 00:02:35.500 CC test/app/bdev_svc/bdev_svc.o 00:02:35.500 TEST_HEADER include/spdk/nvmf.h 00:02:35.500 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:35.500 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:35.500 TEST_HEADER include/spdk/nvmf_spec.h 00:02:35.500 TEST_HEADER include/spdk/nvmf_transport.h 00:02:35.500 TEST_HEADER include/spdk/opal.h 00:02:35.500 TEST_HEADER include/spdk/opal_spec.h 00:02:35.500 TEST_HEADER include/spdk/pci_ids.h 00:02:35.500 TEST_HEADER include/spdk/pipe.h 00:02:35.500 TEST_HEADER include/spdk/queue.h 00:02:35.500 CC 
test/env/mem_callbacks/mem_callbacks.o 00:02:35.500 TEST_HEADER include/spdk/reduce.h 00:02:35.500 TEST_HEADER include/spdk/rpc.h 00:02:35.500 TEST_HEADER include/spdk/scheduler.h 00:02:35.500 TEST_HEADER include/spdk/scsi.h 00:02:35.500 TEST_HEADER include/spdk/scsi_spec.h 00:02:35.500 TEST_HEADER include/spdk/sock.h 00:02:35.500 TEST_HEADER include/spdk/stdinc.h 00:02:35.500 TEST_HEADER include/spdk/string.h 00:02:35.500 TEST_HEADER include/spdk/thread.h 00:02:35.500 TEST_HEADER include/spdk/trace.h 00:02:35.500 TEST_HEADER include/spdk/trace_parser.h 00:02:35.500 TEST_HEADER include/spdk/tree.h 00:02:35.500 TEST_HEADER include/spdk/ublk.h 00:02:35.500 TEST_HEADER include/spdk/util.h 00:02:35.500 TEST_HEADER include/spdk/uuid.h 00:02:35.500 TEST_HEADER include/spdk/version.h 00:02:35.500 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:35.500 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:35.500 TEST_HEADER include/spdk/vhost.h 00:02:35.500 TEST_HEADER include/spdk/vmd.h 00:02:35.500 TEST_HEADER include/spdk/xor.h 00:02:35.500 TEST_HEADER include/spdk/zipf.h 00:02:35.500 CXX test/cpp_headers/accel.o 00:02:35.500 LINK rpc_client_test 00:02:35.500 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:02:35.758 LINK poller_perf 00:02:35.758 LINK zipf 00:02:35.758 LINK ioat_perf 00:02:35.758 LINK bdev_svc 00:02:35.758 CXX test/cpp_headers/accel_module.o 00:02:35.758 LINK histogram_ut 00:02:35.758 LINK spdk_trace 00:02:36.016 LINK test_dma 00:02:36.016 CXX test/cpp_headers/assert.o 00:02:36.016 CXX test/cpp_headers/barrier.o 00:02:36.274 CC test/unit/lib/log/log.c/log_ut.o 00:02:36.274 CC app/trace_record/trace_record.o 00:02:36.274 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:36.274 CC examples/ioat/verify/verify.o 00:02:36.274 LINK mem_callbacks 00:02:36.274 CXX test/cpp_headers/base64.o 00:02:36.274 CC test/app/histogram_perf/histogram_perf.o 00:02:36.274 CC test/thread/lock/spdk_lock.o 00:02:36.531 CXX test/cpp_headers/bdev.o 00:02:36.532 LINK histogram_perf 00:02:36.532 LINK verify 00:02:36.532 LINK log_ut 00:02:36.532 LINK spdk_trace_record 00:02:36.790 CC test/env/vtophys/vtophys.o 00:02:36.790 CXX test/cpp_headers/bdev_module.o 00:02:36.790 LINK nvme_fuzz 00:02:36.790 LINK vtophys 00:02:37.048 CC test/unit/lib/rdma/common.c/common_ut.o 00:02:37.048 CXX test/cpp_headers/bdev_zone.o 00:02:37.048 CC app/nvmf_tgt/nvmf_main.o 00:02:37.048 CC test/app/jsoncat/jsoncat.o 00:02:37.048 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:37.305 CXX test/cpp_headers/bit_array.o 00:02:37.305 LINK jsoncat 00:02:37.305 LINK nvmf_tgt 00:02:37.305 CXX test/cpp_headers/bit_pool.o 00:02:37.305 LINK interrupt_tgt 00:02:37.564 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:37.564 CXX test/cpp_headers/blob.o 00:02:37.564 CC test/env/memory/memory_ut.o 00:02:37.564 LINK env_dpdk_post_init 00:02:37.821 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:37.821 CXX test/cpp_headers/blob_bdev.o 00:02:37.821 CXX test/cpp_headers/blobfs.o 00:02:37.821 LINK common_ut 00:02:37.821 CC test/env/pci/pci_ut.o 00:02:38.079 CXX test/cpp_headers/blobfs_bdev.o 00:02:38.079 CC test/app/stub/stub.o 00:02:38.079 CC test/unit/lib/util/base64.c/base64_ut.o 00:02:38.339 CXX test/cpp_headers/conf.o 00:02:38.339 LINK stub 00:02:38.339 LINK pci_ut 00:02:38.339 CXX test/cpp_headers/config.o 00:02:38.339 LINK base64_ut 00:02:38.339 CXX test/cpp_headers/cpuset.o 00:02:38.619 LINK spdk_lock 00:02:38.619 CC examples/sock/hello_world/hello_sock.o 00:02:38.619 CXX test/cpp_headers/crc16.o 00:02:38.619 CC 
examples/thread/thread/thread_ex.o 00:02:38.619 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:02:38.884 CXX test/cpp_headers/crc32.o 00:02:38.884 CXX test/cpp_headers/crc64.o 00:02:38.884 LINK memory_ut 00:02:38.884 LINK thread 00:02:38.884 CC examples/vmd/lsvmd/lsvmd.o 00:02:38.884 LINK hello_sock 00:02:39.142 CXX test/cpp_headers/dif.o 00:02:39.142 LINK lsvmd 00:02:39.142 CXX test/cpp_headers/dma.o 00:02:39.142 CXX test/cpp_headers/endian.o 00:02:39.142 CC examples/idxd/perf/perf.o 00:02:39.400 CXX test/cpp_headers/env.o 00:02:39.400 CC app/iscsi_tgt/iscsi_tgt.o 00:02:39.400 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:02:39.400 CC test/unit/lib/dma/dma.c/dma_ut.o 00:02:39.659 CXX test/cpp_headers/env_dpdk.o 00:02:39.659 LINK bit_array_ut 00:02:39.659 CXX test/cpp_headers/event.o 00:02:39.659 LINK iscsi_tgt 00:02:39.659 LINK idxd_perf 00:02:39.918 CXX test/cpp_headers/fd.o 00:02:39.918 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:02:39.918 CC examples/vmd/led/led.o 00:02:39.918 CXX test/cpp_headers/fd_group.o 00:02:40.176 LINK led 00:02:40.176 LINK iscsi_fuzz 00:02:40.176 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:02:40.176 CXX test/cpp_headers/file.o 00:02:40.176 CXX test/cpp_headers/ftl.o 00:02:40.176 LINK cpuset_ut 00:02:40.176 LINK crc16_ut 00:02:40.176 LINK ioat_ut 00:02:40.435 CXX test/cpp_headers/gpt_spec.o 00:02:40.435 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:02:40.435 CXX test/cpp_headers/hexlify.o 00:02:40.435 CXX test/cpp_headers/histogram_data.o 00:02:40.435 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:02:40.694 LINK crc32_ieee_ut 00:02:40.694 LINK dma_ut 00:02:40.694 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:02:40.694 CXX test/cpp_headers/idxd.o 00:02:40.694 LINK crc32c_ut 00:02:40.694 CXX test/cpp_headers/idxd_spec.o 00:02:40.694 CXX test/cpp_headers/init.o 00:02:40.694 CC test/unit/lib/util/dif.c/dif_ut.o 00:02:40.694 CXX test/cpp_headers/ioat.o 00:02:40.694 LINK crc64_ut 00:02:40.952 CXX test/cpp_headers/ioat_spec.o 00:02:40.952 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:40.952 CXX test/cpp_headers/iscsi_spec.o 00:02:40.952 CC examples/nvme/hello_world/hello_world.o 00:02:40.952 CC examples/nvme/reconnect/reconnect.o 00:02:40.952 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:41.211 CC examples/accel/perf/accel_perf.o 00:02:41.211 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:41.211 CC examples/blob/hello_world/hello_blob.o 00:02:41.211 CC examples/blob/cli/blobcli.o 00:02:41.211 CXX test/cpp_headers/json.o 00:02:41.211 LINK hello_world 00:02:41.469 CXX test/cpp_headers/jsonrpc.o 00:02:41.469 LINK hello_blob 00:02:41.469 LINK reconnect 00:02:41.469 CXX test/cpp_headers/keyring.o 00:02:41.728 LINK vhost_fuzz 00:02:41.728 LINK accel_perf 00:02:41.728 CC app/spdk_tgt/spdk_tgt.o 00:02:41.728 LINK nvme_manage 00:02:41.728 CXX test/cpp_headers/keyring_module.o 00:02:41.728 LINK blobcli 00:02:41.987 CXX test/cpp_headers/likely.o 00:02:41.987 LINK spdk_tgt 00:02:42.245 CC examples/nvme/arbitration/arbitration.o 00:02:42.245 CXX test/cpp_headers/log.o 00:02:42.245 LINK dif_ut 00:02:42.245 CXX test/cpp_headers/lvol.o 00:02:42.503 CC app/spdk_lspci/spdk_lspci.o 00:02:42.503 CC test/unit/lib/util/file.c/file_ut.o 00:02:42.503 CC test/unit/lib/util/iov.c/iov_ut.o 00:02:42.762 CXX test/cpp_headers/memory.o 00:02:42.762 LINK file_ut 00:02:42.762 LINK spdk_lspci 00:02:42.762 CC test/unit/lib/util/math.c/math_ut.o 00:02:42.762 LINK arbitration 00:02:42.762 CC test/unit/lib/util/net.c/net_ut.o 00:02:43.020 CXX test/cpp_headers/mmio.o 00:02:43.020 LINK iov_ut 
00:02:43.278 LINK math_ut 00:02:43.278 LINK net_ut 00:02:43.278 CC examples/bdev/hello_world/hello_bdev.o 00:02:43.278 CXX test/cpp_headers/nbd.o 00:02:43.278 CXX test/cpp_headers/net.o 00:02:43.536 CC examples/bdev/bdevperf/bdevperf.o 00:02:43.536 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:02:43.536 CC test/unit/lib/util/string.c/string_ut.o 00:02:43.536 CC test/nvme/aer/aer.o 00:02:43.536 CXX test/cpp_headers/notify.o 00:02:43.536 LINK hello_bdev 00:02:43.536 CXX test/cpp_headers/nvme.o 00:02:43.794 CXX test/cpp_headers/nvme_intel.o 00:02:43.794 CC examples/nvme/hotplug/hotplug.o 00:02:43.794 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:43.794 CXX test/cpp_headers/nvme_ocssd.o 00:02:43.794 LINK string_ut 00:02:43.794 LINK aer 00:02:43.794 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:44.051 LINK hotplug 00:02:44.051 LINK cmb_copy 00:02:44.051 CXX test/cpp_headers/nvme_spec.o 00:02:44.051 CC examples/nvme/abort/abort.o 00:02:44.051 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:44.308 CXX test/cpp_headers/nvme_zns.o 00:02:44.308 CC app/spdk_nvme_perf/perf.o 00:02:44.308 LINK pipe_ut 00:02:44.308 LINK pmr_persistence 00:02:44.566 LINK bdevperf 00:02:44.566 CXX test/cpp_headers/nvmf.o 00:02:44.566 CC test/unit/lib/util/xor.c/xor_ut.o 00:02:44.566 LINK abort 00:02:44.824 CXX test/cpp_headers/nvmf_cmd.o 00:02:44.824 CC test/nvme/reset/reset.o 00:02:44.824 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:44.824 CXX test/cpp_headers/nvmf_spec.o 00:02:45.082 CXX test/cpp_headers/nvmf_transport.o 00:02:45.082 CXX test/cpp_headers/opal.o 00:02:45.082 CC test/nvme/sgl/sgl.o 00:02:45.082 LINK reset 00:02:45.338 CXX test/cpp_headers/opal_spec.o 00:02:45.338 LINK xor_ut 00:02:45.338 CXX test/cpp_headers/pci_ids.o 00:02:45.338 CXX test/cpp_headers/pipe.o 00:02:45.338 CC test/nvme/e2edp/nvme_dp.o 00:02:45.596 CC app/spdk_nvme_identify/identify.o 00:02:45.596 LINK sgl 00:02:45.596 LINK spdk_nvme_perf 00:02:45.596 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:02:45.596 CXX test/cpp_headers/queue.o 00:02:45.596 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:02:45.596 CXX test/cpp_headers/reduce.o 00:02:45.596 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:02:45.854 LINK nvme_dp 00:02:45.854 CXX test/cpp_headers/rpc.o 00:02:46.113 CXX test/cpp_headers/scheduler.o 00:02:46.113 CC test/nvme/overhead/overhead.o 00:02:46.113 CXX test/cpp_headers/scsi.o 00:02:46.372 LINK pci_event_ut 00:02:46.372 CXX test/cpp_headers/scsi_spec.o 00:02:46.372 LINK json_util_ut 00:02:46.372 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:02:46.372 CC examples/nvmf/nvmf/nvmf.o 00:02:46.372 CXX test/cpp_headers/sock.o 00:02:46.372 LINK overhead 00:02:46.630 CXX test/cpp_headers/stdinc.o 00:02:46.631 CXX test/cpp_headers/string.o 00:02:46.631 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:02:46.631 LINK spdk_nvme_identify 00:02:46.631 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:02:46.631 CXX test/cpp_headers/thread.o 00:02:46.890 CC app/spdk_nvme_discover/discovery_aer.o 00:02:46.890 CC test/nvme/err_injection/err_injection.o 00:02:46.890 LINK nvmf 00:02:46.890 CXX test/cpp_headers/trace.o 00:02:47.148 LINK spdk_nvme_discover 00:02:47.148 LINK err_injection 00:02:47.148 CXX test/cpp_headers/trace_parser.o 00:02:47.405 CC app/spdk_top/spdk_top.o 00:02:47.405 CXX test/cpp_headers/tree.o 00:02:47.406 CXX test/cpp_headers/ublk.o 00:02:47.406 LINK idxd_user_ut 00:02:47.406 LINK json_write_ut 00:02:47.663 CXX test/cpp_headers/util.o 00:02:47.663 CC app/vhost/vhost.o 00:02:47.663 CC test/nvme/startup/startup.o 
00:02:47.920 CC app/spdk_dd/spdk_dd.o 00:02:47.920 CXX test/cpp_headers/uuid.o 00:02:47.920 LINK vhost 00:02:47.920 CC test/nvme/reserve/reserve.o 00:02:47.920 LINK startup 00:02:47.920 CXX test/cpp_headers/version.o 00:02:47.920 CC app/fio/nvme/fio_plugin.o 00:02:48.178 CXX test/cpp_headers/vfio_user_pci.o 00:02:48.178 LINK idxd_ut 00:02:48.178 LINK reserve 00:02:48.178 CXX test/cpp_headers/vfio_user_spec.o 00:02:48.436 CXX test/cpp_headers/vhost.o 00:02:48.436 LINK spdk_dd 00:02:48.436 CXX test/cpp_headers/vmd.o 00:02:48.709 CC app/fio/bdev/fio_plugin.o 00:02:48.709 LINK spdk_top 00:02:48.709 CXX test/cpp_headers/xor.o 00:02:48.709 LINK json_parse_ut 00:02:48.975 LINK spdk_nvme 00:02:48.975 CXX test/cpp_headers/zipf.o 00:02:48.975 CC test/nvme/simple_copy/simple_copy.o 00:02:48.975 CC test/nvme/connect_stress/connect_stress.o 00:02:49.234 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:02:49.234 CC test/nvme/boot_partition/boot_partition.o 00:02:49.234 CC test/nvme/compliance/nvme_compliance.o 00:02:49.234 CC test/nvme/fused_ordering/fused_ordering.o 00:02:49.234 LINK simple_copy 00:02:49.234 LINK spdk_bdev 00:02:49.492 LINK boot_partition 00:02:49.492 LINK connect_stress 00:02:49.492 LINK fused_ordering 00:02:49.750 LINK jsonrpc_server_ut 00:02:49.750 LINK nvme_compliance 00:02:49.750 CC test/accel/dif/dif.o 00:02:50.009 CC test/blobfs/mkfs/mkfs.o 00:02:50.009 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:02:50.267 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:50.267 CC test/nvme/fdp/fdp.o 00:02:50.267 CC test/nvme/cuse/cuse.o 00:02:50.525 LINK mkfs 00:02:50.525 LINK dif 00:02:50.525 LINK doorbell_aers 00:02:50.525 CC test/event/event_perf/event_perf.o 00:02:50.525 CC test/event/reactor/reactor.o 00:02:50.783 CC test/lvol/esnap/esnap.o 00:02:50.783 LINK fdp 00:02:50.783 LINK event_perf 00:02:50.783 LINK reactor 00:02:51.349 CC test/event/reactor_perf/reactor_perf.o 00:02:51.349 LINK rpc_ut 00:02:51.349 CC test/event/app_repeat/app_repeat.o 00:02:51.349 LINK reactor_perf 00:02:51.607 CC test/event/scheduler/scheduler.o 00:02:51.607 CC test/unit/lib/thread/thread.c/thread_ut.o 00:02:51.607 LINK app_repeat 00:02:51.865 CC test/unit/lib/notify/notify.c/notify_ut.o 00:02:51.865 CC test/unit/lib/sock/sock.c/sock_ut.o 00:02:51.865 LINK scheduler 00:02:52.123 LINK cuse 00:02:52.123 CC test/unit/lib/sock/posix.c/posix_ut.o 00:02:52.381 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:02:52.640 CC test/bdev/bdevio/bdevio.o 00:02:52.640 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:02:52.640 LINK notify_ut 00:02:52.898 LINK bdevio 00:02:53.156 LINK keyring_ut 00:02:53.722 LINK posix_ut 00:02:53.722 LINK iobuf_ut 00:02:54.288 LINK sock_ut 00:02:54.546 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:02:54.546 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:02:54.546 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:02:54.546 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:02:54.546 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:02:54.546 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:02:54.546 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:02:54.804 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:02:55.062 LINK thread_ut 00:02:55.320 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:02:56.254 LINK nvme_ctrlr_ocssd_cmd_ut 00:02:56.254 LINK nvme_ns_ut 00:02:56.512 LINK nvme_ctrlr_cmd_ut 00:02:56.512 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:02:56.512 CC 
test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:02:56.770 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:02:56.770 LINK nvme_poll_group_ut 00:02:56.770 LINK nvme_ut 00:02:56.770 LINK nvme_ns_ocssd_cmd_ut 00:02:57.029 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:02:57.029 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:02:57.287 LINK nvme_quirks_ut 00:02:57.287 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:02:57.287 LINK nvme_ns_cmd_ut 00:02:57.545 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:02:57.545 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:02:57.545 LINK nvme_pcie_ut 00:02:58.109 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:02:58.109 LINK esnap 00:02:58.367 LINK nvme_qpair_ut 00:02:58.626 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:02:58.626 LINK nvme_io_msg_ut 00:02:58.626 LINK nvme_transport_ut 00:02:58.626 CC test/unit/lib/accel/accel.c/accel_ut.o 00:02:58.884 LINK nvme_opal_ut 00:02:59.143 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:02:59.143 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:02:59.143 LINK nvme_fabric_ut 00:02:59.143 LINK nvme_ctrlr_ut 00:02:59.143 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:02:59.402 LINK nvme_pcie_common_ut 00:02:59.402 CC test/unit/lib/blob/blob.c/blob_ut.o 00:02:59.968 LINK rpc_ut 00:03:00.227 LINK subsystem_ut 00:03:00.227 LINK blob_bdev_ut 00:03:00.485 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:00.485 CC test/unit/lib/event/app.c/app_ut.o 00:03:00.743 LINK nvme_tcp_ut 00:03:01.308 LINK nvme_cuse_ut 00:03:01.308 LINK nvme_rdma_ut 00:03:01.566 LINK app_ut 00:03:02.132 LINK reactor_ut 00:03:02.390 LINK accel_ut 00:03:02.648 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:02.648 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:02.648 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:02.648 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:02.906 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:02.906 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:02.906 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:02.906 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:02.906 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:03.164 LINK scsi_nvme_ut 00:03:03.164 LINK bdev_zone_ut 00:03:03.441 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:03.441 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:03.698 LINK gpt_ut 00:03:03.956 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:03.956 LINK vbdev_zone_block_ut 00:03:04.214 LINK bdev_raid_sb_ut 00:03:04.214 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:04.780 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:04.780 LINK vbdev_lvol_ut 00:03:04.780 LINK concat_ut 00:03:05.038 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:05.295 LINK raid1_ut 00:03:05.865 LINK bdev_raid_ut 00:03:05.865 LINK raid0_ut 00:03:06.796 LINK raid5f_ut 00:03:07.729 LINK part_ut 00:03:08.293 LINK bdev_ut 00:03:09.227 LINK blob_ut 00:03:09.794 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:09.794 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:09.794 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:09.794 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:09.794 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:10.052 LINK bdev_nvme_ut 00:03:10.052 LINK tree_ut 00:03:10.052 LINK blobfs_bdev_ut 00:03:10.310 LINK bdev_ut 00:03:10.568 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:10.826 CC 
test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:10.826 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:10.826 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:10.826 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:10.826 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:10.826 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:11.393 LINK ftl_l2p_ut 00:03:11.393 LINK dev_ut 00:03:11.651 LINK blobfs_sync_ut 00:03:11.651 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:11.651 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:11.651 LINK blobfs_async_ut 00:03:12.217 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:12.217 LINK ctrlr_bdev_ut 00:03:12.217 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:12.474 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:12.474 LINK lvol_ut 00:03:12.733 LINK scsi_ut 00:03:12.733 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:13.018 LINK lun_ut 00:03:13.018 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:13.276 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:13.534 LINK ctrlr_discovery_ut 00:03:13.534 LINK scsi_pr_ut 00:03:13.792 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:13.792 LINK ftl_band_ut 00:03:13.792 LINK scsi_bdev_ut 00:03:13.792 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:03:14.051 LINK subsystem_ut 00:03:14.051 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:14.051 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:14.309 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:14.309 LINK ftl_bitmap_ut 00:03:14.568 LINK ftl_io_ut 00:03:14.568 LINK nvmf_ut 00:03:14.568 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:14.827 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:14.827 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:14.827 LINK ftl_mempool_ut 00:03:15.085 LINK ctrlr_ut 00:03:15.085 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:15.344 LINK auth_ut 00:03:15.344 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:15.602 LINK ftl_p2l_ut 00:03:15.602 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:15.602 LINK ftl_mngt_ut 00:03:15.861 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:15.861 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:16.120 LINK init_grp_ut 00:03:16.379 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:16.638 LINK param_ut 00:03:16.638 LINK tcp_ut 00:03:16.638 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:16.896 LINK ftl_layout_upgrade_ut 00:03:16.896 LINK ftl_sb_ut 00:03:17.155 LINK conn_ut 00:03:18.094 LINK portal_grp_ut 00:03:18.353 LINK tgt_node_ut 00:03:18.612 LINK rdma_ut 00:03:18.612 LINK transport_ut 00:03:18.871 LINK iscsi_ut 00:03:19.438 LINK vhost_ut 00:03:19.698 00:03:19.698 real 2m15.460s 00:03:19.698 user 11m7.900s 00:03:19.698 sys 2m24.873s 00:03:19.698 ************************************ 00:03:19.698 END TEST unittest_build 00:03:19.698 ************************************ 00:03:19.698 23:47:15 unittest_build -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:19.698 23:47:15 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:19.698 23:47:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:19.698 23:47:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:19.698 23:47:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:19.698 23:47:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.698 23:47:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:19.698 23:47:15 -- pm/common@44 -- $ pid=2419 00:03:19.698 23:47:15 -- 
pm/common@50 -- $ kill -TERM 2419 00:03:19.698 23:47:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.698 23:47:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:19.698 23:47:15 -- pm/common@44 -- $ pid=2421 00:03:19.698 23:47:15 -- pm/common@50 -- $ kill -TERM 2421 00:03:19.698 23:47:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:19.698 23:47:15 -- nvmf/common.sh@7 -- # uname -s 00:03:19.698 23:47:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:19.698 23:47:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:19.698 23:47:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:19.698 23:47:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:19.698 23:47:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:19.698 23:47:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:19.698 23:47:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:19.698 23:47:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:19.698 23:47:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:19.698 23:47:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:19.698 23:47:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bc9a5b99-1c11-456e-aaab-d1f1a68fbb44 00:03:19.698 23:47:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=bc9a5b99-1c11-456e-aaab-d1f1a68fbb44 00:03:19.698 23:47:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:19.698 23:47:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:19.698 23:47:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:19.698 23:47:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:19.698 23:47:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:19.698 23:47:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:19.698 23:47:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:19.698 23:47:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:19.698 23:47:15 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:19.698 23:47:15 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:19.698 23:47:15 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:19.698 23:47:15 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:19.698 23:47:15 -- 
paths/export.sh@6 -- # export PATH 00:03:19.698 23:47:15 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:19.698 23:47:15 -- nvmf/common.sh@47 -- # : 0 00:03:19.698 23:47:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:19.698 23:47:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:19.698 23:47:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:19.698 23:47:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:19.698 23:47:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:19.698 23:47:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:19.698 23:47:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:19.698 23:47:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:19.698 23:47:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:19.698 23:47:15 -- spdk/autotest.sh@32 -- # uname -s 00:03:19.698 23:47:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:19.698 23:47:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:19.698 23:47:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.698 23:47:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:19.698 23:47:15 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.698 23:47:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:19.698 23:47:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:19.698 23:47:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:19.698 23:47:15 -- spdk/autotest.sh@48 -- # udevadm_pid=59095 00:03:19.698 23:47:15 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:19.698 23:47:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:19.698 23:47:15 -- pm/common@17 -- # local monitor 00:03:19.698 23:47:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.698 23:47:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.698 23:47:15 -- pm/common@25 -- # sleep 1 00:03:19.698 23:47:15 -- pm/common@21 -- # date +%s 00:03:19.698 23:47:15 -- pm/common@21 -- # date +%s 00:03:19.698 23:47:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721864835 00:03:19.698 23:47:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721864835 00:03:19.957 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721864835_collect-vmstat.pm.log 00:03:19.957 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721864835_collect-cpu-load.pm.log 00:03:20.893 23:47:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:20.893 23:47:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:20.893 23:47:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:20.893 23:47:16 -- common/autotest_common.sh@10 -- # set +x 00:03:20.893 23:47:16 -- spdk/autotest.sh@59 -- # create_test_list 00:03:20.893 23:47:16 -- 
common/autotest_common.sh@748 -- # xtrace_disable 00:03:20.893 23:47:16 -- common/autotest_common.sh@10 -- # set +x 00:03:20.893 23:47:16 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:20.893 23:47:16 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:20.893 23:47:16 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:20.893 23:47:16 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:20.893 23:47:16 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:20.893 23:47:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:20.893 23:47:16 -- common/autotest_common.sh@1455 -- # uname 00:03:20.893 23:47:16 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:20.893 23:47:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:20.893 23:47:16 -- common/autotest_common.sh@1475 -- # uname 00:03:20.893 23:47:16 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:20.893 23:47:16 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:20.893 23:47:16 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:20.893 23:47:16 -- spdk/autotest.sh@72 -- # hash lcov 00:03:20.893 23:47:16 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:20.893 23:47:16 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 ' 00:03:20.893 (autotest.sh@80 and autotest.sh@81 then repeat this identical option list verbatim for the LCOV_OPTS assignment and again, with --no-external appended, for the export and assignment of the LCOV='lcov ...' wrapper) 00:03:20.893 23:47:16 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:20.893 lcov: LCOV version 1.15 00:03:20.893 23:47:16 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
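The autotest.sh@83/@85 invocations above open the coverage workflow: lcov first reports its version, then takes a zero-count 'Baseline' capture (-c -i) of the freshly built, gcov-instrumented tree before any test has executed, so that files never touched by a test still appear in the final report. A minimal stand-alone version of that flow, assuming the same lcov 1.x command-line interface seen in the trace (the directory and file names here are illustrative):

    # Initial zero-count baseline straight after the instrumented build (-i = initial).
    lcov -q -c -i -t Baseline -d /path/to/build -o cov_base.info
    # ... run the test suite here so the .gcda counter files get populated ...
    # Capture the real execution counts after the tests.
    lcov -q -c -t Tests -d /path/to/build -o cov_test.info
    # Merge both captures; files with zero executed lines stay in the report.
    lcov -a cov_base.info -a cov_test.info -o cov_total.info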
00:03:27.456 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.456 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:14.148 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:14.148 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:14.148 (the same 'no functions found' / 'GCOV did not produce any data' warning pair repeats for every remaining header stub under test/cpp_headers: nvmf_spec, reduce, util, crc32, tree, env_dpdk, rpc, scsi_spec, vfio_user_pci, nvme_ocssd, ftl, hexlify, base64, ioat, lvol, opal_spec, mmio, accel_module, idxd_spec, init, json, nbd, bdev_zone, pci_ids, bdev_module, nvme_zns, xor, dif, nvme_ocssd_spec, bit_pool, uuid, config, thread, scsi, nvmf_fc_spec, barrier, queue, keyring, nvmf_transport, blob_bdev, accel, dma, nvmf, conf, fd_group, string, log, trace_parser, jsonrpc, fd, bit_array, trace, gpt_spec, endian, assert, sock, nvmf_cmd, ioat_spec, version, crc16, iscsi_spec, scheduler, idxd, nvme_intel, blob, ublk, nvme, likely, blobfs_bdev, env, nvme_spec, net, keyring_module, event, blobfs, pipe, vhost, vfio_user_spec, memory, histogram_data, vmd, cpuset, notify, opal, bdev, file, crc64 and zipf; these are header-only compilation units with no executable functions, so the warnings are expected) 00:04:24.135 23:48:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 23:48:18 -- common/autotest_common.sh@724 -- # xtrace_disable 23:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:24.135 23:48:18 -- spdk/autotest.sh@91 -- # rm -f 23:48:18 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:24.135 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:24.135 23:48:18 -- spdk/autotest.sh@96 -- # get_zoned_devs 23:48:18 -- common/autotest_common.sh@1669 -- # zoned_devs=() 23:48:18 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 23:48:18 -- common/autotest_common.sh@1670 -- # local nvme bdf 23:48:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 23:48:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 23:48:18 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 23:48:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 23:48:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 23:48:18 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 23:48:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 23:48:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 23:48:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 23:48:18 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 23:48:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:24.135 No valid GPT data, bailing 23:48:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.135 23:48:18 -- scripts/common.sh@391 -- # pt= 00:04:24.135 23:48:18 -- scripts/common.sh@392 -- # return 1 00:04:24.135 23:48:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:24.135 1+0 records in 00:04:24.135 1+0 records out 00:04:24.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00561525 s, 187 MB/s
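Before wiping the NVMe namespace, the pre-cleanup above runs block_in_use: scripts/spdk-gpt.py probes /dev/nvme0n1 for a GPT ('No valid GPT data, bailing') and blkid is asked for any partition-table type (pt= stays empty), and only because both probes come back empty does dd go on to zero the first MiB. A rough stand-alone equivalent of that guard (the device name, blkid call and dd command are taken from the trace; the surrounding shape is a simplified assumption, not the exact scripts/common.sh code):

    dev=/dev/nvme0n1
    # Ask blkid for a partition-table type; empty output means none was recognised.
    pt=$(blkid -s PTTYPE -o value "$dev")
    if [[ -z "$pt" ]]; then
        # Nothing recognisable on the device, so clearing the first MiB is treated as safe.
        dd if=/dev/zero of="$dev" bs=1M count=1
    else
        echo "$dev carries a $pt partition table; leaving it alone" >&2
    fi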
00:04:24.135 23:48:18 -- spdk/autotest.sh@118 -- # sync 00:04:24.135 23:48:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:24.135 23:48:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:24.135 23:48:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:24.393 23:48:20 -- spdk/autotest.sh@124 -- # uname -s 00:04:24.393 23:48:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:24.393 23:48:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:24.393 23:48:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.393 23:48:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.393 23:48:20 -- common/autotest_common.sh@10 -- # set +x 00:04:24.394 ************************************ 00:04:24.394 START TEST setup.sh 00:04:24.394 ************************************ 00:04:24.651 23:48:20 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:24.651 * Looking for test storage... 00:04:24.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:24.651 23:48:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:24.651 23:48:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:24.651 23:48:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:24.651 23:48:20 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.651 23:48:20 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.651 23:48:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.651 ************************************ 00:04:24.651 START TEST acl 00:04:24.651 ************************************ 00:04:24.651 23:48:20 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:24.651 * Looking for test storage...
00:04:24.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:24.651 23:48:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:24.651 23:48:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:24.651 23:48:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:24.651 23:48:20 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:24.651 23:48:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:24.651 23:48:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:24.651 23:48:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:24.651 23:48:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.651 23:48:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:24.651 23:48:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:24.651 23:48:20 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:24.651 23:48:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:24.651 23:48:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:24.651 23:48:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:24.651 23:48:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.651 23:48:20 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.251 23:48:20 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:25.251 23:48:20 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:25.251 23:48:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.251 23:48:20 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:25.251 23:48:20 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.251 23:48:20 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.510 Hugepages 00:04:25.510 node hugesize free / total 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.510 00:04:25.510 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:25.510 23:48:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.768 23:48:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:25.768 23:48:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:25.768 23:48:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:25.768 23:48:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:25.768 
23:48:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:25.768 23:48:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.768 23:48:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:25.768 23:48:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:25.768 23:48:21 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.768 23:48:21 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.768 23:48:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:25.769 ************************************ 00:04:25.769 START TEST denied 00:04:25.769 ************************************ 00:04:25.769 23:48:21 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:25.769 23:48:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:25.769 23:48:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:25.769 23:48:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:25.769 23:48:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.769 23:48:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.704 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:26.704 23:48:22 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:26.704 23:48:22 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:26.704 23:48:22 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:26.704 23:48:22 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:26.704 23:48:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:26.704 23:48:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:26.704 23:48:22 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:26.704 23:48:22 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:26.704 23:48:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.704 23:48:22 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.271 00:04:27.271 real 0m1.374s 00:04:27.271 user 0m0.357s 00:04:27.271 sys 0m1.080s 00:04:27.271 23:48:22 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.271 23:48:22 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:27.271 ************************************ 00:04:27.271 END TEST denied 00:04:27.271 ************************************ 00:04:27.271 23:48:22 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:27.271 23:48:22 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.271 23:48:22 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.271 23:48:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:27.271 ************************************ 00:04:27.271 START TEST allowed 00:04:27.271 ************************************ 00:04:27.271 23:48:22 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:27.271 23:48:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:27.271 23:48:22 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*'
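The denied test above and the allowed test below exercise setup.sh's PCI filtering: with PCI_BLOCKED=' 0000:00:10.0' the script refuses to touch the controller ('Skipping denied controller at 0000:00:10.0'), while PCI_ALLOWED=0000:00:10.0 restricts configuration to exactly that BDF, after which the device is rebound from the kernel nvme driver to uio_pci_generic. In spirit, the per-device decision looks something like this sketch (the variable names come from the trace; the function body is a simplified assumption about setup.sh's behaviour, not its actual code):

    # Decide whether setup.sh should configure a given PCI device (BDF address).
    should_configure() {
        local bdf=$1
        # An explicit block list always wins.
        [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1
        # When an allow list is set, only the listed BDFs pass.
        [[ -n "$PCI_ALLOWED" && " $PCI_ALLOWED " != *" $bdf "* ]] && return 1
        return 0
    }

    PCI_BLOCKED=' 0000:00:10.0'
    should_configure 0000:00:10.0 || echo 'Skipping denied controller at 0000:00:10.0'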
00:04:27.271 23:48:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:27.271 23:48:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.271 23:48:22 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.206 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.206 23:48:23 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:28.206 23:48:23 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:28.206 23:48:23 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:28.206 23:48:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:28.206 23:48:23 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.775 00:04:28.775 real 0m1.490s 00:04:28.775 user 0m0.333s 00:04:28.775 sys 0m1.205s 00:04:28.775 23:48:24 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.775 23:48:24 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:28.775 ************************************ 00:04:28.775 END TEST allowed 00:04:28.775 ************************************ 00:04:28.775 ************************************ 00:04:28.775 END TEST acl 00:04:28.775 ************************************ 00:04:28.775 00:04:28.775 real 0m4.078s 00:04:28.775 user 0m1.216s 00:04:28.775 sys 0m3.036s 00:04:28.775 23:48:24 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.775 23:48:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:28.775 23:48:24 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:28.775 23:48:24 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.775 23:48:24 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.775 23:48:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:28.775 ************************************ 00:04:28.775 START TEST hugepages 00:04:28.775 ************************************ 00:04:28.775 23:48:24 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:28.775 * Looking for test storage...
00:04:28.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 2562652 kB' 'MemAvailable: 7359784 kB' 'Buffers: 35320 kB' 'Cached: 4901304 kB' 'SwapCached: 0 kB' 'Active: 398880 kB' 'Inactive: 4637724 kB' 'Active(anon): 111328 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637724 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 128968 kB' 'Mapped: 58084 kB' 'Shmem: 2600 kB' 'KReclaimable: 193408 kB' 'Slab: 274348 kB' 'SReclaimable: 193408 kB' 'SUnreclaim: 80940 kB' 'KernelStack: 5028 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4026008 kB' 'Committed_AS: 372908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 23:48:24 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:28.776 23:48:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... identical @32 compare, @32 continue, @31 IFS=': ', @31 read -r var val _ iterations for the remaining non-matching /proc/meminfo keys (KReclaimable through HugePages_Surp) elided ...]
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
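The compare-and-continue loop traced above is setup/common.sh's get_meminfo helper (named explicitly later in the trace): it splits each /proc/meminfo line on ': ' and, once the requested key, here Hugepagesize, matches, echoes its value (2048, in kB). A minimal stand-alone sketch of the same lookup; the name lookup_meminfo and the streaming while-read loop are illustrative assumptions, since the script itself slurps the file with mapfile before scanning:

    #!/usr/bin/env bash
    # Hedged sketch of the per-key /proc/meminfo lookup traced above.
    # lookup_meminfo is an illustrative name, not the script's own.
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # var is the key (e.g. "Hugepagesize"), val the number, _ the unit
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1   # key not present on this kernel
    }
    # lookup_meminfo Hugepagesize   -> 2048 (kB), matching the trace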
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:28.777 23:48:24 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:28.777 23:48:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:28.777 23:48:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:28.777 23:48:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:28.777 ************************************
00:04:28.777 START TEST default_setup
************************************
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:28.777 23:48:24 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:29.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:29.344 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
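The get_test_nr_hugepages trace above turns the 2097152 kB request into nr_hugepages=1024 by dividing by the 2048 kB default hugepage size, records 1024 pages for node 0 in nodes_test, and then applies the configuration through scripts/setup.sh. A hedged sketch of just that arithmetic against the knob the trace itself names (global_huge_nr=/proc/sys/vm/nr_hugepages); request_hugepages is an illustrative name, and the direct sysctl write merely stands in for what setup.sh does:

    #!/usr/bin/env bash
    # Hedged sketch: the size-to-page-count arithmetic from the trace.
    request_hugepages() {
        local size_kb=$1 hp_kb
        hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
        local nr=$((size_kb / hp_kb))          # 2097152 / 2048 = 1024
        echo "$nr" >/proc/sys/vm/nr_hugepages  # needs root, like setup.sh
    }
    # request_hugepages 2097152   # would ask the kernel for 1024 pages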
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:29.606 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4636336 kB' 'MemAvailable: 9433456 kB' 'Buffers: 35320 kB' 'Cached: 4901304 kB' 'SwapCached: 0 kB' 'Active: 415508 kB' 'Inactive: 4637728 kB' 'Active(anon): 127956 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637728 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 145236 kB' 'Mapped: 58304 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274356 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80964 kB' 'KernelStack: 5048 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
[... identical @32 compare-and-continue iterations for the non-matching keys (MemTotal through HardwareCorrupted) elided ...]
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:29.607 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4636336 kB' 'MemAvailable: 9433460 kB' 'Buffers: 35320 kB' 'Cached: 4901304 kB' 'SwapCached: 0 kB' 'Active: 415252 kB' 'Inactive: 4637732 kB' 'Active(anon): 127700 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637732 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 145296 kB' 'Mapped: 58292 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274352 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80960 kB' 'KernelStack: 5032 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
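At this point verify_nr_hugepages has already paid one full key-by-key scan for AnonHugePages and is about to rescan the same snapshot for HugePages_Surp, and then once more for HugePages_Rsvd, which is what makes these trace sections so repetitive. A hedged sketch of collapsing the three lookups into a single pass over /proc/meminfo with an associative array; the variable names mirror the trace's surp/resv/anon locals but the code is illustrative, not the script's own:

    #!/usr/bin/env bash
    # Hedged sketch: one /proc/meminfo pass instead of one scan per key.
    declare -A meminfo
    while IFS=': ' read -r key val _; do
        meminfo[$key]=$val
    done </proc/meminfo
    anon=${meminfo[AnonHugePages]:-0}    # 0 in the trace
    surp=${meminfo[HugePages_Surp]:-0}   # 0 in the trace
    resv=${meminfo[HugePages_Rsvd]:-0}   # 0 in the trace
    echo "anon=$anon surp=$surp resv=$resv"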
[... identical @32 compare-and-continue iterations for the non-matching keys (MemTotal through HugePages_Rsvd) elided ...]
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:29.608 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4635580 kB' 'MemAvailable: 9432704 kB' 'Buffers: 35320 kB' 'Cached: 4901304 kB' 'SwapCached: 0 kB' 'Active: 415272 kB' 'Inactive: 4637732 kB' 'Active(anon): 127720 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637732 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 145300 kB' 'Mapped: 58292 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274344 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80952 kB' 'KernelStack: 5032 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
[... identical @32 compare-and-continue iterations for the non-matching keys (MemTotal through NFS_Unstable) elided ...]
00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:29.609 23:48:25
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@33 -- # return 0 00:04:29.609 nr_hugepages=1024 00:04:29.609 resv_hugepages=0 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.609 surplus_hugepages=0 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.609 anon_hugepages=0 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4635580 kB' 'MemAvailable: 9432704 kB' 'Buffers: 35320 kB' 'Cached: 4901304 kB' 'SwapCached: 0 kB' 'Active: 415344 kB' 'Inactive: 4637732 kB' 'Active(anon): 127792 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637732 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 145268 kB' 'Mapped: 58292 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274344 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80952 kB' 'KernelStack: 5016 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.870 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 
23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 
23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.871 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
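Every get_meminfo call traced in this test, the HugePages_Rsvd and HugePages_Total lookups above and the per-node HugePages_Surp lookup that follows, is the same parse loop: snapshot a meminfo file with mapfile, strip the "Node N " prefix that per-node files carry, then read "Field: value" pairs until the requested field matches. A minimal standalone sketch of that loop, a reconstruction for illustration rather than the real setup/common.sh:

#!/usr/bin/env bash
# Reconstruction of the meminfo lookup traced in this log (illustrative).
# Prints the value of one field; returns non-zero if the field is absent.
shopt -s extglob                      # for the "Node N " prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}          # field name, optional NUMA node id
    local mem_f=/proc/meminfo mem line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"         # take one snapshot, then parse in memory
    mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total           # global lookup, e.g. 1024
get_meminfo HugePages_Surp 0          # node 0's surplus count, e.g. 0

Parsing an in-memory snapshot instead of re-reading the file per field keeps each lookup consistent even while the kernel updates the counters.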
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4638636 kB' 'MemUsed: 7607688 kB' 'SwapCached: 0 kB' 'Active: 415332 kB' 'Inactive: 4637732 kB' 'Active(anon): 127780 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637732 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 4936624 kB' 'Mapped: 58292 kB' 'AnonPages: 145300 kB' 'Shmem: 2592 kB' 'KernelStack: 5032 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193392 kB' 'Slab: 274340 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:29.872 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [field-by-field scan of node0's snapshot above until HugePages_Surp matches]
00:04:29.873 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:29.873 23:48:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:29.873 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:29.873 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:29.873 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:29.873 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:29.873 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:29.873 node0=1024 expecting 1024
00:04:29.873 23:48:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:29.873
00:04:29.873 real 0m0.918s
00:04:29.873 user 0m0.298s
00:04:29.873 sys 0m0.599s
00:04:29.873 23:48:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:29.873 23:48:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:29.873 ************************************
00:04:29.873 END TEST default_setup
00:04:29.873 ************************************
00:04:29.873 23:48:25 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
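The default_setup test that just passed reduces to one bookkeeping rule: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages, and the per-node meminfo files must account for every page (hence "node0=1024 expecting 1024" above). A standalone sketch of that check, a reconstruction for illustration rather than the real setup/hugepages.sh, reusing the get_meminfo sketch shown earlier:

# Reconstruction of the verification traced above (illustrative).
verify_hugepages() {
    local expected=$1 surp resv node n per_node total=0
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    # Global bookkeeping: kernel total == requested + surplus + reserved.
    (( $(get_meminfo HugePages_Total) == expected + surp + resv )) || return 1
    # Per-node bookkeeping: every page must be attributed to some NUMA node.
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        per_node=$(get_meminfo HugePages_Total "$n")
        echo "node$n=$per_node"
        (( total += per_node ))
    done
    (( total == expected + surp ))
}

verify_hugepages 1024   # on this box: prints "node0=1024" and succeeds

With the values in the snapshots above (Total=1024, Surp=0, Rsvd=0, a single node holding all 1024 pages) both checks hold, which is exactly what the trace reports.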
00:04:29.873 23:48:25 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:29.873 23:48:25 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:29.873 23:48:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:29.873 ************************************
00:04:29.873 START TEST per_node_1G_alloc
00:04:29.873 ************************************
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:29.873 23:48:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:30.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:30.133 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
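The get_test_nr_hugepages trace above turns a size budget into a page count: 1048576 kB (1 GiB) divided by the default 2048 kB huge page size gives the nr_hugepages=512 seen in the log, pinned to node 0 via HUGENODE=0. A sketch of that conversion, a reconstruction for illustration; the function name and globals mirror the trace, but sourcing default_hugepages from the Hugepagesize field is an assumption:

# Reconstruction of get_test_nr_hugepages as traced above (illustrative).
# size is in kB; any following arguments are NUMA node ids.
declare -g nr_hugepages
declare -ga nodes_test=()

get_test_nr_hugepages() {
    local size=$1; shift
    local node_ids=("$@") id default_hugepages
    default_hugepages=$(get_meminfo Hugepagesize)   # e.g. 2048 (kB); see sketch earlier
    (( size >= default_hugepages )) || return 1     # must fit at least one page
    nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512
    for id in "${node_ids[@]}"; do
        nodes_test[id]=$nr_hugepages                # expect all pages on each listed node
    done
}

get_test_nr_hugepages 1048576 0
echo "nr_hugepages=$nr_hugepages"                   # nr_hugepages=512, matching the trace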
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5682844 kB' 'MemAvailable: 10479972 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415468 kB' 'Inactive: 4637736 kB' 'Active(anon): 127916 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 145248 kB' 'Mapped: 58128 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274292 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80900 kB' 'KernelStack: 5040 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:30.398 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace truncated: each /proc/meminfo key from MemTotal through HardwareCorrupted read and skipped via 'continue']
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
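The get_meminfo call traced above (and twice more below for HugePages_Surp and HugePages_Rsvd) looks one key up in /proc/meminfo, or in a per-node meminfo file when a node is given; per-node files prefix every line with "Node <n> ", which the @29 step strips with an extglob pattern. A condensed sketch reconstructed from the xtrace (the real setup/common.sh may differ in detail):

# Sketch of the get_meminfo helper reconstructed from the trace above.
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix, if any
    while IFS=': ' read -r var val _; do  # _ swallows the trailing "kB"
        [[ $var == "$get" ]] || continue
        echo "$val"                       # e.g. "0" for AnonHugePages here
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Called as get_meminfo HugePages_Free 0, for instance, it would read node0's meminfo instead of the global one.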
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.399 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5682844 kB' 'MemAvailable: 10479972 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415272 kB' 'Inactive: 4637736 kB' 'Active(anon): 127720 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 145316 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274288 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80896 kB' 'KernelStack: 5008 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:30.400 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace truncated: each /proc/meminfo key from MemTotal through HugePages_Rsvd read and skipped via 'continue']
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
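verify_nr_hugepages samples AnonHugePages, HugePages_Surp, and HugePages_Rsvd (all 0 here, so nothing skews the count), then compares the hugepage totals per NUMA node against what the test requested; the earlier default_setup run printed "node0=1024 expecting 1024" from exactly that comparison. A simplified sketch of the idea, reusing the get_meminfo sketch above (the function name and the plain pass/fail loop are illustrative; the real setup/hugepages.sh collects per-node counts in the sorted_t/sorted_s arrays before comparing):

# Illustrative per-node hugepage verification built on the get_meminfo sketch.
verify_hugepages() {
    local expected=$1 node got anon surp resv
    anon=$(get_meminfo AnonHugePages)    # transparent hugepages, expected 0
    surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)   # reserved but not yet faulted in
    echo "anon=$anon surp=$surp resv=$resv"
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        got=$(get_meminfo HugePages_Total "$node")
        echo "node$node=$got expecting $expected"
        [[ $got == "$expected" ]] || return 1
    done
}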
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.401 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5682604 kB' 'MemAvailable: 10479732 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415308 kB' 'Inactive: 4637736 kB' 'Active(anon): 127756 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 145348 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274304 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80912 kB' 'KernelStack: 5008 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:30.402 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace truncated: each /proc/meminfo key from MemTotal through Unaccepted compared against HugePages_Rsvd and skipped via 'continue']
00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:30.403 nr_hugepages=512 00:04:30.403 resv_hugepages=0 00:04:30.403 surplus_hugepages=0 00:04:30.403 anon_hugepages=0 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.403 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.404 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5682604 kB' 'MemAvailable: 10479732 
kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415172 kB' 'Inactive: 4637736 kB' 'Active(anon): 127620 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 145212 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274304 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80912 kB' 'KernelStack: 5008 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:30.404
23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: IFS=': ' read/continue scan of the /proc/meminfo keys, MemTotal through Unaccepted; none matched HugePages_Total] 00:04:30.405
23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.405
23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:30.405 23:48:26
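
The scans condensed above all have the same shape: get_meminfo splits each meminfo line on ': ', skips every key that does not match, and echoes the value of the first match. A minimal standalone sketch of that pattern, reading /proc/meminfo directly rather than through the mapfile buffer the trace uses (get_meminfo_sketch is an illustrative name, not SPDK's actual API):

#!/usr/bin/env bash
# Minimal sketch of the IFS=': ' read/continue scan seen in the trace:
# print the value column for one /proc/meminfo key.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # every non-matching key just falls through
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# On the VM above this prints 512, the value the hugepages.sh@110 check
# compares against nr_hugepages + surplus + reserved.
get_meminfo_sketch HugePages_Total
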
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.405 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.406 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5682604 kB' 'MemUsed: 6563720 kB' 'SwapCached: 0 kB' 'Active: 415268 kB' 'Inactive: 4637736 kB' 'Active(anon): 127716 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 4936628 kB' 'Mapped: 58124 kB' 'AnonPages: 145332 kB' 'Shmem: 2592 kB' 'KernelStack: 5024 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193392 kB' 'Slab: 274304 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:30.406 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.406 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.406 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:30.406
23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: IFS=': ' read/continue scan of the node0 meminfo keys, MemFree through Unaccepted; none matched HugePages_Surp] 00:04:30.407 23:48:26
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:30.407 node0=512 expecting 512 00:04:30.407 ************************************ 00:04:30.407 END TEST per_node_1G_alloc 00:04:30.407 ************************************ 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:30.407 00:04:30.407 real 0m0.666s 00:04:30.407 user 0m0.266s 00:04:30.407 sys 0m0.403s 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.407 23:48:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:30.666 23:48:26 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:30.666 23:48:26 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.666 23:48:26 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.666 23:48:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:30.666 ************************************ 00:04:30.666 START TEST even_2G_alloc 00:04:30.666 ************************************ 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:30.666 23:48:26 
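
The per_node_1G_alloc pass that just finished resolved HugePages_Surp against a per-node source rather than /proc/meminfo: with node=0 the trace sets mem_f=/sys/devices/system/node/node0/meminfo and strips the "Node 0" prefix from each line (common.sh@29). A sketch of that source selection under the same one-node layout; the function name is illustrative, and sed stands in for the extglob prefix-strip the trace shows:

#!/usr/bin/env bash
# Sketch: query a meminfo key either system-wide or for one NUMA node,
# mirroring the mem_f selection and "Node N" prefix stripping in the trace.
get_node_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")  # no-op for /proc/meminfo
    return 1
}

get_node_meminfo_sketch HugePages_Surp 0   # prints 0 for node0 in the run above
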
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:30.666 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.667 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:30.667 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:30.667 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:30.667 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.667 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:30.667 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:30.667 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:30.667 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.667 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:30.925 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:31.187 23:48:26 
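
The get_test_nr_hugepages call that opened even_2G_alloc turns a size into a page count: the 2097152 passed in, read as kB (2 GiB), divided by the 2048 kB default hugepage size reported in the meminfo dumps, gives exactly the nr_hugepages=1024 set at hugepages.sh@57. The arithmetic as a sketch; variable names are illustrative:

#!/usr/bin/env bash
# Sketch of the size -> page-count arithmetic behind get_test_nr_hugepages.
size_kb=2097152                                                      # requested: 2 GiB in kB
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on the VM above
(( size_kb >= hugepagesize_kb )) || exit 1                           # mirrors hugepages.sh@55
echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"                 # 2097152 / 2048 = 1024
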
00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:31.187 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.188 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.188 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.188 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.188 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.188 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.188 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.188 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.188 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4634188 kB' 'MemAvailable: 9431316 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415260 kB' 'Inactive: 4637736 kB' 'Active(anon): 127708 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 145260 kB' 'Mapped: 58132 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274332 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80940 kB' 'KernelStack: 5040 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
[repetitive xtrace condensed: the setup/common.sh@31-32 scan loop read each /proc/meminfo key in turn and ran 'continue' for every field from MemTotal through HardwareCorrupted before matching AnonHugePages]
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
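Note: all four get_meminfo calls in this test are the same scan over /proc/meminfo; only the requested key differs. A minimal self-contained sketch of the helper's behavior as traced here (shape inferred from the xtrace, not the verbatim setup/common.sh):

#!/usr/bin/env bash
shopt -s extglob                      # for the +([0-9]) pattern used below

# get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from the
# per-NUMA-node sysfs copy when NODE is given (those lines carry a "Node <n> "
# prefix, which the ${mem[@]#...} expansion strips). Every non-matching key is
# one of the 'continue' iterations that fill the trace above.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    echo 0                            # requested key not present
}

get_meminfo AnonHugePages             # -> 0 on this box, hence anon=0 above
get_meminfo HugePages_Total           # -> 1024, the value verified further down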
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4634188 kB' 'MemAvailable: 9431316 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 414984 kB' 'Inactive: 4637736 kB' 'Active(anon): 127432 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 145208 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274324 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80932 kB' 'KernelStack: 5008 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:31.189 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[repetitive xtrace condensed: the same setup/common.sh@31-32 key scan ran 'continue' for every field from MemTotal through HugePages_Rsvd before matching HugePages_Surp]
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
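Note: the arithmetic the verifier is heading toward is already visible in the snapshots above: Hugepagesize is 2048 kB and HugePages_Total is 1024, and 1024 x 2048 kB = 2097152 kB = 2 GiB, which is exactly the reported 'Hugetlb: 2097152 kB' line and the target implied by the test name even_2G_alloc. With anon=0 and surp=0 read back so far, and resv about to be read the same way, the checks at setup/hugepages.sh@107-@109 below reduce to (( 1024 == 1024 + 0 + 0 )) and (( 1024 == 1024 )).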
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.191 23:48:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.191 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.191 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.191 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4634188 kB' 'MemAvailable: 9431316 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415028 kB' 'Inactive: 4637736 kB' 'Active(anon): 127476 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 145316 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274320 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80928 kB' 'KernelStack: 5024 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
[repetitive xtrace condensed: the same setup/common.sh@31-32 key scan ran 'continue' for every field from MemTotal through HugePages_Free before matching HugePages_Rsvd]
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:31.193 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4634188 kB' 'MemAvailable: 9431316 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415252 kB' 'Inactive: 4637736 kB' 'Active(anon): 127700 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 145232 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274320 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80928 kB' 'KernelStack: 5008 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
'Active: 415252 kB' 'Inactive: 4637736 kB' 'Active(anon): 127700 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 145232 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274320 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80928 kB' 'KernelStack: 5008 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.193 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.194 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.195 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.454 23:48:27 
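What the trace above boils down to: setup/common.sh's get_meminfo resolves a single key out of /proc/meminfo (or a node's own meminfo file) by reading 'key: value' pairs until the requested key matches. A minimal standalone sketch of that idiom, reconstructed from the xtrace; anything the trace does not show is an assumption, not the script's actual body.

#!/usr/bin/env bash
# Sketch of the lookup traced at setup/common.sh@17-33 (reconstruction).
shopt -s extglob

get_meminfo() {
	local get=$1 node=$2 var val _
	local mem_f=/proc/meminfo mem line
	# With a node argument, prefer that node's meminfo; with none, the
	# probed path /sys/devices/system/node/node/meminfo never exists, so
	# the lookup falls back to /proc/meminfo (exactly what @23 traces).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node N "; strip it (@29).
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue  # the long compare/continue runs above
		echo "$val"                       # value in kB, or a bare page count
		return 0
	done
	return 1
}

get_meminfo HugePages_Total   # system-wide pool size, e.g. 1024
get_meminfo HugePages_Surp 0  # surplus pages on node 0, e.g. 0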
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.454 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.455 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4634500 kB' 'MemUsed: 7611824 kB' 'SwapCached: 0 kB' 'Active: 415424 kB' 'Inactive: 4637732 kB' 'Active(anon): 127872 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637732 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 4936624 kB' 'Mapped: 58084 kB' 'AnonPages: 145476 kB' 'Shmem: 2592 kB' 'KernelStack: 5024 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193392 kB' 'Slab: 274316 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:31.455 .. 00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [read/compare/continue repeated for every key from MemTotal through HugePages_Free; none matched HugePages_Surp]
00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
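hugepages.sh@107-117 is the accounting that makes this test pass: the pool read back from the kernel must equal the requested pages plus surplus plus reserved, and each node's share is then compared against what the test expected to land there. A compact sketch of that check, reusing the get_meminfo sketch above (it needs the same shopt -s extglob); the composition into one helper is an assumption, the identities are the ones traced.

# Sketch of the checks traced at setup/hugepages.sh@107-117 and @126-130.
verify_hugepages() {
	local expected=$1 node surp resv total per_node
	surp=$(get_meminfo HugePages_Surp)
	resv=$(get_meminfo HugePages_Rsvd)
	total=$(get_meminfo HugePages_Total)
	# @107/@110: kernel total == requested + surplus + reserved
	(( total == expected + surp + resv )) || return 1
	# @115-@128: every node must hold its expected share; on this
	# single-node VM the whole pool has to sit on node0.
	for node in /sys/devices/system/node/node+([0-9]); do
		node=${node##*node}
		per_node=$(get_meminfo HugePages_Total "$node")
		echo "node$node=$per_node expecting $expected"
		[[ $per_node == "$expected" ]] || return 1
	done
}

verify_hugepages 1024   # prints "node0=1024 expecting 1024", as in the log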
00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:31.456 node0=1024 expecting 1024
00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:31.456
00:04:31.456 real 0m0.785s
00:04:31.456 user 0m0.248s
00:04:31.456 sys 0m0.560s
00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:31.456 ************************************
00:04:31.456 23:48:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:31.456 END TEST even_2G_alloc
00:04:31.456 ************************************
00:04:31.456 23:48:27 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:31.456 23:48:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:31.456 23:48:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:31.456 23:48:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:31.456 ************************************
00:04:31.456 START TEST odd_alloc
00:04:31.456 ************************************
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
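The page count for odd_alloc falls out of simple arithmetic: HUGEMEM=2049 is megabytes, get_test_nr_hugepages receives it as 2098176 kB, and dividing by the 2048 kB default hugepage size gives 1024.5, which the harness settles at 1025. A short worked sketch; rounding up is an assumption consistent with the traced nr_hugepages=1025, since the exact expression is not visible in this log.

# Reproducing the odd_alloc sizing traced at setup/hugepages.sh@49-57.
hugemem_mb=2049
size_kb=$((hugemem_mb * 1024))         # 2098176, matches @49
default_hugepage_kb=2048
nr=$(((size_kb + default_hugepage_kb - 1) / default_hugepage_kb))
echo "$nr"                             # 1025, matches @57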
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:31.456 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:31.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:31.715 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.974 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4636484 kB' 'MemAvailable: 9433612 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415568 kB' 'Inactive: 4637736 kB' 'Active(anon): 128016 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 145268 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274368 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80976 kB' 'KernelStack: 5040 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:31.974 .. 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [read/compare/continue repeated for MemTotal through Committed_AS; none matched AnonHugePages; trace continues ...
-r var val _ 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
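Worth a note before the snapshot prints: common.sh@29, traced just above, rewrites the array as mem=("${mem[@]#Node +([0-9]) }"), an extglob prefix-strip that removes the "Node <N> " lead-in carried by per-node meminfo files, so /proc/meminfo and /sys/devices/system/node/node<N>/meminfo parse identically. A standalone illustration (the array literals here are invented for the demo, not taken from this run):

    shopt -s extglob                                  # "+([0-9])" requires extended globbing
    mem=('Node 0 HugePages_Total: 1025' 'MemTotal: 12246324 kB')
    mem=("${mem[@]#Node +([0-9]) }")                  # strip a leading "Node <N> " where present
    printf '%s\n' "${mem[@]}"                         # HugePages_Total: 1025 / MemTotal: 12246324 kB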
00:04:31.975 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4636736 kB' 'MemAvailable: 9433864 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415328 kB' 'Inactive: 4637736 kB' 'Active(anon): 127776 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 145316 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274368 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80976 kB' 'KernelStack: 5024 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: 00:04:31.975–00:04:32.239 — setup/common.sh@31-32 tests each field from MemTotal through HugePages_Rsvd against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and continues past every one]
00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
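The lookup this trace keeps replaying is a plain field scan: get_meminfo splits each meminfo line on ': ', continues past every key that is not the one requested, and echoes the value of the first match (the backslash-heavy patterns like \H\u\g\e\P\a\g\e\s\_\S\u\r\p are just how xtrace renders the quoted right-hand side of the comparison). A minimal re-creation of that scan, reconstructed from this trace rather than copied from setup/common.sh:

    get_meminfo_sketch() {
        # Scan "key: value" lines until the requested key matches, then
        # print the value -- the [[ ... ]] / continue pattern traced above.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Rsvd    # this run's next lookup; prints 0 here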
00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.239 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.240 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4636988 kB' 'MemAvailable: 9434116 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415016 kB' 'Inactive: 4637736 kB' 'Active(anon): 127464 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 145048 kB' 'Mapped: 58164 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274352 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80960 kB' 'KernelStack: 5008 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 389584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
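The snapshot above is self-consistent on the hugepage side: HugePages_Total: 1025 at Hugepagesize: 2048 kB gives 1025 × 2048 kB = 2,099,200 kB, exactly the Hugetlb: 2099200 kB it reports, and HugePages_Free: 1025 alongside HugePages_Rsvd: 0 and HugePages_Surp: 0 means the whole odd-sized pool is allocated but still untouched.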
[trace condensed: 00:04:32.240–00:04:32.241 — setup/common.sh@31-32 tests each field from MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and continues past every one]
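The match just below returns 0 for HugePages_Rsvd, handing hugepages.sh its last operand: the odd_alloc case asked for 1025 pages, and the hugepages.sh@107 and @109 checks that follow accept the allocation only if that request matches the kernel's counters. A sketch of the acceptance logic, reconstructed from those traced lines with this run's values:

    # odd_alloc acceptance, mirroring the (( ... )) checks traced below
    requested=1025       # the odd page count the test configured
    nr_hugepages=1025    # HugePages_Total via get_meminfo
    surp=0               # HugePages_Surp
    resv=0               # HugePages_Rsvd
    anon=0               # AnonHugePages, gathered at hugepages.sh@97 above
    (( requested == nr_hugepages + surp + resv ))    # hugepages.sh@107
    (( requested == nr_hugepages ))                  # hugepages.sh@109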
00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.241 nr_hugepages=1025 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:32.241 resv_hugepages=0 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.241 surplus_hugepages=0 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.241 anon_hugepages=0 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.241 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4637240 kB' 'MemAvailable: 9434368 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415184 kB' 'Inactive: 4637736 kB' 'Active(anon): 127632 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 145216 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274348 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80956 kB' 'KernelStack: 5008 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: 00:04:32.241–00:04:32.243 — setup/common.sh@31-32 tests each field from MemTotal through Committed_AS against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and continues past every one; the scan resumes below]
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc
-- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4637772 kB' 
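The block above is one complete get_meminfo() call from setup/common.sh: the helper reads the chosen meminfo file one "key: value" line at a time with IFS=': ' and echoes the value of the first key that matches the requested one (here HugePages_Total -> 1025), after which get_nodes records what each NUMA node reports. A minimal standalone sketch of that scan pattern, assuming only what the trace shows (the helper name and the sed-based prefix strip are illustrative, not copies of common.sh):

    #!/usr/bin/env bash
    # Sketch: fetch one field from /proc/meminfo, or from a node's meminfo
    # file when a node id is given (the per-node path appears in the
    # HugePages_Surp lookup that follows).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            # "HugePages_Total:    1025" splits into var=HugePages_Total, val=1025
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node lines carry a "Node N " prefix
        return 1
    }

    get_meminfo_sketch HugePages_Total    # -> 1025 on the VM traced here
    get_meminfo_sketch HugePages_Surp 0   # node0 variant, used just below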
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.243 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4637772 kB' 'MemUsed: 7608552 kB' 'SwapCached: 0 kB' 'Active: 415180 kB' 'Inactive: 4637736 kB' 'Active(anon): 127628 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 4936628 kB' 'Mapped: 58124 kB' 'AnonPages: 145232 kB' 'Shmem: 2592 kB' 'KernelStack: 5008 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193392 kB' 'Slab: 274348 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[trace condensed: setup/common.sh@31-32 read each node0 meminfo key from MemTotal through HugePages_Free, compared it against HugePages_Surp, and took the "continue" branch every time]
00:04:32.245 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.245 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.245 23:48:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:32.245 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:32.245 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:32.245 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:32.245 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:32.245 node0=1025 expecting 1025
00:04:32.245 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:32.245 23:48:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:32.245
00:04:32.245 real 0m0.810s
00:04:32.245 user 0m0.241s
00:04:32.245 sys 0m0.611s
00:04:32.245 23:48:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:32.245 23:48:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:32.245 ************************************
00:04:32.245 END TEST odd_alloc
00:04:32.245 ************************************
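What odd_alloc just verified, in plain terms: the HugePages_Total read back from /proc/meminfo (1025, deliberately an odd count, hence the test name) must equal nr_hugepages + surp + resv (hugepages.sh@110), and each node's reported count must match the per-node expectation built up from the reserved and surplus readings, which is what the "node0=1025 expecting 1025" line asserts (@128-130). A sketch of that bookkeeping with the traced values (variable names follow hugepages.sh; the arithmetic is the check itself):

    # odd_alloc pass condition, restated with the values from the trace above
    nr_hugepages=1025   # requested page count
    surp=0              # HugePages_Surp from meminfo
    resv=0              # HugePages_Rsvd from meminfo
    total=1025          # HugePages_Total read back

    (( total == nr_hugepages + surp + resv )) || echo 'FAIL: global count' >&2

    nodes_test=([0]=1025)   # expected per node (after += resv, += surp)
    nodes_sys=([0]=1025)    # what node0's meminfo reports
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
        [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || echo "FAIL: node$node" >&2
    done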
00:04:32.245 23:48:27 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:32.245 23:48:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:32.245 23:48:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:32.245 23:48:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:32.245 ************************************
00:04:32.245 START TEST custom_alloc
00:04:32.245 ************************************
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
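Unlike odd_alloc, custom_alloc starts from a target size: get_test_nr_hugepages received 1048576 kB (1 GiB) and, against the 2048 kB Hugepagesize reported in meminfo, arrived at nr_hugepages=512, which the loop above then pinned to node 0 and packed into HUGENODE. The division itself is not spelled out in the trace, only its inputs and result, so the sketch below is the implied arithmetic rather than a copy of hugepages.sh@57:

    # Implied conversion: allocation size in kB -> count of default-size pages
    size=1048576            # argument to get_test_nr_hugepages (kB)
    default_hugepages=2048  # Hugepagesize from /proc/meminfo (kB)
    (( size >= default_hugepages ))              # guard seen at hugepages.sh@55
    nr_hugepages=$(( size / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"            # -> 512

    # Single-node VM: the whole count lands on node 0 and is exported for setup.sh
    nodes_hp[0]=$nr_hugepages
    HUGENODE="nodes_hp[0]=${nodes_hp[0]}"        # matches the value traced below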
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:32.245 23:48:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:32.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:32.505 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
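The @96 test above is a transparent-hugepage gate: the left-hand string "always [madvise] never" is the kernel's THP mode line, where the bracketed word is the active mode (this is the format of /sys/kernel/mm/transparent_hugepage/enabled, an assumption here since the trace only shows the already-expanded string), and the glob rejects a box whose active mode is [never]. A sketch of the same check:

    # Sketch of the THP gate from hugepages.sh@96; the sysfs path is assumed,
    # the trace only shows the expanded mode string.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        echo "THP available (mode line: ${thp})"
    else
        echo "THP active mode is [never]" >&2
    fi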
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.770 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5690224 kB' 'MemAvailable: 10487352 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415608 kB' 'Inactive: 4637736 kB' 'Active(anon): 128056 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 145344 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274344 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80952 kB' 'KernelStack: 5056 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: setup/common.sh@31-32 read each meminfo key from MemTotal through HardwareCorrupted, compared it against AnonHugePages, and took the "continue" branch every time]
00:04:32.771 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.771 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.771 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:32.771 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
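One bash idiom in these lookups is worth isolating before the next one starts: common.sh@28-29 slurps the whole meminfo file with mapfile and strips the "Node N " prefix from every element in a single extglob expansion, so the same scan loop serves /proc/meminfo and the per-node sysfs files alike. A sketch (shopt -s extglob is required for the +([0-9]) pattern; the SPDK scripts enable it elsewhere, which is assumed here):

    #!/usr/bin/env bash
    shopt -s extglob   # +([0-9]) below is an extglob pattern

    # Idiom from setup/common.sh@28-29, applied to node0's meminfo file
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # "Node 0 MemTotal: ..." -> "MemTotal: ..." for every element at once
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"   # show the first few normalized lines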
23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.771
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.771
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:32.771
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.771
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.771
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.771
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.771
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.771
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.771
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.771
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5690476 kB' 'MemAvailable: 10487604 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415360 kB' 'Inactive: 4637736 kB' 'Active(anon): 127808 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 145352 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274340 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80948 kB' 'KernelStack: 5024 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:32.772
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: each key from MemTotal through HugePages_Rsvd is read and skipped via 'continue']
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.773
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5690476 kB' 'MemAvailable: 10487604 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415076 kB' 'Inactive: 4637736 kB' 'Active(anon): 127524 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 145092 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274340 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80948 kB' 'KernelStack: 5024 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:32.774
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: each key from MemTotal through HugePages_Free is read and skipped via 'continue'] 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.775
nr_hugepages=512 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:32.775
resv_hugepages=0 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.775
surplus_hugepages=0 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.775
anon_hugepages=0 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:32.775
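With anon=0, surp=0 and resv=0 collected, hugepages.sh@107 checks that the 512 pages the test configured are fully accounted for. The snapshot is internally consistent as well: 512 huge pages of 2048 kB each give 512 * 2048 = 1048576 kB, exactly the Hugetlb figure printed above. A small sketch of the same bookkeeping done directly against /proc/meminfo follows; this is a hypothetical helper script, not part of the test suite, and the Hugetlb identity assumes only one huge page size is in use:

    #!/usr/bin/env bash
    # Hypothetical re-check of the huge page accounting seen in this log;
    # setup/hugepages.sh does the equivalent through its get_meminfo helper.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)

    requested=512  # the allocation this test configured
    (( requested == total + surp + resv )) || echo "accounting mismatch"
    # 512 pages * 2048 kB/page = 1048576 kB, matching the Hugetlb field.
    # Holds only when a single huge page size is configured on the system.
    (( total * size_kb == hugetlb_kb )) || echo "Hugetlb mismatch"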
23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.775
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.776
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5690476 kB' 'MemAvailable: 10487604 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 415396 kB' 'Inactive: 4637736 kB' 'Active(anon): 127844 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 145404 kB' 'Mapped: 58124 kB' 'Shmem: 2592 kB' 'KReclaimable: 193392 kB' 'Slab: 274340 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80948 kB' 'KernelStack: 5024 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 389968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:32.776
23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: each key from MemTotal through Unaccepted is read and skipped via 'continue'; the captured log is truncated mid-scan at this point, before HugePages_Total is matched]
var val _ 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5690476 kB' 'MemUsed: 6555848 kB' 'SwapCached: 0 kB' 'Active: 415044 kB' 'Inactive: 4637736 kB' 'Active(anon): 127492 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 4936628 kB' 'Mapped: 58124 kB' 'AnonPages: 145048 kB' 'Shmem: 2592 kB' 'KernelStack: 5008 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193392 kB' 'Slab: 274340 kB' 'SReclaimable: 193392 kB' 'SUnreclaim: 80948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- 
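The xtrace above is setup/common.sh's get_meminfo walking a meminfo file key by key until the requested field matches (HugePages_Total system-wide, then HugePages_Surp against node 0's sysfs file). A minimal, self-contained sketch of that pattern follows; the body is reconstructed from the trace, not copied from the SPDK tree, so treat the structure as illustrative:

    #!/usr/bin/env bash
    # Minimal sketch of the get_meminfo pattern the trace shows; reconstructed
    # from the xtrace, not copied from SPDK, so treat it as illustrative.
    shopt -s extglob  # for the +([0-9]) pattern used when stripping "Node N "

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem

        # When a node id is given and sysfs exposes a per-node meminfo,
        # read that instead of the system-wide file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <N> " prefix; strip it so both file
        # formats parse identically below.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Field: value [kB]" lines until the requested field matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total      # system-wide, e.g. 512 during custom_alloc
    get_meminfo HugePages_Surp 0     # node 0, e.g. 0 in the trace above

The per-field comparisons and continue statements that dominate this log are exactly that while loop running under set -x, one iteration per meminfo line.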
00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.777 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[trace condensed: setup/common.sh@31-32 repeats the read/compare/continue cycle for MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free — none matches HugePages_Surp]
00:04:32.778 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.779 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.779 23:48:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:32.779 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:32.779 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:32.779 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:32.779 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:32.779 node0=512 expecting 512
00:04:32.779 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:32.779 23:48:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:32.779
00:04:32.779 real 0m0.640s
00:04:32.779 user 0m0.240s
00:04:32.779 sys 0m0.440s
00:04:32.779 23:48:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:32.779 23:48:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:32.779 ************************************
00:04:32.779 END TEST custom_alloc
00:04:32.779 ************************************
00:04:33.067 23:48:28 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:33.067 23:48:28 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:33.067 23:48:28 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:33.067 23:48:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:33.067 ************************************
00:04:33.067 START TEST no_shrink_alloc
00:04:33.067 ************************************
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:33.067 23:48:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:33.327 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:33.327 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
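The parameters just traced (size 2097152, node id 0, nr_hugepages=1024) imply the sizing rule: the requested size divided by the default hugepage size gives the page count, which is then assigned to each requested NUMA node. A sketch of that logic follows; the assumption that size is in kB (2097152 kB / 2048 kB per page = 1024, matching the trace) and the helper body are reconstructions, not the SPDK source:

    #!/usr/bin/env bash
    # Sketch of the sizing logic at setup/hugepages.sh@49-73 in the trace.
    # Assumption: size is passed in kB; names mirror the trace but the body
    # is a reconstruction.
    nodes_test=()

    get_test_nr_hugepages() {
        local size=$1
        shift
        local node_ids=("$@")    # remaining args are NUMA node ids, '0' here

        # Default hugepage size in kB, from the "Hugepagesize: 2048 kB" line.
        local default_hugepages
        default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

        ((size >= default_hugepages)) || return 1
        nr_hugepages=$((size / default_hugepages))

        # Distribute the request over the requested nodes.
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[$node]=$nr_hugepages
        done
    }

    get_test_nr_hugepages 2097152 0
    echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # 1024 and 1024

After sizing, the trace runs scripts/setup.sh to rebind devices and apply the hugepage request, then verify_nr_hugepages re-reads /proc/meminfo to confirm the kernel honored it.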
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4644936 kB' 'MemAvailable: 9442056 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 414180 kB' 'Inactive: 4637736 kB' 'Active(anon): 126628 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 143960 kB' 'Mapped: 57252 kB' 'Shmem: 2592 kB' 'KReclaimable: 193384 kB' 'Slab: 274272 kB' 'SReclaimable: 193384 kB' 'SUnreclaim: 80888 kB' 'KernelStack: 4976 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 378116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20024 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.591 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace condensed: setup/common.sh@31-32 repeats the read/compare/continue cycle for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted — none matches AnonHugePages]
00:04:33.592 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.592 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:33.592 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:33.592 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:33.592 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:33.592 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.592 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:33.592 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:33.592 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:33.593 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.593 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.593 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.593 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.593 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.593 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:33.593 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:33.593 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4644936 kB' 'MemAvailable: 9442056 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 413940 kB' 'Inactive: 4637736 kB' 'Active(anon): 126388 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 143944 kB' 'Mapped: 57248 kB' 'Shmem: 2592 kB' 'KReclaimable: 193384 kB' 'Slab: 274272 kB' 'SReclaimable: 193384 kB' 'SUnreclaim: 80888 kB' 'KernelStack: 4944 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 378116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20024 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
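With anon collected and surp about to be read, the pass converges on the pool invariant the custom_alloc run already checked at setup/hugepages.sh@110: HugePages_Total must equal nr_hugepages + surp + resv. A sketch of that verification step, reusing the get_meminfo sketch above (function name and message text are illustrative):

    #!/usr/bin/env bash
    # Sketch of the pool check traced around setup/hugepages.sh@110 and the
    # verify_nr_hugepages pass; relies on the get_meminfo sketch earlier.
    verify_pool() {
        local nr_hugepages=$1
        local total surp resv

        total=$(get_meminfo HugePages_Total)
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)

        # Invariant from the trace: configured pages plus surplus and
        # reserved pages must account for the whole pool.
        if ((total != nr_hugepages + surp + resv)); then
            echo "pool mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
            return 1
        fi
        echo "HugePages_Total=$total matches nr_hugepages+surp+resv"
    }

    verify_pool 1024   # 1024 == 1024 + 0 + 0 on this box

The snapshot above already shows the expected state: HugePages_Total: 1024, HugePages_Free: 1024, with zero reserved and zero surplus pages.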
00:04:33.593 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.593 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace condensed: setup/common.sh@31-32 repeats the read/compare/continue cycle for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp and CommitLimit — none matches HugePages_Surp; the captured log is truncated mid-scan]
00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31
-- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- 
# local node= 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.594 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4644936 kB' 'MemAvailable: 9442056 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 413728 kB' 'Inactive: 4637736 kB' 'Active(anon): 126176 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 144 kB' 'AnonPages: 143752 kB' 'Mapped: 57248 kB' 'Shmem: 2592 kB' 'KReclaimable: 193384 kB' 'Slab: 274268 kB' 'SReclaimable: 193384 kB' 'SUnreclaim: 80884 kB' 'KernelStack: 4960 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 378116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20024 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.595 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.595 23:48:29 
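
[annotation] Every IFS=': ' / read -r var val _ / continue triplet in the condensed spans is one iteration of get_meminfo in setup/common.sh scanning a meminfo dump for a single field. A minimal sketch of that function, reconstructed from the xtrace entries visible in this log (the @NN comments match the common.sh line numbers shown in the trace); it mirrors what the trace does rather than claiming to be the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob    # the "Node +([0-9]) " strip below needs extglob

    get_meminfo() {
        local get=$1 node=$2                    # @17, @18
        local var val                           # @19
        local mem_f mem                         # @20
        mem_f=/proc/meminfo                     # @22
        # With a node argument, read that node's meminfo instead (@23, @24)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"               # @28
        mem=("${mem[@]#Node +([0-9]) }")        # @29: drop "Node N " prefixes
        # @31-@33: scan for the requested field and print its value
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")     # @16
        return 1
    }

[annotation] Called as get_meminfo HugePages_Surp, this prints the value and returns 0, which is exactly the echo 0 / return 0 pair in the trace above.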
[xtrace condensed: the same read/compare loop walks the dump above from MemTotal onward, one "continue" per field, until HugePages_Rsvd matches]
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
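
[annotation] The hugepages.sh@99-@110 entries above are the accounting step of the no_shrink_alloc test: sample the surplus and reserved counts, report them, and assert that HugePages_Total equals nr_hugepages plus surplus plus reserved. A compact sketch of that check under the values from this run; it reuses the get_meminfo sketch above, and since the log does not show how anon_hugepages is sampled, that line is an assumption:

    #!/usr/bin/env bash
    # Reuses the get_meminfo sketch above; nr_hugepages=1024 as configured here.
    nr_hugepages=1024

    surp=$(get_meminfo HugePages_Surp)    # @99: 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # @100: 0 in this run

    echo "nr_hugepages=$nr_hugepages"     # @102-@105 produce the reports above
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$(get_meminfo AnonHugePages)"   # assumed sampling

    # @107/@110: every allocated page must be accounted for; with surp=0 and
    # resv=0 this collapses to HugePages_Total == nr_hugepages (@109).
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))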
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:33.597 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4644936 kB' 'MemAvailable: 9442056 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 413732 kB' 'Inactive: 4637736 kB' 'Active(anon): 126180 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 148 kB' 'AnonPages: 144020 kB' 'Mapped: 57248 kB' 'Shmem: 2592 kB' 'KReclaimable: 193384 kB' 'Slab: 274268 kB' 'SReclaimable: 193384 kB' 'SUnreclaim: 80884 kB' 'KernelStack: 4960 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 378116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20024 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the read/compare loop walks this dump field by field, one "continue" per field, until HugePages_Total matches]
00:04:33.598 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:33.599 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4644936 kB' 'MemUsed: 7601388 kB' 'SwapCached: 0 kB' 'Active: 413692 kB' 'Inactive: 4637736 kB' 'Active(anon): 126140 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 0 kB' 'Writeback: 148 kB' 'FilePages: 4936628 kB' 'Mapped: 57248 kB' 'AnonPages: 143964 kB' 'Shmem: 2592 kB' 'KernelStack: 4944 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193384 kB' 'Slab: 274268 kB' 'SReclaimable: 193384 kB' 'SUnreclaim: 80884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
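
[annotation] get_nodes (hugepages.sh@27-@33 above) discovers NUMA nodes by globbing /sys/devices/system/node, and the following get_meminfo HugePages_Surp 0 call shows the node argument redirecting the read to node0's meminfo file. A sketch of the enumeration, reconstructed from the trace; the trace only shows the already-expanded assignment nodes_sys[0]=1024, so sourcing that value from a per-node get_meminfo lookup is an assumption:

    #!/usr/bin/env bash
    shopt -s extglob               # node+([0-9]) is an extglob pattern
    declare -a nodes_sys=()

    # Reconstructed from hugepages.sh@27-@33; reuses the get_meminfo sketch.
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do    # @29
            # @30 shows nodes_sys[0]=1024; a per-node HugePages_Total
            # lookup is assumed as the source of that value
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}   # @32: one node on this VM
        (( no_nodes > 0 ))          # @33: fail when no NUMA nodes were found
    }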
[... 00:04:33.599-00:04:33.600 setup/common.sh@31-32: node0 field scan, MemTotal through HugePages_Free; none match HugePages_Surp, each skipped via 'continue' ...]
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:33.600 node0=1024 expecting 1024
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:33.600 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:34.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:34.173 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.173 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
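The per-node bookkeeping that just ran (setup/hugepages.sh@27-33 and @115-130 in the trace) reduces to the sketch below. The array and variable names follow the trace; packaging them into one runnable script, and reading the count from the hugepages-2048kB sysfs node (which assumes 2 MB pages, as the Hugepagesize lines here report), are editorial assumptions.

#!/usr/bin/env bash
# Sketch of the per-node hugepage accounting seen in the trace above.
shopt -s extglob nullglob

declare -a nodes_sys nodes_test

# get_nodes: record the kernel's per-node 2M hugepage count.
for node in /sys/devices/system/node/node+([0-9]); do
	nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1

# What the test expects each node to hold (1024 pages on this runner).
nodes_test[0]=1024

# Compare expectation against the kernel's view, node by node.
for node in "${!nodes_test[@]}"; do
	echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
	[[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
done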
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:34.173 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4647896 kB' 'MemAvailable: 9445016 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 414484 kB' 'Inactive: 4637736 kB' 'Active(anon): 126932 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 72 kB' 'Writeback: 0 kB' 'AnonPages: 144492 kB' 'Mapped: 57292 kB' 'Shmem: 2592 kB' 'KReclaimable: 193384 kB' 'Slab: 274264 kB' 'SReclaimable: 193384 kB' 'SUnreclaim: 80880 kB' 'KernelStack: 4964 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 378116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
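Before probing AnonHugePages, verify_nr_hugepages checked the transparent-hugepage mode: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at setup/hugepages.sh@96 above matches the kernel's sysfs report, where the active mode sits in brackets. A minimal sketch of that probe, assuming the standard sysfs knob:

#!/usr/bin/env bash
# Sketch of the THP-mode probe behind the hugepages.sh@96 test above.
# /sys/kernel/mm/transparent_hugepage/enabled reads e.g.
# "always [madvise] never", with the active mode in brackets.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
	echo "THP enabled ($thp); AnonHugePages may be nonzero"
else
	echo "THP disabled"
fi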
[... 00:04:34.173-00:04:34.175 setup/common.sh@31-32: field scan, MemTotal through HardwareCorrupted; none match AnonHugePages, each skipped via 'continue' ...]
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:34.175 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4648148 kB' 'MemAvailable: 9445268 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 413952 kB' 'Inactive: 4637736 kB' 'Active(anon): 126400 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 72 kB' 'Writeback: 0 kB' 'AnonPages: 143956 kB' 'Mapped: 57252 kB' 'Shmem: 2592 kB' 'KReclaimable: 193384 kB' 'Slab: 274264 kB' 'SReclaimable: 193384 kB' 'SUnreclaim: 80880 kB' 'KernelStack: 4916 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 378116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
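The snapshots are internally consistent: Hugetlb equals HugePages_Total times Hugepagesize, and 1024 pages at 2048 kB each is exactly the 2097152 kB both /proc/meminfo snapshots report. As a quick bash arithmetic check (editorial, not part of the test scripts):

#!/usr/bin/env bash
# Consistency check over the snapshot values above.
total=1024       # HugePages_Total (pages)
pagesize=2048    # Hugepagesize (kB)
hugetlb=2097152  # Hugetlb (kB)
(( total * pagesize == hugetlb )) && echo "1024 * 2048 kB = 2097152 kB: consistent"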
[... 00:04:34.175-00:04:34.177 setup/common.sh@31-32: field scan, MemTotal through HugePages_Rsvd; none match HugePages_Surp, each skipped via 'continue' ...]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4647896 kB' 'MemAvailable: 9445016 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 413760 kB' 'Inactive: 4637736 kB' 'Active(anon): 126208 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 72 kB' 'Writeback: 0 kB' 'AnonPages: 143764 kB' 'Mapped: 57248 kB' 'Shmem: 2592 kB' 'KReclaimable: 193384 kB' 'Slab: 274264 kB' 'SReclaimable: 193384 kB' 'SUnreclaim: 80880 kB' 'KernelStack: 4960 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 378116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
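The scan that follows walks that captured snapshot one "Key: value" entry at a time: IFS=': ' makes read -r var val _ split each line, every non-matching key hits the continue branch (the run of [[ ... == HugePages_Rsvd ]] checks below), and the first match echoes its value and returns. A minimal sketch of that extraction idiom, reading /proc/meminfo directly instead of through the script's mapfile'd array (get_field is an illustrative name, not common.sh's actual interface):

  # Sketch: pull a single field out of /proc/meminfo the way the traced loop does.
  get_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # wrong key: keep scanning
          echo "$val"                        # value only, e.g. 0 or 1024
          return 0
      done < /proc/meminfo
      return 1                               # key not present
  }
  resv=$(get_field HugePages_Rsvd)           # prints 0 on this box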
00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:34.177 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical cycle for MemFree through HugePages_Free, every field skipped ...]
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:34.179 nr_hugepages=1024
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:34.179 resv_hugepages=0
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:34.179 surplus_hugepages=0
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:34.179 anon_hugepages=0
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
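Both probes having returned 0, the test cross-checks the pool arithmetic before trusting per-node numbers: the configured page count must equal what the kernel reports once surplus and reserved pages are added back in. Restated compactly on top of the sketch above (this mirrors the hugepages.sh@107-110 checks in spirit, not verbatim):

  # Sketch: sanity-check the global hugepage pool before verifying per node.
  nr_hugepages=1024                          # what the test configured
  surp=$(get_field HugePages_Surp)           # 0 in this run
  resv=$(get_field HugePages_Rsvd)           # 0 in this run
  total=$(get_field HugePages_Total)         # 1024 in this run
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool out of sync' >&2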
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4648484 kB' 'MemAvailable: 9445604 kB' 'Buffers: 35320 kB' 'Cached: 4901308 kB' 'SwapCached: 0 kB' 'Active: 414016 kB' 'Inactive: 4637736 kB' 'Active(anon): 126464 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 72 kB' 'Writeback: 0 kB' 'AnonPages: 144024 kB' 'Mapped: 57248 kB' 'Shmem: 2592 kB' 'KReclaimable: 193384 kB' 'Slab: 274264 kB' 'SReclaimable: 193384 kB' 'SUnreclaim: 80880 kB' 'KernelStack: 4960 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 378116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB'
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:34.179 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical cycle for MemFree through Unaccepted, every field skipped ...]
00:04:34.180 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4648484 kB' 'MemUsed: 7597840 kB' 'SwapCached: 0 kB' 'Active: 414016 kB' 'Inactive: 4637736 kB' 'Active(anon): 126464 kB' 'Inactive(anon): 0 kB' 'Active(file): 287552 kB' 'Inactive(file): 4637736 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 72 kB' 'Writeback: 0 kB' 'FilePages: 4936628 kB' 'Mapped: 57248 kB' 'AnonPages: 144020 kB' 'Shmem: 2592 kB' 'KernelStack: 4960 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193384 kB' 'Slab: 274264 kB' 'SReclaimable: 193384 kB' 'SUnreclaim: 80880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
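The per-node pass is the same scan retargeted at node0's sysfs meminfo; the one wrinkle, visible in the mem=(...) trace line just above, is that every sysfs line carries a "Node 0" prefix that must be stripped before keys can match. A sketch of that variant (illustrative, not common.sh verbatim; extglob supplies the +([0-9]) pattern):

  # Sketch: per-node variant -- same scan, different file, "Node <n> " prefix stripped.
  shopt -s extglob                           # +([0-9]) below is an extglob pattern
  get_node_field() {
      local node=$1 get=$2 var val _ line
      local -a mem
      mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
      mem=("${mem[@]#Node +([0-9]) }")       # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_node_field 0 HugePages_Surp            # prints 0 in this run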
setup/common.sh@31 -- # read -r var val _ 00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181
[... the same '[[ $var == HugePages_Surp ]] / continue / IFS=': ' / read -r var val _' iteration repeats verbatim for each remaining /proc/meminfo field: Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages ...]
23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- #
[[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.182 node0=1024 expecting 1024 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:34.182 00:04:34.182 real 0m1.251s 00:04:34.182 user 0m0.484s 00:04:34.182 sys 0m0.850s 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.182 ************************************ 00:04:34.182 END TEST no_shrink_alloc 00:04:34.182 23:48:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:34.182 ************************************ 00:04:34.182 23:48:29 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:34.182 23:48:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:34.182 23:48:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:34.182 23:48:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.182 23:48:29 
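The loop the xtrace above just finished is setup/common.sh reading a single field out of /proc/meminfo: split each line on ': ', continue past every field name until it equals HugePages_Surp, then echo the value (0 on this VM, hence the 'echo 0' / 'return 0' pair). The same pattern as a standalone sketch; get_meminfo_field is an illustrative name, not the script's actual helper:

get_meminfo_field() {
    local want=$1 var val _
    # IFS=': ' splits 'HugePages_Surp:      0' into var=HugePages_Surp val=0
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}
get_meminfo_field HugePages_Surp    # -> 0 here

Scanning with read instead of grep/awk avoids a fork per queried field, which presumably is why the harness tolerates this much xtrace noise per lookup.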
setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.182 23:48:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.182 23:48:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.182 23:48:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:34.182 23:48:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:34.182 00:04:34.182 real 0m5.509s 00:04:34.182 user 0m1.930s 00:04:34.182 sys 0m3.738s 00:04:34.182 ************************************ 00:04:34.182 END TEST hugepages 00:04:34.182 ************************************ 00:04:34.182 23:48:29 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.182 23:48:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.182 23:48:30 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:34.182 23:48:30 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.182 23:48:30 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.182 23:48:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.182 ************************************ 00:04:34.182 START TEST driver 00:04:34.182 ************************************ 00:04:34.182 23:48:30 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:34.441 * Looking for test storage... 00:04:34.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:34.441 23:48:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:34.442 23:48:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.442 23:48:30 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.010 23:48:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:35.010 23:48:30 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.010 23:48:30 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.010 23:48:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:35.010 ************************************ 00:04:35.010 START TEST guess_driver 00:04:35.010 ************************************ 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:35.010 23:48:30 
setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio.ko.zst 00:04:35.010 insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio_pci_generic.ko.zst == *\.\k\o* ]] 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:35.010 Looking for driver=uio_pci_generic 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.010 23:48:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:35.269 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:35.269 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:35.269 23:48:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.269 23:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.269 23:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:35.269 23:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.836 23:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:35.836 23:48:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:35.836 23:48:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:35.836 23:48:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:36.403 00:04:36.403 real 0m1.567s 00:04:36.403 user 0m0.375s 00:04:36.403 sys 0m1.242s 00:04:36.403 23:48:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.403 23:48:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:36.403 ************************************ 00:04:36.403 END TEST guess_driver 00:04:36.403 ************************************ 00:04:36.403 00:04:36.403 real 0m2.158s 00:04:36.403 user 0m0.562s 00:04:36.403 sys 0m1.705s 00:04:36.403 23:48:32 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.403 23:48:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:36.403 
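Condensed, the guess_driver xtrace above encodes this decision: vfio is only pickable when IOMMU groups exist (or the unsafe no-IOMMU toggle reads Y); neither holds on this VM, so pick_driver returns 1 from the vfio branch and falls through to uio_pci_generic, accepting it because modprobe --show-depends resolves the module to real .ko files. A hedged reconstruction, not the verbatim script (note the logged script declares 'local iommu_grups' but assigns 'iommu_groups', an upstream typo this sketch does not reproduce):

shopt -s nullglob    # as in the harness: an empty glob expands to nothing
pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*) unsafe_vfio
    unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    # vfio-pci needs a working IOMMU, or the explicit unsafe no-IOMMU opt-in
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return
    fi
    # fall back to uio_pci_generic if modprobe can resolve it to module files
    if [[ $(modprobe --show-depends uio_pci_generic 2>/dev/null) == *.ko* ]]; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

The *.ko* glob also matches the compressed uio_pci_generic.ko.zst paths shown in the log.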
************************************ 00:04:36.403 END TEST driver 00:04:36.403 ************************************ 00:04:36.403 23:48:32 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:36.403 23:48:32 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.403 23:48:32 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.403 23:48:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:36.403 ************************************ 00:04:36.403 START TEST devices 00:04:36.403 ************************************ 00:04:36.403 23:48:32 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:36.662 * Looking for test storage... 00:04:36.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:36.662 23:48:32 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:36.662 23:48:32 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:36.662 23:48:32 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.662 23:48:32 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:36.921 23:48:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:36.921 23:48:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:36.921 23:48:32 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:36.921 23:48:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:36.921 23:48:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:36.921 23:48:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:36.921 23:48:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:36.921 23:48:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:36.921 23:48:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:36.921 23:48:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:36.921 23:48:32 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:37.180 No valid GPT data, bailing 00:04:37.180 23:48:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:37.180 23:48:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 
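Earlier in this test, get_zoned_devs built its exclusion map with a one-attribute filter: a namespace counts as zoned exactly when its queue/zoned sysfs file says something other than 'none'. Sketch of that filter:

declare -A zoned_devs
for nvme in /sys/block/nvme*; do
    # 'none' = conventional namespace; 'host-aware'/'host-managed' = zoned
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[${nvme##*/}]=1
    fi
done

Zoned namespaces are excluded because the mount tests that follow need an ordinary random-write block device for mkfs.ext4.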
00:04:37.180 23:48:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:37.180 23:48:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:37.180 23:48:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:37.180 23:48:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:37.180 23:48:32 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:37.180 23:48:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:37.180 23:48:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:37.180 23:48:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:37.180 23:48:32 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:37.180 23:48:32 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:37.180 23:48:32 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:37.180 23:48:32 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.180 23:48:32 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.180 23:48:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.180 ************************************ 00:04:37.180 START TEST nvme_mount 00:04:37.180 ************************************ 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:37.180 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:38.117 
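Putting devices.sh@198-206 together: a namespace becomes the test disk only if nothing claims it and it is large enough. The real probe runs scripts/spdk-gpt.py first (hence 'No valid GPT data, bailing') and then blkid; the sketch below keeps just the blkid check plus the size arithmetic (the 5368709120 echoed above is this 5 GiB namespace, tested against a 3 GiB floor):

min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472 bytes, as in the log
blocks=()
declare -A blocks_to_pci
for block in /sys/block/nvme*n*; do
    dev=${block##*/}
    # an existing partition table means the disk is already in use
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
    # sysfs 'size' counts 512-byte sectors: 10485760 * 512 = 5368709120 here
    bytes=$(( $(<"$block/size") * 512 ))
    (( bytes >= min_disk_size )) || continue
    blocks+=("$dev")
    # one way to recover the PCI address behind a namespace; assumes
    # PCI-attached NVMe, so device/device resolves to the PCI function
    blocks_to_pci[$dev]=$(basename "$(readlink -f "$block/device/device")")
done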
Creating new GPT entries in memory. 00:04:38.117 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:38.117 other utilities. 00:04:38.117 23:48:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:38.117 23:48:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.117 23:48:33 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:38.117 23:48:33 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.117 23:48:33 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:39.495 Creating new GPT entries in memory. 00:04:39.495 The operation has completed successfully. 00:04:39.495 23:48:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:39.495 23:48:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.495 23:48:34 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 63206 00:04:39.495 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.495 23:48:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:39.495 23:48:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.495 23:48:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:39.495 23:48:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:39.495 23:48:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- 
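The sector numbers in that sgdisk call are derived, not magic: setup/common.sh divides its 1 GiB size constant by 4096 to get a per-partition sector count, starts the first partition at sector 2048, and ends it at part_start + size - 1. Reproducing the arithmetic:

size=$((1073741824 / 4096))            # 262144 sectors per partition
part_start=2048                        # first usable sector after the GPT header
part_end=$((part_start + size - 1))    # 264191
sgdisk /dev/nvme0n1 --zap-all
# the disk is locked during repartitioning so the uevent watcher
# (sync_dev_uevents.sh) sees a consistent sequence of partition events
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}

The dm_mount test later chains a second partition the same way, which is where --new=2:264192:526335 comes from: its part_start resumes at the previous part_end + 1.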
# /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:39.495 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:40.086 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.086 23:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:40.356 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:40.356 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:40.356 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:40.356 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:40.356 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:40.356 23:48:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:40.356 23:48:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.356 23:48:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:40.356 23:48:36 setup.sh.devices.nvme_mount -- 
setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:40.356 23:48:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.356 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:40.356 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:40.356 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:40.356 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.356 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:40.614 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:40.615 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.615 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:40.615 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.874 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:40.874 23:48:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.441 23:48:37 
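verify() never inspects /proc/mounts directly: it re-runs 'setup.sh config' with PCI_ALLOWED pinned to the device under test and asserts that setup.sh itself refuses to rebind it, printing the 'Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev' line that the backslash-escaped glob above matches. The shape of that check as a sketch (verify_sketch is an illustrative name):

verify_sketch() {
    local dev=$1 mounts=$2 found=0 pci _ status
    while read -r pci _ _ status; do
        [[ $pci == "$dev" ]] || continue
        # setup.sh declines to touch a device with active users and says why;
        # that refusal message is exactly what the test asserts on
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED=$dev /home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    (( found == 1 ))
}
verify_sketch 0000:00:10.0 nvme0n1:nvme0n1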
setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.441 23:48:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.700 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:41.700 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:41.700 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:41.700 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.700 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:41.700 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.700 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:41.700 23:48:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.267 23:48:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.267 23:48:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:42.267 23:48:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:42.267 23:48:38 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:42.267 23:48:38 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.267 23:48:38 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.267 23:48:38 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.267 23:48:38 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.267 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.267 00:04:42.267 real 0m5.195s 00:04:42.267 user 0m0.483s 00:04:42.267 sys 0m2.487s 00:04:42.267 23:48:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.267 23:48:38 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:42.267 ************************************ 00:04:42.267 END TEST nvme_mount 00:04:42.267 ************************************ 00:04:42.267 23:48:38 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:42.267 23:48:38 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.267 23:48:38 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.267 23:48:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.267 ************************************ 00:04:42.267 START TEST dm_mount 00:04:42.267 ************************************ 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.267 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.268 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:42.268 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:42.268 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:43.645 Creating new GPT entries in memory. 00:04:43.645 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:43.645 other utilities. 00:04:43.645 23:48:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:43.645 23:48:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.645 23:48:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:43.645 23:48:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:43.645 23:48:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:44.581 Creating new GPT entries in memory. 00:04:44.581 The operation has completed successfully. 00:04:44.581 23:48:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:44.581 23:48:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.581 23:48:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.581 23:48:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.581 23:48:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:45.517 The operation has completed successfully. 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 63626 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:45.517 
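dm_mount then binds the two fresh partitions into one device-mapper node. The table fed to dmsetup create is not echoed in this log, so the linear concatenation below is an assumption (it is the natural way to join nvme0n1p1 and nvme0n1p2); the readlink and holders checks are straight from the xtrace:

s1=$(blockdev --getsz /dev/nvme0n1p1)    # partition sizes in 512-byte sectors
s2=$(blockdev --getsz /dev/nvme0n1p2)
# assumed table: p2 laid out immediately after p1 as one linear device
dmsetup create nvme_dm_test <<EOF
0 $s1 linear /dev/nvme0n1p1 0
$s1 $s2 linear /dev/nvme0n1p2 0
EOF
dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")    # dm-0 in the log
# both partitions must now list the dm node as a holder
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]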
23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.517 23:48:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:45.776 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:45.776 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:45.776 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:45.776 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.776 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:45.776 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.776 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:45.776 23:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 
holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.344 23:48:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.603 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:46.603 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:46.603 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.603 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.603 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:46.603 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.868 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:46.868 23:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:47.450 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:47.450 00:04:47.450 real 0m5.077s 
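Teardown runs strictly in reverse: unmount, remove the dm node, then wipe signatures, since the partitions cannot be wiped while dm-0 still holds them. (The bytes wipefs reports are the signatures themselves: 53 ef at offset 0x438 is the little-endian ext4 superblock magic 0xEF53, 45 46 49 20 50 41 52 54 is ASCII 'EFI PART' for the primary and backup GPT headers, and 55 aa at 0x1fe is the protective MBR boot signature.) Condensed from the cleanup_dm calls above:

mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount && \
    umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
    [[ -b $part ]] && wipefs --all "$part"    # clears the ext4 magic shown above
done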
00:04:47.450 user 0m0.305s 00:04:47.450 sys 0m1.723s 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.450 23:48:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:47.450 ************************************ 00:04:47.450 END TEST dm_mount 00:04:47.450 ************************************ 00:04:47.450 23:48:43 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:47.450 23:48:43 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:47.450 23:48:43 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.450 23:48:43 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.450 23:48:43 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:47.450 23:48:43 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.450 23:48:43 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.713 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:47.713 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:47.713 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:47.713 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:47.713 23:48:43 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:47.713 23:48:43 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:47.713 23:48:43 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.713 23:48:43 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.713 23:48:43 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.713 23:48:43 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.713 23:48:43 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:47.713 ************************************ 00:04:47.713 END TEST devices 00:04:47.713 ************************************ 00:04:47.713 00:04:47.713 real 0m11.266s 00:04:47.713 user 0m1.089s 00:04:47.713 sys 0m4.667s 00:04:47.713 23:48:43 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.713 23:48:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.713 00:04:47.713 real 0m23.301s 00:04:47.713 user 0m4.887s 00:04:47.713 sys 0m13.335s 00:04:47.713 23:48:43 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.713 23:48:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.713 ************************************ 00:04:47.713 END TEST setup.sh 00:04:47.713 ************************************ 00:04:47.970 23:48:43 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:48.229 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:48.229 Hugepages 00:04:48.229 node hugesize free / total 00:04:48.229 node0 1048576kB 0 / 0 00:04:48.229 node0 2048kB 2048 / 2048 00:04:48.229 00:04:48.229 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:48.487 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:48.487 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:48.487 23:48:44 -- spdk/autotest.sh@130 -- # uname -s 00:04:48.487 23:48:44 -- spdk/autotest.sh@130 -- # [[ Linux 
== Linux ]] 00:04:48.487 23:48:44 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:48.487 23:48:44 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:49.003 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.570 23:48:45 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:50.508 23:48:46 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:50.508 23:48:46 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:50.508 23:48:46 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:50.508 23:48:46 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:50.508 23:48:46 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:50.508 23:48:46 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:50.508 23:48:46 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.508 23:48:46 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:50.508 23:48:46 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:50.767 23:48:46 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:50.767 23:48:46 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:04:50.767 23:48:46 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:51.025 Waiting for block devices as requested 00:04:51.025 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:51.025 23:48:46 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:51.025 23:48:46 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:51.025 23:48:46 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:51.025 23:48:46 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:51.025 23:48:46 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:04:51.025 23:48:46 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:04:51.026 23:48:46 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:04:51.026 23:48:46 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:51.026 23:48:46 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:51.026 23:48:46 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:51.026 23:48:46 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:51.026 23:48:46 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:51.026 23:48:46 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:51.284 23:48:46 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:51.284 23:48:46 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:51.284 23:48:46 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:51.284 23:48:46 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:51.284 23:48:46 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:51.284 23:48:46 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:51.284 23:48:46 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:51.284 23:48:46 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 
]] 00:04:51.284 23:48:46 -- common/autotest_common.sh@1557 -- # continue 00:04:51.284 23:48:46 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:51.284 23:48:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:51.284 23:48:46 -- common/autotest_common.sh@10 -- # set +x 00:04:51.284 23:48:46 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:51.284 23:48:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.284 23:48:46 -- common/autotest_common.sh@10 -- # set +x 00:04:51.284 23:48:46 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.543 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:51.802 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.371 23:48:48 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:52.371 23:48:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.371 23:48:48 -- common/autotest_common.sh@10 -- # set +x 00:04:52.371 23:48:48 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:52.371 23:48:48 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:52.371 23:48:48 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:52.371 23:48:48 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:52.371 23:48:48 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:52.371 23:48:48 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:52.371 23:48:48 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:52.371 23:48:48 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:52.371 23:48:48 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.371 23:48:48 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:52.371 23:48:48 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:52.371 23:48:48 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:52.371 23:48:48 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:04:52.371 23:48:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:52.371 23:48:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:52.371 23:48:48 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:52.371 23:48:48 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:52.371 23:48:48 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:52.371 23:48:48 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:52.371 23:48:48 -- common/autotest_common.sh@1593 -- # return 0 00:04:52.371 23:48:48 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:04:52.371 23:48:48 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:52.371 23:48:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.371 23:48:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.371 23:48:48 -- common/autotest_common.sh@10 -- # set +x 00:04:52.371 ************************************ 00:04:52.371 START TEST unittest 00:04:52.371 ************************************ 00:04:52.371 23:48:48 unittest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:52.371 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:52.371 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:52.371 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:52.371 
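Recapping the pre-cleanup xtrace that ran just before START TEST unittest: it rests on two small lookups plus one bitmask test. gen_nvme.sh emits an SPDK JSON config and jq pulls every controller's traddr; sysfs maps a BDF back to its /dev/nvmeX node; and the OACS field from nvme id-ctrl decides whether namespace management (bit 3) is even supported. Sketched together, with the helper bodies reconstructed from the logged commands:

rootdir=/home/vagrant/spdk_repo/spdk
# every NVMe PCI address SPDK's config generator knows about
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

get_nvme_ctrlr_from_bdf() {
    local bdf=$1 path
    # each controller's resolved sysfs path embeds its PCI address
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    [[ -n $path ]] && basename "$path"    # -> nvme0 for 0000:00:10.0
}

ctrlr=/dev/$(get_nvme_ctrlr_from_bdf 0000:00:10.0)
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)    # ' 0x12a' in this log
oacs_ns_manage=$((oacs & 0x8))    # bit 3 = Namespace Management: 0x12a & 0x8 = 8
if (( oacs_ns_manage != 0 )); then
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    # zero unallocated capacity means there is nothing to revert,
    # which is why the loop above simply continues
    (( unvmcap == 0 )) && echo "skipping namespace revert for $ctrlr"
fi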
+++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:52.371 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:04:52.371 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:52.371 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:52.371 ++ rpc_py=rpc_cmd 00:04:52.371 ++ set -e 00:04:52.371 ++ shopt -s nullglob 00:04:52.371 ++ shopt -s extglob 00:04:52.371 ++ shopt -s inherit_errexit 00:04:52.371 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:04:52.371 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:52.371 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:52.371 +++ CONFIG_WPDK_DIR= 00:04:52.371 +++ CONFIG_ASAN=y 00:04:52.371 +++ CONFIG_VBDEV_COMPRESS=n 00:04:52.371 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:52.371 +++ CONFIG_USDT=n 00:04:52.371 +++ CONFIG_CUSTOMOCF=n 00:04:52.371 +++ CONFIG_PREFIX=/usr/local 00:04:52.371 +++ CONFIG_RBD=n 00:04:52.371 +++ CONFIG_LIBDIR= 00:04:52.371 +++ CONFIG_IDXD=y 00:04:52.371 +++ CONFIG_NVME_CUSE=y 00:04:52.371 +++ CONFIG_SMA=n 00:04:52.371 +++ CONFIG_VTUNE=n 00:04:52.371 +++ CONFIG_TSAN=n 00:04:52.371 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:52.371 +++ CONFIG_VFIO_USER_DIR= 00:04:52.371 +++ CONFIG_PGO_CAPTURE=n 00:04:52.371 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:04:52.372 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:52.372 +++ CONFIG_LTO=n 00:04:52.372 +++ CONFIG_ISCSI_INITIATOR=y 00:04:52.372 +++ CONFIG_CET=n 00:04:52.372 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:52.372 +++ CONFIG_OCF_PATH= 00:04:52.372 +++ CONFIG_RDMA_SET_TOS=y 00:04:52.372 +++ CONFIG_HAVE_ARC4RANDOM=y 00:04:52.372 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:52.372 +++ CONFIG_UBLK=y 00:04:52.372 +++ CONFIG_ISAL_CRYPTO=y 00:04:52.372 +++ CONFIG_OPENSSL_PATH= 00:04:52.372 +++ CONFIG_OCF=n 00:04:52.372 +++ CONFIG_FUSE=n 00:04:52.372 +++ CONFIG_VTUNE_DIR= 00:04:52.372 +++ CONFIG_FUZZER_LIB= 00:04:52.372 +++ CONFIG_FUZZER=n 00:04:52.372 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:52.372 +++ CONFIG_CRYPTO=n 00:04:52.372 +++ CONFIG_PGO_USE=n 00:04:52.372 +++ CONFIG_VHOST=y 00:04:52.372 +++ CONFIG_DAOS=n 00:04:52.372 +++ CONFIG_DPDK_INC_DIR= 00:04:52.372 +++ CONFIG_DAOS_DIR= 00:04:52.372 +++ CONFIG_UNIT_TESTS=y 00:04:52.372 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:04:52.372 +++ CONFIG_VIRTIO=y 00:04:52.372 +++ CONFIG_DPDK_UADK=n 00:04:52.372 +++ CONFIG_COVERAGE=y 00:04:52.372 +++ CONFIG_RDMA=y 00:04:52.372 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:52.372 +++ CONFIG_URING_PATH= 00:04:52.372 +++ CONFIG_XNVME=n 00:04:52.372 +++ CONFIG_VFIO_USER=n 00:04:52.372 +++ CONFIG_ARCH=native 00:04:52.372 +++ CONFIG_HAVE_EVP_MAC=y 00:04:52.372 +++ CONFIG_URING_ZNS=n 00:04:52.372 +++ CONFIG_WERROR=y 00:04:52.372 +++ CONFIG_HAVE_LIBBSD=n 00:04:52.372 +++ CONFIG_UBSAN=y 00:04:52.372 +++ CONFIG_IPSEC_MB_DIR= 00:04:52.372 +++ CONFIG_GOLANG=n 00:04:52.372 +++ CONFIG_ISAL=y 00:04:52.372 +++ CONFIG_IDXD_KERNEL=y 00:04:52.372 +++ CONFIG_DPDK_LIB_DIR= 00:04:52.372 +++ CONFIG_RDMA_PROV=verbs 00:04:52.372 +++ CONFIG_APPS=y 00:04:52.372 +++ CONFIG_SHARED=n 00:04:52.372 +++ CONFIG_HAVE_KEYUTILS=y 00:04:52.372 +++ CONFIG_FC_PATH= 00:04:52.372 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:52.372 +++ CONFIG_FC=n 00:04:52.372 +++ CONFIG_AVAHI=n 00:04:52.372 +++ CONFIG_FIO_PLUGIN=y 00:04:52.372 +++ CONFIG_RAID5F=y 00:04:52.372 +++ CONFIG_EXAMPLES=y 00:04:52.372 +++ CONFIG_TESTS=y 00:04:52.372 +++ CONFIG_CRYPTO_MLX5=n 00:04:52.372 +++ CONFIG_MAX_LCORES=128 00:04:52.372 +++ CONFIG_IPSEC_MB=n 00:04:52.372 +++ 
CONFIG_PGO_DIR= 00:04:52.372 +++ CONFIG_DEBUG=y 00:04:52.372 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:52.372 +++ CONFIG_CROSS_PREFIX= 00:04:52.372 +++ CONFIG_URING=n 00:04:52.372 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:52.372 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:52.372 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:04:52.372 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:04:52.372 +++ _root=/home/vagrant/spdk_repo/spdk 00:04:52.372 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:04:52.372 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:04:52.372 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:04:52.372 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:04:52.372 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:04:52.372 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:04:52.372 +++ VHOST_APP=("$_app_dir/vhost") 00:04:52.372 +++ DD_APP=("$_app_dir/spdk_dd") 00:04:52.372 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:04:52.372 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:04:52.372 +++ [[ #ifndef SPDK_CONFIG_H 00:04:52.372 #define SPDK_CONFIG_H 00:04:52.372 #define SPDK_CONFIG_APPS 1 00:04:52.372 #define SPDK_CONFIG_ARCH native 00:04:52.372 #define SPDK_CONFIG_ASAN 1 00:04:52.372 #undef SPDK_CONFIG_AVAHI 00:04:52.372 #undef SPDK_CONFIG_CET 00:04:52.372 #define SPDK_CONFIG_COVERAGE 1 00:04:52.372 #define SPDK_CONFIG_CROSS_PREFIX 00:04:52.372 #undef SPDK_CONFIG_CRYPTO 00:04:52.372 #undef SPDK_CONFIG_CRYPTO_MLX5 00:04:52.372 #undef SPDK_CONFIG_CUSTOMOCF 00:04:52.372 #undef SPDK_CONFIG_DAOS 00:04:52.372 #define SPDK_CONFIG_DAOS_DIR 00:04:52.372 #define SPDK_CONFIG_DEBUG 1 00:04:52.372 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:04:52.372 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:52.372 #define SPDK_CONFIG_DPDK_INC_DIR 00:04:52.372 #define SPDK_CONFIG_DPDK_LIB_DIR 00:04:52.372 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:04:52.372 #undef SPDK_CONFIG_DPDK_UADK 00:04:52.372 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:52.372 #define SPDK_CONFIG_EXAMPLES 1 00:04:52.372 #undef SPDK_CONFIG_FC 00:04:52.372 #define SPDK_CONFIG_FC_PATH 00:04:52.372 #define SPDK_CONFIG_FIO_PLUGIN 1 00:04:52.372 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:04:52.372 #undef SPDK_CONFIG_FUSE 00:04:52.372 #undef SPDK_CONFIG_FUZZER 00:04:52.372 #define SPDK_CONFIG_FUZZER_LIB 00:04:52.372 #undef SPDK_CONFIG_GOLANG 00:04:52.372 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:04:52.372 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:04:52.372 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:04:52.372 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:04:52.372 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:04:52.372 #undef SPDK_CONFIG_HAVE_LIBBSD 00:04:52.372 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:04:52.372 #define SPDK_CONFIG_IDXD 1 00:04:52.372 #define SPDK_CONFIG_IDXD_KERNEL 1 00:04:52.372 #undef SPDK_CONFIG_IPSEC_MB 00:04:52.372 #define SPDK_CONFIG_IPSEC_MB_DIR 00:04:52.372 #define SPDK_CONFIG_ISAL 1 00:04:52.372 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:04:52.372 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:04:52.372 #define SPDK_CONFIG_LIBDIR 00:04:52.372 #undef SPDK_CONFIG_LTO 00:04:52.372 #define SPDK_CONFIG_MAX_LCORES 128 00:04:52.372 #define SPDK_CONFIG_NVME_CUSE 1 00:04:52.372 #undef SPDK_CONFIG_OCF 00:04:52.372 #define SPDK_CONFIG_OCF_PATH 00:04:52.372 #define SPDK_CONFIG_OPENSSL_PATH 00:04:52.372 #undef SPDK_CONFIG_PGO_CAPTURE 00:04:52.372 #define SPDK_CONFIG_PGO_DIR 
00:04:52.372 #undef SPDK_CONFIG_PGO_USE 00:04:52.372 #define SPDK_CONFIG_PREFIX /usr/local 00:04:52.372 #define SPDK_CONFIG_RAID5F 1 00:04:52.372 #undef SPDK_CONFIG_RBD 00:04:52.372 #define SPDK_CONFIG_RDMA 1 00:04:52.372 #define SPDK_CONFIG_RDMA_PROV verbs 00:04:52.372 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:04:52.372 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:04:52.372 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:04:52.372 #undef SPDK_CONFIG_SHARED 00:04:52.372 #undef SPDK_CONFIG_SMA 00:04:52.372 #define SPDK_CONFIG_TESTS 1 00:04:52.372 #undef SPDK_CONFIG_TSAN 00:04:52.372 #define SPDK_CONFIG_UBLK 1 00:04:52.372 #define SPDK_CONFIG_UBSAN 1 00:04:52.372 #define SPDK_CONFIG_UNIT_TESTS 1 00:04:52.372 #undef SPDK_CONFIG_URING 00:04:52.372 #define SPDK_CONFIG_URING_PATH 00:04:52.372 #undef SPDK_CONFIG_URING_ZNS 00:04:52.372 #undef SPDK_CONFIG_USDT 00:04:52.372 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:04:52.372 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:04:52.372 #undef SPDK_CONFIG_VFIO_USER 00:04:52.372 #define SPDK_CONFIG_VFIO_USER_DIR 00:04:52.372 #define SPDK_CONFIG_VHOST 1 00:04:52.372 #define SPDK_CONFIG_VIRTIO 1 00:04:52.372 #undef SPDK_CONFIG_VTUNE 00:04:52.372 #define SPDK_CONFIG_VTUNE_DIR 00:04:52.372 #define SPDK_CONFIG_WERROR 1 00:04:52.372 #define SPDK_CONFIG_WPDK_DIR 00:04:52.372 #undef SPDK_CONFIG_XNVME 00:04:52.372 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:04:52.372 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:04:52.372 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.372 +++ [[ -e /bin/wpdk_common.sh ]] 00:04:52.372 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.372 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.372 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:52.372 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:52.372 ++++ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:52.372 ++++ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:52.372 ++++ export PATH 00:04:52.372 ++++ echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:52.372 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:52.372 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:52.372 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:52.372 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:52.372 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:04:52.372 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:04:52.372 +++ TEST_TAG=N/A 00:04:52.372 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:04:52.372 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:04:52.372 ++++ uname -s 00:04:52.372 +++ PM_OS=Linux 00:04:52.372 +++ MONITOR_RESOURCES_SUDO=() 00:04:52.372 +++ declare -A MONITOR_RESOURCES_SUDO 00:04:52.372 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:04:52.372 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:04:52.372 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:04:52.372 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:04:52.372 +++ SUDO[0]= 00:04:52.372 +++ SUDO[1]='sudo -E' 00:04:52.373 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:04:52.373 +++ [[ Linux == FreeBSD ]] 00:04:52.373 +++ [[ Linux == Linux ]] 00:04:52.373 +++ [[ QEMU != QEMU ]] 00:04:52.373 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:04:52.373 ++ : 1 00:04:52.373 ++ export RUN_NIGHTLY 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_RUN_VALGRIND 00:04:52.373 ++ : 1 00:04:52.373 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:04:52.373 ++ : 1 00:04:52.373 ++ export SPDK_TEST_UNITTEST 00:04:52.373 ++ : 00:04:52.373 ++ export SPDK_TEST_AUTOBUILD 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_RELEASE_BUILD 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_ISAL 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_ISCSI 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_ISCSI_INITIATOR 00:04:52.373 ++ : 1 00:04:52.373 ++ export SPDK_TEST_NVME 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_NVME_PMR 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_NVME_BP 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_NVME_CLI 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_NVME_CUSE 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_NVME_FDP 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_NVMF 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_VFIOUSER 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_VFIOUSER_QEMU 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_FUZZER 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_FUZZER_SHORT 00:04:52.373 ++ : rdma 00:04:52.373 ++ export SPDK_TEST_NVMF_TRANSPORT 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_RBD 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_VHOST 00:04:52.373 ++ : 1 00:04:52.373 ++ export SPDK_TEST_BLOCKDEV 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_IOAT 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_BLOBFS 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_VHOST_INIT 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_LVOL 
00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_VBDEV_COMPRESS 00:04:52.373 ++ : 1 00:04:52.373 ++ export SPDK_RUN_ASAN 00:04:52.373 ++ : 1 00:04:52.373 ++ export SPDK_RUN_UBSAN 00:04:52.373 ++ : 00:04:52.373 ++ export SPDK_RUN_EXTERNAL_DPDK 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_RUN_NON_ROOT 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_CRYPTO 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_FTL 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_OCF 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_VMD 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_OPAL 00:04:52.373 ++ : 00:04:52.373 ++ export SPDK_TEST_NATIVE_DPDK 00:04:52.373 ++ : true 00:04:52.373 ++ export SPDK_AUTOTEST_X 00:04:52.373 ++ : 1 00:04:52.373 ++ export SPDK_TEST_RAID5 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_URING 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_USDT 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_USE_IGB_UIO 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_SCHEDULER 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_SCANBUILD 00:04:52.373 ++ : 00:04:52.373 ++ export SPDK_TEST_NVMF_NICS 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_SMA 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_DAOS 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_XNVME 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_ACCEL 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_ACCEL_DSA 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_ACCEL_IAA 00:04:52.373 ++ : 00:04:52.373 ++ export SPDK_TEST_FUZZER_TARGET 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_TEST_NVMF_MDNS 00:04:52.373 ++ : 0 00:04:52.373 ++ export SPDK_JSONRPC_GO_CLIENT 00:04:52.373 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:52.373 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:52.373 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:52.373 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:52.373 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:52.373 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:52.373 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:52.373 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:52.373 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:04:52.373 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:04:52.373 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:52.373 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:52.373 ++ export PYTHONDONTWRITEBYTECODE=1 00:04:52.373 ++ PYTHONDONTWRITEBYTECODE=1 00:04:52.373 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:52.373 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 
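[editor's note] The environment block just traced points the dynamic loader, Python, and the sanitizers at the freshly built artifacts. A minimal sketch using the directories shown in the trace; the duplicated path segments in the log most likely come from autotest_common.sh being sourced a second time by unittest.sh and are harmless, so the sketch appends each directory once:

    # Expose the SPDK/DPDK/libvfio-user build outputs to the loader and
    # the RPC tooling, and make ASan abort on error instead of limping on.
    SPDK_LIB_DIR=$rootdir/build/lib
    DPDK_LIB_DIR=$rootdir/dpdk/build/lib
    VFIO_LIB_DIR=$rootdir/build/libvfio-user/usr/local/lib
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR"
    export PYTHONPATH="$PYTHONPATH:$rootdir/python:$rootdir/test/rpc_plugins"
    export PYTHONDONTWRITEBYTECODE=1   # keep the checkout free of .pyc litter
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0

The matching UBSAN_OPTIONS and LSAN suppression-file setup continues in the log immediately below.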
00:04:52.373 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:52.373 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:52.373 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:04:52.373 ++ rm -rf /var/tmp/asan_suppression_file 00:04:52.373 ++ cat 00:04:52.373 ++ echo leak:libfuse3.so 00:04:52.373 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:52.373 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:52.373 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:52.373 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:52.373 ++ '[' -z /var/spdk/dependencies ']' 00:04:52.373 ++ export DEPENDENCY_DIR 00:04:52.373 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:52.373 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:52.373 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:52.373 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:52.373 ++ export QEMU_BIN= 00:04:52.373 ++ QEMU_BIN= 00:04:52.373 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:52.373 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:52.373 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:52.373 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:52.373 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:52.373 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:52.373 ++ '[' 0 -eq 0 ']' 00:04:52.373 ++ export valgrind= 00:04:52.373 ++ valgrind= 00:04:52.373 +++ uname -s 00:04:52.373 ++ '[' Linux = Linux ']' 00:04:52.373 ++ HUGEMEM=4096 00:04:52.373 ++ export CLEAR_HUGE=yes 00:04:52.373 ++ CLEAR_HUGE=yes 00:04:52.373 ++ [[ 0 -eq 1 ]] 00:04:52.373 ++ [[ 0 -eq 1 ]] 00:04:52.373 ++ MAKE=make 00:04:52.373 +++ nproc 00:04:52.373 ++ MAKEFLAGS=-j10 00:04:52.373 ++ export HUGEMEM=4096 00:04:52.373 ++ HUGEMEM=4096 00:04:52.373 ++ NO_HUGE=() 00:04:52.373 ++ TEST_MODE= 00:04:52.373 ++ [[ -z '' ]] 00:04:52.373 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:52.373 ++ exec 00:04:52.373 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:52.373 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:04:52.632 ++ set_test_storage 2147483648 00:04:52.632 ++ [[ -v testdir ]] 00:04:52.632 ++ local requested_size=2147483648 00:04:52.632 ++ local mount target_dir 00:04:52.632 ++ local -A mounts fss sizes avails uses 00:04:52.632 ++ local source fs size avail mount use 00:04:52.632 ++ local storage_fallback storage_candidates 00:04:52.632 +++ mktemp -udt spdk.XXXXXX 00:04:52.632 ++ storage_fallback=/tmp/spdk.mrqtTb 00:04:52.632 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:04:52.632 ++ [[ -n '' ]] 00:04:52.632 ++ [[ -n '' ]] 00:04:52.632 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.mrqtTb/tests/unit /tmp/spdk.mrqtTb 00:04:52.632 ++ requested_size=2214592512 00:04:52.632 ++ read -r source fs size use avail _ mount 00:04:52.632 +++ df -T 00:04:52.632 +++ grep -v Filesystem 00:04:52.632 ++ mounts["$mount"]=tmpfs 00:04:52.632 ++ fss["$mount"]=tmpfs 00:04:52.632 ++ avails["$mount"]=1252958208 00:04:52.632 ++ sizes["$mount"]=1254027264 00:04:52.632 ++ uses["$mount"]=1069056 00:04:52.632 ++ read -r source fs size use 
avail _ mount 00:04:52.632 ++ mounts["$mount"]=/dev/vda1 00:04:52.632 ++ fss["$mount"]=ext4 00:04:52.632 ++ avails["$mount"]=9872453632 00:04:52.632 ++ sizes["$mount"]=19681529856 00:04:52.632 ++ uses["$mount"]=9792299008 00:04:52.632 ++ read -r source fs size use avail _ mount 00:04:52.632 ++ mounts["$mount"]=tmpfs 00:04:52.632 ++ fss["$mount"]=tmpfs 00:04:52.632 ++ avails["$mount"]=6270115840 00:04:52.632 ++ sizes["$mount"]=6270115840 00:04:52.632 ++ uses["$mount"]=0 00:04:52.632 ++ read -r source fs size use avail _ mount 00:04:52.632 ++ mounts["$mount"]=tmpfs 00:04:52.632 ++ fss["$mount"]=tmpfs 00:04:52.632 ++ avails["$mount"]=5242880 00:04:52.632 ++ sizes["$mount"]=5242880 00:04:52.632 ++ uses["$mount"]=0 00:04:52.633 ++ read -r source fs size use avail _ mount 00:04:52.633 ++ mounts["$mount"]=/dev/vda16 00:04:52.633 ++ fss["$mount"]=ext4 00:04:52.633 ++ avails["$mount"]=777306112 00:04:52.633 ++ sizes["$mount"]=923156480 00:04:52.633 ++ uses["$mount"]=81207296 00:04:52.633 ++ read -r source fs size use avail _ mount 00:04:52.633 ++ mounts["$mount"]=/dev/vda15 00:04:52.633 ++ fss["$mount"]=vfat 00:04:52.633 ++ avails["$mount"]=103000064 00:04:52.633 ++ sizes["$mount"]=109395968 00:04:52.633 ++ uses["$mount"]=6395904 00:04:52.633 ++ read -r source fs size use avail _ mount 00:04:52.633 ++ mounts["$mount"]=tmpfs 00:04:52.633 ++ fss["$mount"]=tmpfs 00:04:52.633 ++ avails["$mount"]=1254010880 00:04:52.633 ++ sizes["$mount"]=1254023168 00:04:52.633 ++ uses["$mount"]=12288 00:04:52.633 ++ read -r source fs size use avail _ mount 00:04:52.633 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:04:52.633 ++ fss["$mount"]=fuse.sshfs 00:04:52.633 ++ avails["$mount"]=97736212480 00:04:52.633 ++ sizes["$mount"]=105088212992 00:04:52.633 ++ uses["$mount"]=1966567424 00:04:52.633 ++ read -r source fs size use avail _ mount 00:04:52.633 ++ printf '* Looking for test storage...\n' 00:04:52.633 * Looking for test storage... 
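[editor's note] The df scan just traced feeds set_test_storage, whose candidate walk follows below: it must find roughly 2 GiB (plus 64 MiB of slack, hence requested_size=2214592512) of scratch space for the unit tests. A condensed sketch of the selection logic reconstructed from the trace; the real helper has additional branches for tmpfs/ramfs candidates that this sketch folds into one check:

    # Record each mount's filesystem type, size, and free space in bytes
    # (df -T reports 1K blocks, hence the * 1024).
    declare -A fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    requested_size=$((2147483648 + 67108864))   # 2 GiB + 64 MiB headroom
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails["$mount"]}
        ((target_space == 0 || target_space < requested_size)) && continue
        [[ ${fss["$mount"]} == tmpfs || ${fss["$mount"]} == ramfs ]] && continue
        # Refuse a disk that the test data would fill past 95%.
        new_size=$((uses["$mount"] + requested_size))
        ((new_size * 100 / sizes["$mount"] > 95)) && continue
        export SPDK_TEST_STORAGE=$target_dir
        break
    done

Here the root ext4 mount has ~9.8 GB free, and the projected ~12 GB of use sits at 61% of the ~19.6 GB filesystem, so /home/vagrant/spdk_repo/spdk/test/unit itself is chosen, as the "Found test storage" line below confirms.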
00:04:52.633 ++ local target_space new_size 00:04:52.633 ++ for target_dir in "${storage_candidates[@]}" 00:04:52.633 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:04:52.633 +++ awk '$1 !~ /Filesystem/{print $6}' 00:04:52.633 ++ mount=/ 00:04:52.633 ++ target_space=9872453632 00:04:52.633 ++ (( target_space == 0 || target_space < requested_size )) 00:04:52.633 ++ (( target_space >= requested_size )) 00:04:52.633 ++ [[ ext4 == tmpfs ]] 00:04:52.633 ++ [[ ext4 == ramfs ]] 00:04:52.633 ++ [[ / == / ]] 00:04:52.633 ++ new_size=12006891520 00:04:52.633 ++ (( new_size * 100 / sizes[/] > 95 )) 00:04:52.633 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:52.633 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:52.633 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:04:52.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:04:52.633 ++ return 0 00:04:52.633 ++ set -o errtrace 00:04:52.633 ++ shopt -s extdebug 00:04:52.633 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:04:52.633 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:04:52.633 23:48:48 unittest -- common/autotest_common.sh@1687 -- # true 00:04:52.633 23:48:48 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:04:52.633 23:48:48 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:04:52.633 23:48:48 unittest -- common/autotest_common.sh@29 -- # exec 00:04:52.633 23:48:48 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:04:52.633 23:48:48 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:04:52.633 23:48:48 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:04:52.633 23:48:48 unittest -- common/autotest_common.sh@18 -- # set -x 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@181 -- # hash lcov 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:04:52.633 --rc lcov_branch_coverage=1 00:04:52.633 --rc lcov_function_coverage=1 00:04:52.633 --rc genhtml_branch_coverage=1 00:04:52.633 --rc genhtml_function_coverage=1 00:04:52.633 --rc genhtml_legend=1 00:04:52.633 --rc geninfo_all_blocks=1 00:04:52.633 ' 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@201 -- # 
LCOV_OPTS=' 00:04:52.633 --rc lcov_branch_coverage=1 00:04:52.633 --rc lcov_function_coverage=1 00:04:52.633 --rc genhtml_branch_coverage=1 00:04:52.633 --rc genhtml_function_coverage=1 00:04:52.633 --rc genhtml_legend=1 00:04:52.633 --rc geninfo_all_blocks=1 00:04:52.633 ' 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:04:52.633 --rc lcov_branch_coverage=1 00:04:52.633 --rc lcov_function_coverage=1 00:04:52.633 --rc genhtml_branch_coverage=1 00:04:52.633 --rc genhtml_function_coverage=1 00:04:52.633 --rc genhtml_legend=1 00:04:52.633 --rc geninfo_all_blocks=1 00:04:52.633 --no-external' 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:04:52.633 --rc lcov_branch_coverage=1 00:04:52.633 --rc lcov_function_coverage=1 00:04:52.633 --rc genhtml_branch_coverage=1 00:04:52.633 --rc genhtml_function_coverage=1 00:04:52.633 --rc genhtml_legend=1 00:04:52.633 --rc geninfo_all_blocks=1 00:04:52.633 --no-external' 00:04:52.633 23:48:48 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:04:59.220 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:59.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:45.917 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:45.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:45.917 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 
00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:45.918 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:45.918 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:45.918 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:45.918 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:45.919 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:45.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:50.103 23:49:45 unittest -- unit/unittest.sh@208 -- # uname -m 00:05:50.103 23:49:45 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:05:50.103 23:49:45 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:50.103 23:49:45 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.103 23:49:45 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.103 23:49:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:50.103 ************************************ 00:05:50.103 START TEST unittest_pci_event 00:05:50.103 
************************************ 00:05:50.104 23:49:45 unittest.unittest_pci_event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:50.104 00:05:50.104 00:05:50.104 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.104 http://cunit.sourceforge.net/ 00:05:50.104 00:05:50.104 00:05:50.104 Suite: pci_event 00:05:50.104 Test: test_pci_parse_event ...[2024-07-24 23:49:45.942942] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:50.104 passed 00:05:50.104 00:05:50.104 [2024-07-24 23:49:45.943276] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:50.104 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.104 suites 1 1 n/a 0 0 00:05:50.104 tests 1 1 1 0 0 00:05:50.104 asserts 15 15 15 0 n/a 00:05:50.104 00:05:50.104 Elapsed time = 0.001 seconds 00:05:50.104 00:05:50.104 real 0m0.035s 00:05:50.104 user 0m0.016s 00:05:50.104 sys 0m0.014s 00:05:50.104 23:49:45 unittest.unittest_pci_event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.104 23:49:45 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:05:50.104 ************************************ 00:05:50.104 END TEST unittest_pci_event 00:05:50.104 ************************************ 00:05:50.362 23:49:45 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:50.362 23:49:45 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.362 23:49:45 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.362 23:49:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:50.362 ************************************ 00:05:50.362 START TEST unittest_include 00:05:50.362 ************************************ 00:05:50.362 23:49:46 unittest.unittest_include -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:50.362 00:05:50.362 00:05:50.362 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.362 http://cunit.sourceforge.net/ 00:05:50.362 00:05:50.362 00:05:50.362 Suite: histogram 00:05:50.362 Test: histogram_test ...passed 00:05:50.362 Test: histogram_merge ...passed 00:05:50.362 00:05:50.362 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.362 suites 1 1 n/a 0 0 00:05:50.363 tests 2 2 2 0 0 00:05:50.363 asserts 50 50 50 0 n/a 00:05:50.363 00:05:50.363 Elapsed time = 0.005 seconds 00:05:50.363 00:05:50.363 real 0m0.028s 00:05:50.363 user 0m0.020s 00:05:50.363 sys 0m0.009s 00:05:50.363 23:49:46 unittest.unittest_include -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.363 23:49:46 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:05:50.363 ************************************ 00:05:50.363 END TEST unittest_include 00:05:50.363 ************************************ 00:05:50.363 23:49:46 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:05:50.363 23:49:46 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.363 23:49:46 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.363 23:49:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:50.363 ************************************ 00:05:50.363 START TEST unittest_bdev 
00:05:50.363 ************************************ 00:05:50.363 23:49:46 unittest.unittest_bdev -- common/autotest_common.sh@1125 -- # unittest_bdev 00:05:50.363 23:49:46 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:50.363 00:05:50.363 00:05:50.363 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.363 http://cunit.sourceforge.net/ 00:05:50.363 00:05:50.363 00:05:50.363 Suite: bdev 00:05:50.363 Test: bytes_to_blocks_test ...passed 00:05:50.363 Test: num_blocks_test ...passed 00:05:50.363 Test: io_valid_test ...passed 00:05:50.363 Test: open_write_test ...[2024-07-24 23:49:46.149654] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:50.363 [2024-07-24 23:49:46.149948] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:50.363 [2024-07-24 23:49:46.150078] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:50.363 passed 00:05:50.363 Test: claim_test ...passed 00:05:50.363 Test: alias_add_del_test ...[2024-07-24 23:49:46.213550] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:50.363 [2024-07-24 23:49:46.213660] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4663:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:50.363 [2024-07-24 23:49:46.213722] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:50.621 passed 00:05:50.621 Test: get_device_stat_test ...passed 00:05:50.621 Test: bdev_io_types_test ...passed 00:05:50.621 Test: bdev_io_wait_test ...passed 00:05:50.621 Test: bdev_io_spans_split_test ...passed 00:05:50.621 Test: bdev_io_boundary_split_test ...passed 00:05:50.621 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-24 23:49:46.325223] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3214:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:50.621 passed 00:05:50.621 Test: bdev_io_mix_split_test ...passed 00:05:50.621 Test: bdev_io_split_with_io_wait ...passed 00:05:50.622 Test: bdev_io_write_unit_split_test ...[2024-07-24 23:49:46.387499] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:50.622 [2024-07-24 23:49:46.387619] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:50.622 [2024-07-24 23:49:46.387660] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:50.622 [2024-07-24 23:49:46.387700] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:50.622 passed 00:05:50.622 Test: bdev_io_alignment_with_boundary ...passed 00:05:50.622 Test: bdev_io_alignment ...passed 00:05:50.622 Test: bdev_histograms ...passed 00:05:50.880 Test: bdev_write_zeroes ...passed 00:05:50.880 Test: bdev_compare_and_write ...passed 00:05:50.880 Test: bdev_compare ...passed 00:05:50.880 Test: bdev_compare_emulated ...passed 00:05:50.880 Test: bdev_zcopy_write ...passed 00:05:50.880 Test: bdev_zcopy_read ...passed 00:05:50.880 Test: 
bdev_open_while_hotremove ...passed 00:05:50.880 Test: bdev_close_while_hotremove ...passed 00:05:50.880 Test: bdev_open_ext_test ...[2024-07-24 23:49:46.641669] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:50.880 passed 00:05:50.880 Test: bdev_open_ext_unregister ...[2024-07-24 23:49:46.641873] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:50.880 passed 00:05:50.880 Test: bdev_set_io_timeout ...passed 00:05:50.880 Test: bdev_set_qd_sampling ...passed 00:05:50.880 Test: lba_range_overlap ...passed 00:05:50.880 Test: lock_lba_range_check_ranges ...passed 00:05:50.880 Test: lock_lba_range_with_io_outstanding ...passed 00:05:50.880 Test: lock_lba_range_overlapped ...passed 00:05:51.139 Test: bdev_quiesce ...[2024-07-24 23:49:46.754565] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10186:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:05:51.139 passed 00:05:51.139 Test: bdev_io_abort ...passed 00:05:51.139 Test: bdev_unmap ...passed 00:05:51.139 Test: bdev_write_zeroes_split_test ...passed 00:05:51.139 Test: bdev_set_options_test ...passed 00:05:51.139 Test: bdev_get_memory_domains ...passed 00:05:51.139 Test: bdev_io_ext ...[2024-07-24 23:49:46.833063] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:51.139 passed 00:05:51.139 Test: bdev_io_ext_no_opts ...passed 00:05:51.139 Test: bdev_io_ext_invalid_opts ...passed 00:05:51.139 Test: bdev_io_ext_split ...passed 00:05:51.139 Test: bdev_io_ext_bounce_buffer ...passed 00:05:51.139 Test: bdev_register_uuid_alias ...[2024-07-24 23:49:46.941036] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name ea5e981b-c2a0-48e2-846f-487ce9312d66 already exists 00:05:51.139 [2024-07-24 23:49:46.941131] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:ea5e981b-c2a0-48e2-846f-487ce9312d66 alias for bdev bdev0 00:05:51.139 passed 00:05:51.139 Test: bdev_unregister_by_name ...[2024-07-24 23:49:46.957283] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8007:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:51.139 [2024-07-24 23:49:46.957341] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8015:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:05:51.139 passed 00:05:51.139 Test: for_each_bdev_test ...passed 00:05:51.139 Test: bdev_seek_test ...passed 00:05:51.139 Test: bdev_copy ...passed 00:05:51.399 Test: bdev_copy_split_test ...passed 00:05:51.399 Test: examine_locks ...passed 00:05:51.399 Test: claim_v2_rwo ...[2024-07-24 23:49:47.022864] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.022946] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.022982] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.022999] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023016] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023058] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8736:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:51.399 passed 00:05:51.399 Test: claim_v2_rom ...[2024-07-24 23:49:47.023228] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023257] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023275] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023288] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023329] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8779:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:51.399 passed 00:05:51.399 Test: claim_v2_rwm ...[2024-07-24 23:49:47.023356] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:51.399 [2024-07-24 23:49:47.023452] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8809:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:51.399 [2024-07-24 23:49:47.023483] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023509] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023523] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023538] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023551] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8829:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:51.399 passed 00:05:51.399 Test: claim_v2_existing_writer ...[2024-07-24 23:49:47.023590] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8809:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:51.399 [2024-07-24 23:49:47.023707] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:51.399 [2024-07-24 23:49:47.023733] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:51.399 passed 00:05:51.399 Test: claim_v2_existing_v1 ...[2024-07-24 23:49:47.023866] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023893] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.023907] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:51.399 passed 00:05:51.399 Test: claim_v1_existing_v2 ...passed 00:05:51.399 Test: examine_claimed ...[2024-07-24 23:49:47.024025] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.024056] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.024089] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:51.399 [2024-07-24 23:49:47.024376] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:51.399 passed 00:05:51.399 00:05:51.399 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.399 suites 1 1 n/a 0 0 00:05:51.399 tests 59 59 59 0 0 00:05:51.399 asserts 4599 4599 4599 0 n/a 00:05:51.399 00:05:51.399 Elapsed time = 0.915 seconds 00:05:51.399 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:51.399 00:05:51.399 00:05:51.399 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.399 http://cunit.sourceforge.net/ 00:05:51.399 00:05:51.399 00:05:51.399 Suite: nvme 00:05:51.399 Test: test_create_ctrlr ...passed 00:05:51.399 Test: test_reset_ctrlr ...[2024-07-24 23:49:47.070528] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
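The claim_v2_* tests above walk the v2 claim matrix: a second claimer is rejected while a read_many_write_one claim is held, read-write-once claims reject the key option, and read-write-may claims require a shared_claim_key. A sketch of taking a v2 claim through an open descriptor; g_example_module is an illustrative stand-in (real modules register via SPDK_BDEV_MODULE_REGISTER), and the default opts are assumed sufficient here.

    #include "spdk/bdev_module.h"

    static struct spdk_bdev_module g_example_module = {
            .name = "example",
    };

    /* Fails with the "already claimed" errors above if another module
     * holds a conflicting claim on the same bdev. */
    static int
    example_claim_rwo(struct spdk_bdev_desc *desc)
    {
            return spdk_bdev_module_claim_bdev_desc(desc,
                            SPDK_BDEV_CLAIM_READ_MANY_WRITE_ONE,
                            NULL /* default claim opts */,
                            &g_example_module);
    }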
00:05:51.399 passed 00:05:51.399 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:51.399 Test: test_failover_ctrlr ...passed 00:05:51.399 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-24 23:49:47.072797] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.399 [2024-07-24 23:49:47.073015] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.399 [2024-07-24 23:49:47.073233] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.399 passed 00:05:51.399 Test: test_pending_reset ...[2024-07-24 23:49:47.074532] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 [2024-07-24 23:49:47.074736] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 passed 00:05:51.400 Test: test_attach_ctrlr ...[2024-07-24 23:49:47.075639] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:51.400 passed 00:05:51.400 Test: test_aer_cb ...passed 00:05:51.400 Test: test_submit_nvme_cmd ...passed 00:05:51.400 Test: test_add_remove_trid ...passed 00:05:51.400 Test: test_abort ...[2024-07-24 23:49:47.078690] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7480:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:51.400 passed 00:05:51.400 Test: test_get_io_qpair ...passed 00:05:51.400 Test: test_bdev_unregister ...passed 00:05:51.400 Test: test_compare_ns ...passed 00:05:51.400 Test: test_init_ana_log_page ...passed 00:05:51.400 Test: test_get_memory_domains ...passed 00:05:51.400 Test: test_reconnect_qpair ...[2024-07-24 23:49:47.081043] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 passed 00:05:51.400 Test: test_create_bdev_ctrlr ...[2024-07-24 23:49:47.081501] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5407:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:51.400 passed 00:05:51.400 Test: test_add_multi_ns_to_bdev ...[2024-07-24 23:49:47.082513] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4574:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:51.400 passed 00:05:51.400 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:51.400 Test: test_admin_path ...passed 00:05:51.400 Test: test_reset_bdev_ctrlr ...passed 00:05:51.400 Test: test_find_io_path ...passed 00:05:51.400 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:51.400 Test: test_retry_io_for_io_path_error ...passed 00:05:51.400 Test: test_retry_io_count ...passed 00:05:51.400 Test: test_concurrent_read_ana_log_page ...passed 00:05:51.400 Test: test_retry_io_for_ana_error ...passed 00:05:51.400 Test: test_check_io_error_resiliency_params ...[2024-07-24 23:49:47.087965] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:05:51.400 [2024-07-24 23:49:47.088024] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6108:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:51.400 [2024-07-24 23:49:47.088044] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6117:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:51.400 [2024-07-24 23:49:47.088056] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6120:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:51.400 [2024-07-24 23:49:47.088069] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:51.400 [2024-07-24 23:49:47.088083] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:51.400 passed 00:05:51.400 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-24 23:49:47.088098] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6112:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:51.400 [2024-07-24 23:49:47.088109] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6127:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:51.400 [2024-07-24 23:49:47.088146] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6124:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:51.400 passed 00:05:51.400 Test: test_reconnect_ctrlr ...[2024-07-24 23:49:47.088813] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 [2024-07-24 23:49:47.088972] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 [2024-07-24 23:49:47.089171] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 [2024-07-24 23:49:47.089284] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 [2024-07-24 23:49:47.089355] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 passed 00:05:51.400 Test: test_retry_failover_ctrlr ...[2024-07-24 23:49:47.089620] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 passed 00:05:51.400 Test: test_fail_path ...[2024-07-24 23:49:47.090073] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 [2024-07-24 23:49:47.090186] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
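The test_check_io_error_resiliency_params errors above spell out every constraint bdev_nvme places on its reconnect tuning knobs. Restated as one standalone predicate (a sketch mirroring the logged messages, not the SPDK implementation itself):

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    io_error_resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
                                     uint32_t reconnect_delay_sec,
                                     uint32_t fast_io_fail_timeout_sec)
    {
            if (ctrlr_loss_timeout_sec < -1) {
                    return false;   /* "can't be less than -1" */
            }
            if (ctrlr_loss_timeout_sec == 0) {
                    /* "Both ... must be 0 if ctrlr_loss_timeout_sec is 0" */
                    return reconnect_delay_sec == 0 &&
                           fast_io_fail_timeout_sec == 0;
            }
            if (reconnect_delay_sec == 0) {
                    return false;   /* "can't be 0 if ctrlr_loss_timeout_sec is not 0" */
            }
            if (fast_io_fail_timeout_sec != 0 &&
                reconnect_delay_sec > fast_io_fail_timeout_sec) {
                    return false;   /* delay can't exceed the fast_io_fail timeout */
            }
            if (ctrlr_loss_timeout_sec > 0 &&
                ((int64_t)reconnect_delay_sec > ctrlr_loss_timeout_sec ||
                 (int64_t)fast_io_fail_timeout_sec > ctrlr_loss_timeout_sec)) {
                    return false;   /* neither may exceed ctrlr_loss_timeout_sec */
            }
            return true;
    }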
00:05:51.400 [2024-07-24 23:49:47.090294] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 [2024-07-24 23:49:47.090380] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 [2024-07-24 23:49:47.090455] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 passed 00:05:51.400 Test: test_nvme_ns_cmp ...passed 00:05:51.400 Test: test_ana_transition ...passed 00:05:51.400 Test: test_set_preferred_path ...passed 00:05:51.400 Test: test_find_next_io_path ...passed 00:05:51.400 Test: test_find_io_path_min_qd ...passed 00:05:51.400 Test: test_disable_auto_failback ...[2024-07-24 23:49:47.091828] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 passed 00:05:51.400 Test: test_set_multipath_policy ...passed 00:05:51.400 Test: test_uuid_generation ...passed 00:05:51.400 Test: test_retry_io_to_same_path ...passed 00:05:51.400 Test: test_race_between_reset_and_disconnected ...passed 00:05:51.400 Test: test_ctrlr_op_rpc ...passed 00:05:51.400 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:51.400 Test: test_disable_enable_ctrlr ...[2024-07-24 23:49:47.095089] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 [2024-07-24 23:49:47.095243] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:51.400 passed 00:05:51.400 Test: test_delete_ctrlr_done ...passed 00:05:51.400 Test: test_ns_remove_during_reset ...passed 00:05:51.400 Test: test_io_path_is_current ...passed 00:05:51.400 00:05:51.400 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.400 suites 1 1 n/a 0 0 00:05:51.400 tests 49 49 49 0 0 00:05:51.400 asserts 3578 3578 3578 0 n/a 00:05:51.400 00:05:51.400 Elapsed time = 0.027 seconds 00:05:51.400 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:51.400 00:05:51.400 00:05:51.400 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.400 http://cunit.sourceforge.net/ 00:05:51.400 00:05:51.400 Test Options 00:05:51.400 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:51.400 00:05:51.400 Suite: raid 00:05:51.400 Test: test_create_raid ...passed 00:05:51.400 Test: test_create_raid_superblock ...passed 00:05:51.400 Test: test_delete_raid ...passed 00:05:51.400 Test: test_create_raid_invalid_args ...[2024-07-24 23:49:47.143996] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1507:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:51.400 [2024-07-24 23:49:47.144498] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1501:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:51.400 [2024-07-24 23:49:47.145305] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1491:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:51.400 [2024-07-24 23:49:47.145546] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3283:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:51.400 [2024-07-24 
23:49:47.145606] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3461:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:51.400 [2024-07-24 23:49:47.146667] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3283:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:51.400 [2024-07-24 23:49:47.146722] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3461:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:51.400 passed 00:05:51.400 Test: test_delete_raid_invalid_args ...passed 00:05:51.400 Test: test_io_channel ...passed 00:05:51.400 Test: test_reset_io ...passed 00:05:51.400 Test: test_multi_raid ...passed 00:05:51.400 Test: test_io_type_supported ...passed 00:05:51.400 Test: test_raid_json_dump_info ...passed 00:05:51.400 Test: test_context_size ...passed 00:05:51.400 Test: test_raid_level_conversions ...passed 00:05:51.400 Test: test_raid_io_split ...passed 00:05:51.400 Test: test_raid_process ...passed 00:05:51.400 Test: test_raid_process_with_qos ...passed 00:05:51.400 00:05:51.400 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.400 suites 1 1 n/a 0 0 00:05:51.400 tests 15 15 15 0 0 00:05:51.400 asserts 6602 6602 6602 0 n/a 00:05:51.400 00:05:51.400 Elapsed time = 0.030 seconds 00:05:51.400 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:51.400 00:05:51.400 00:05:51.400 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.400 http://cunit.sourceforge.net/ 00:05:51.400 00:05:51.400 00:05:51.400 Suite: raid_sb 00:05:51.400 Test: test_raid_bdev_write_superblock ...passed 00:05:51.400 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:51.400 Test: test_raid_bdev_parse_superblock ...[2024-07-24 23:49:47.208584] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:51.400 passed 00:05:51.400 Suite: raid_sb_md 00:05:51.400 Test: test_raid_bdev_write_superblock ...passed 00:05:51.400 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:51.400 Test: test_raid_bdev_parse_superblock ...[2024-07-24 23:49:47.209001] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:51.401 passed 00:05:51.401 Suite: raid_sb_md_interleaved 00:05:51.401 Test: test_raid_bdev_write_superblock ...passed 00:05:51.401 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:51.401 Test: test_raid_bdev_parse_superblock ...passed 00:05:51.401 00:05:51.401 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.401 suites 3 3 n/a 0 0 00:05:51.401 tests 9 9 9 0 0 00:05:51.401 asserts 139 139 139 0 n/a 00:05:51.401 00:05:51.401 Elapsed time = 0.001 seconds [2024-07-24 23:49:47.209319] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:51.401 00:05:51.401 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:51.401 00:05:51.401 00:05:51.401 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.401 http://cunit.sourceforge.net/ 00:05:51.401 00:05:51.401 00:05:51.401 Suite: concat 00:05:51.401 Test:
test_concat_start ...passed 00:05:51.401 Test: test_concat_rw ...passed 00:05:51.401 Test: test_concat_null_payload ...passed 00:05:51.401 00:05:51.401 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.401 suites 1 1 n/a 0 0 00:05:51.401 tests 3 3 3 0 0 00:05:51.401 asserts 8460 8460 8460 0 n/a 00:05:51.401 00:05:51.401 Elapsed time = 0.009 seconds 00:05:51.677 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:05:51.677 00:05:51.677 00:05:51.677 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.677 http://cunit.sourceforge.net/ 00:05:51.677 00:05:51.677 00:05:51.677 Suite: raid0 00:05:51.677 Test: test_write_io ...passed 00:05:51.677 Test: test_read_io ...passed 00:05:51.677 Test: test_unmap_io ...passed 00:05:51.677 Test: test_io_failure ...passed 00:05:51.677 Suite: raid0_dif 00:05:51.677 Test: test_write_io ...passed 00:05:51.677 Test: test_read_io ...passed 00:05:51.677 Test: test_unmap_io ...passed 00:05:51.677 Test: test_io_failure ...passed 00:05:51.677 00:05:51.677 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.677 suites 2 2 n/a 0 0 00:05:51.677 tests 8 8 8 0 0 00:05:51.677 asserts 368291 368291 368291 0 n/a 00:05:51.677 00:05:51.677 Elapsed time = 0.156 seconds 00:05:51.677 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:51.677 00:05:51.677 00:05:51.677 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.678 http://cunit.sourceforge.net/ 00:05:51.678 00:05:51.678 00:05:51.678 Suite: raid1 00:05:51.678 Test: test_raid1_start ...passed 00:05:51.678 Test: test_raid1_read_balancing ...passed 00:05:51.678 Test: test_raid1_write_error ...passed 00:05:51.678 Test: test_raid1_read_error ...passed 00:05:51.678 00:05:51.678 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.678 suites 1 1 n/a 0 0 00:05:51.678 tests 4 4 4 0 0 00:05:51.678 asserts 4374 4374 4374 0 n/a 00:05:51.678 00:05:51.678 Elapsed time = 0.006 seconds 00:05:51.678 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:51.678 00:05:51.678 00:05:51.678 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.678 http://cunit.sourceforge.net/ 00:05:51.678 00:05:51.678 00:05:51.678 Suite: zone 00:05:51.678 Test: test_zone_get_operation ...passed 00:05:51.678 Test: test_bdev_zone_get_info ...passed 00:05:51.678 Test: test_bdev_zone_management ...passed 00:05:51.678 Test: test_bdev_zone_append ...passed 00:05:51.678 Test: test_bdev_zone_append_with_md ...passed 00:05:51.678 Test: test_bdev_zone_appendv ...passed 00:05:51.678 Test: test_bdev_zone_appendv_with_md ...passed 00:05:51.678 Test: test_bdev_io_get_append_location ...passed 00:05:51.678 00:05:51.678 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.678 suites 1 1 n/a 0 0 00:05:51.678 tests 8 8 8 0 0 00:05:51.678 asserts 94 94 94 0 n/a 00:05:51.678 00:05:51.678 Elapsed time = 0.000 seconds 00:05:51.961 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:51.961 00:05:51.961 00:05:51.961 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.961 http://cunit.sourceforge.net/ 00:05:51.961 00:05:51.961 00:05:51.961 Suite: gpt_parse 00:05:51.961 Test: test_parse_mbr_and_primary ...[2024-07-24 23:49:47.556247] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 
259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:51.961 [2024-07-24 23:49:47.556519] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:51.961 [2024-07-24 23:49:47.556608] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:51.961 [2024-07-24 23:49:47.556641] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:51.961 [2024-07-24 23:49:47.556683] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:51.961 [2024-07-24 23:49:47.556713] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:51.961 passed 00:05:51.961 Test: test_parse_secondary ...[2024-07-24 23:49:47.557542] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:51.961 [2024-07-24 23:49:47.557578] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:51.961 [2024-07-24 23:49:47.557625] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:51.961 [2024-07-24 23:49:47.557661] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:51.961 passed 00:05:51.961 Test: test_check_mbr ...[2024-07-24 23:49:47.558455] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:51.961 [2024-07-24 23:49:47.558522] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL passed 00:05:51.961 Test: test_read_header ...
00:05:51.962 [2024-07-24 23:49:47.558664] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:51.962 [2024-07-24 23:49:47.558715] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:51.962 [2024-07-24 23:49:47.558762] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:51.962 [2024-07-24 23:49:47.558830] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:51.962 [2024-07-24 23:49:47.558880] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:51.962 passed 00:05:51.962 Test: test_read_partitions ...[2024-07-24 23:49:47.558915] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:51.962 [2024-07-24 23:49:47.559028] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:51.962 [2024-07-24 23:49:47.559061] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:51.962 [2024-07-24 23:49:47.559102] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:51.962 [2024-07-24 23:49:47.559132] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:51.962 [2024-07-24 23:49:47.559501] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:05:51.962 passed 00:05:51.962 00:05:51.962 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.962 suites 1 1 n/a 0 0 00:05:51.962 tests 5 5 5 0 0 00:05:51.962 asserts 33 33 33 0 n/a 00:05:51.962 00:05:51.962 Elapsed time = 0.004 seconds 00:05:51.962 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:51.962 00:05:51.962 00:05:51.962 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.962 http://cunit.sourceforge.net/ 00:05:51.962 00:05:51.962 00:05:51.962 Suite: bdev_part 00:05:51.962 Test: part_test ...[2024-07-24 23:49:47.602440] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name bafd5731-c56b-5319-8e62-a2b2b9fb08b3 already exists 00:05:51.962 [2024-07-24 23:49:47.602702] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:bafd5731-c56b-5319-8e62-a2b2b9fb08b3 alias for bdev test1 00:05:51.962 passed 00:05:51.962 Test: part_free_test ...passed 00:05:51.962 Test: part_get_io_channel_test ...passed 00:05:51.962 Test: part_construct_ext ...passed 00:05:51.962 00:05:51.962 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.962 suites 1 1 n/a 0 0 00:05:51.962 tests 4 4 4 0 0 00:05:51.962 asserts 48 48 48 0 n/a 00:05:51.962 00:05:51.962 Elapsed time = 0.042 seconds 00:05:51.962 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:51.962 00:05:51.962 00:05:51.962 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.962 http://cunit.sourceforge.net/ 00:05:51.962
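The gpt_parse suite above hits each validation branch in module/bdev/gpt/gpt.c: header size, header crc32, signature, my_lba, the usable LBA range, the partition entry count (max 128), the entry size (expected 80), and the partition entry array crc32. Condensed into a single predicate (a sketch only; the parameter names are illustrative, the limits come straight from the messages above):

    #include <stdbool.h>
    #include <stdint.h>

    #define GPT_MAX_PARTITION_ENTRIES 128   /* "exceeds max=128" above */
    #define GPT_PARTITION_ENTRY_SIZE   80   /* "!= expected(80)" above */

    static bool
    gpt_header_plausible(uint32_t head_size, uint32_t expected_head_size,
                         uint32_t crc_stored, uint32_t crc_calculated,
                         bool signature_matches,
                         uint64_t my_lba, uint64_t expected_lba,
                         uint64_t usable_lba_end, uint64_t dev_lba_end,
                         uint32_t num_entries, uint32_t entry_size)
    {
            return head_size == expected_head_size &&  /* "head_size=600" rejected */
                   crc_stored == crc_calculated &&     /* "head crc32 does not match" */
                   signature_matches &&                /* "signature did not match" */
                   my_lba == expected_lba &&           /* "my_lba(...) != expected(1)" */
                   usable_lba_end <= dev_lba_end &&    /* "usable_lba_end(...) > lba_end(0)" */
                   num_entries <= GPT_MAX_PARTITION_ENTRIES &&
                   entry_size == GPT_PARTITION_ENTRY_SIZE;
    }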
00:05:51.962 00:05:51.962 Suite: scsi_nvme_suite 00:05:51.962 Test: scsi_nvme_translate_test ...passed 00:05:51.962 00:05:51.962 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.962 suites 1 1 n/a 0 0 00:05:51.962 tests 1 1 1 0 0 00:05:51.962 asserts 104 104 104 0 n/a 00:05:51.962 00:05:51.962 Elapsed time = 0.000 seconds 00:05:51.962 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:51.962 00:05:51.962 00:05:51.962 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.962 http://cunit.sourceforge.net/ 00:05:51.962 00:05:51.962 00:05:51.962 Suite: lvol 00:05:51.962 Test: ut_lvs_init ...[2024-07-24 23:49:47.717603] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:51.962 [2024-07-24 23:49:47.718028] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:51.962 passed 00:05:51.962 Test: ut_lvol_init ...passed 00:05:51.962 Test: ut_lvol_snapshot ...passed 00:05:51.962 Test: ut_lvol_clone ...passed 00:05:51.962 Test: ut_lvs_destroy ...passed 00:05:51.962 Test: ut_lvs_unload ...passed 00:05:51.962 Test: ut_lvol_resize ...[2024-07-24 23:49:47.719660] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:51.962 passed 00:05:51.962 Test: ut_lvol_set_read_only ...passed 00:05:51.962 Test: ut_lvol_hotremove ...passed 00:05:51.962 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:51.962 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:51.962 Test: ut_lvol_read_write ...passed 00:05:51.962 Test: ut_vbdev_lvol_submit_request ...passed 00:05:51.962 Test: ut_lvol_examine_config ...passed 00:05:51.962 Test: ut_lvol_examine_disk ...[2024-07-24 23:49:47.720383] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:51.962 passed 00:05:51.962 Test: ut_lvol_rename ...[2024-07-24 23:49:47.721423] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:51.962 [2024-07-24 23:49:47.721494] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:51.962 passed 00:05:51.962 Test: ut_bdev_finish ...passed 00:05:51.962 Test: ut_lvs_rename ...passed 00:05:51.962 Test: ut_lvol_seek ...passed 00:05:51.962 Test: ut_esnap_dev_create ...[2024-07-24 23:49:47.722143] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:51.962 [2024-07-24 23:49:47.722221] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:51.962 passed 00:05:51.962 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-24 23:49:47.722258] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:51.962 [2024-07-24 23:49:47.722417] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:51.962 [2024-07-24 23:49:47.722457] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev 
'255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:05:51.962 passed 00:05:51.962 Test: ut_lvol_shallow_copy ...[2024-07-24 23:49:47.722691] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:05:51.962 passed 00:05:51.962 Test: ut_lvol_set_external_parent ...passed 00:05:51.962 00:05:51.962 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.962 suites 1 1 n/a 0 0 00:05:51.962 tests 23 23 23 0 0 00:05:51.962 asserts 770 770 770 0 n/a 00:05:51.962 00:05:51.962 Elapsed time = 0.005 seconds 00:05:51.962 [2024-07-24 23:49:47.722747] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:05:51.962 [2024-07-24 23:49:47.722857] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:05:51.962 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:51.962 00:05:51.962 00:05:51.962 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.962 http://cunit.sourceforge.net/ 00:05:51.962 00:05:51.962 00:05:51.962 Suite: zone_block 00:05:51.962 Test: test_zone_block_create ...passed 00:05:51.962 Test: test_zone_block_create_invalid ...[2024-07-24 23:49:47.791721] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:51.962 [2024-07-24 23:49:47.792100] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists [2024-07-24 23:49:47.792262] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:51.962 [2024-07-24 23:49:47.792338] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists [2024-07-24 23:49:47.792548] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:51.962 [2024-07-24 23:49:47.792591] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument [2024-07-24 23:49:47.792701] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:51.962 [2024-07-24 23:49:47.792728] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument passed 00:05:51.962 Test: test_get_zone_info ...[2024-07-24 23:49:47.793451] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.962 [2024-07-24 23:49:47.793570] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.962 [2024-07-24 23:49:47.793631] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
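test_zone_block_create_invalid above rejects four configurations in turn: a base bdev that is already claimed, one that is already zoned, a zone capacity of 0, and an optimal open zones count of 0. A sketch of that create-time validation follows; the struct, field, and function names are illustrative, not the module's actual ones.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct zone_block_params {
            const char *base_bdev;
            uint64_t zone_capacity;
            uint64_t optimal_open_zones;
    };

    static int
    zone_block_params_check(const struct zone_block_params *p,
                            bool base_is_zoned, bool base_is_claimed)
    {
            if (base_is_claimed || base_is_zoned) {
                    return -EEXIST;  /* "already claimed" / "already a zoned bdev" */
            }
            if (p->zone_capacity == 0) {
                    return -EINVAL;  /* "Zone capacity can't be 0" */
            }
            if (p->optimal_open_zones == 0) {
                    return -EINVAL;  /* "Optimal open zones can't be 0" */
            }
            return 0;
    }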
00:05:51.962 passed 00:05:51.962 Test: test_supported_io_types ...passed 00:05:51.962 Test: test_reset_zone ...[2024-07-24 23:49:47.794641] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.962 [2024-07-24 23:49:47.794727] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.962 passed 00:05:51.962 Test: test_open_zone ...[2024-07-24 23:49:47.795284] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.962 [2024-07-24 23:49:47.796187] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.962 [2024-07-24 23:49:47.796272] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.962 passed 00:05:51.962 Test: test_zone_write ...[2024-07-24 23:49:47.796978] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:51.963 [2024-07-24 23:49:47.797028] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 [2024-07-24 23:49:47.797095] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:51.963 [2024-07-24 23:49:47.797126] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 [2024-07-24 23:49:47.804223] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:51.963 [2024-07-24 23:49:47.804301] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 [2024-07-24 23:49:47.804360] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:51.963 [2024-07-24 23:49:47.804383] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 [2024-07-24 23:49:47.811607] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:51.963 [2024-07-24 23:49:47.811669] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 passed 00:05:51.963 Test: test_zone_read ...[2024-07-24 23:49:47.812277] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:51.963 [2024-07-24 23:49:47.812326] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:51.963 [2024-07-24 23:49:47.812389] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:51.963 [2024-07-24 23:49:47.812431] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 [2024-07-24 23:49:47.812985] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:51.963 [2024-07-24 23:49:47.813042] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 passed 00:05:51.963 Test: test_close_zone ...[2024-07-24 23:49:47.813474] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 [2024-07-24 23:49:47.813567] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 [2024-07-24 23:49:47.813774] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 [2024-07-24 23:49:47.813848] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 passed 00:05:51.963 Test: test_finish_zone ...[2024-07-24 23:49:47.814356] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 [2024-07-24 23:49:47.814440] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 passed 00:05:51.963 Test: test_append_zone ...[2024-07-24 23:49:47.814762] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:51.963 [2024-07-24 23:49:47.814815] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 [2024-07-24 23:49:47.814880] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:51.963 [2024-07-24 23:49:47.814897] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:51.963 [2024-07-24 23:49:47.828035] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:51.963 [2024-07-24 23:49:47.828114] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
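The zone write and append failures above all reduce to the sequential-write rule for zoned block devices: a regular write must start exactly at the zone's current write pointer and must not run past the zone's capacity, and a zone in the wrong state accepts no writes at all. A sketch of the address check, using illustrative names:

    #include <errno.h>
    #include <stdint.h>

    static int
    zone_write_address_check(uint64_t lba, uint64_t num_blocks,
                             uint64_t zone_start, uint64_t write_pointer,
                             uint64_t zone_capacity)
    {
            if (lba != write_pointer) {
                    /* e.g. "invalid address (lba 0x407, wp 0x405)" above */
                    return -EINVAL;
            }
            if (lba + num_blocks > zone_start + zone_capacity) {
                    /* "Write exceeds zone capacity" above */
                    return -EINVAL;
            }
            return 0;
    }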
00:05:51.963 passed 00:05:51.963 00:05:51.963 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.963 suites 1 1 n/a 0 0 00:05:51.963 tests 11 11 11 0 0 00:05:51.963 asserts 3437 3437 3437 0 n/a 00:05:51.963 00:05:51.963 Elapsed time = 0.038 seconds 00:05:52.222 23:49:47 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:52.222 00:05:52.222 00:05:52.222 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.222 http://cunit.sourceforge.net/ 00:05:52.222 00:05:52.222 00:05:52.222 Suite: bdev 00:05:52.222 Test: basic ...[2024-07-24 23:49:47.918948] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x570b5f746601): Operation not permitted (rc=-1) 00:05:52.222 [2024-07-24 23:49:47.919330] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x5130000003c0 (0x570b5f7465c0): Operation not permitted (rc=-1) 00:05:52.222 [2024-07-24 23:49:47.919412] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x570b5f746601): Operation not permitted (rc=-1) 00:05:52.222 passed 00:05:52.222 Test: unregister_and_close ...passed 00:05:52.222 Test: unregister_and_close_different_threads ...passed 00:05:52.222 Test: basic_qos ...passed 00:05:52.222 Test: put_channel_during_reset ...passed 00:05:52.481 Test: aborted_reset ...passed 00:05:52.481 Test: aborted_reset_no_outstanding_io ...passed 00:05:52.481 Test: io_during_reset ...passed 00:05:52.481 Test: reset_completions ...passed 00:05:52.481 Test: io_during_qos_queue ...passed 00:05:52.481 Test: io_during_qos_reset ...passed 00:05:52.481 Test: enomem ...passed 00:05:52.481 Test: enomem_multi_bdev ...passed 00:05:52.481 Test: enomem_multi_bdev_unregister ...passed 00:05:52.740 Test: enomem_multi_io_target ...passed 00:05:52.740 Test: qos_dynamic_enable ...passed 00:05:52.740 Test: bdev_histograms_mt ...passed 00:05:52.740 Test: bdev_set_io_timeout_mt ...[2024-07-24 23:49:48.446338] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x5130000003c0 not unregistered 00:05:52.740 passed 00:05:52.740 Test: lock_lba_range_then_submit_io ...[2024-07-24 23:49:48.452709] thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x570b5f746580 already registered (old:0x5130000003c0 new:0x513000000c80) 00:05:52.740 passed 00:05:52.740 Test: unregister_during_reset ...passed 00:05:52.740 Test: event_notify_and_close ...passed 00:05:52.740 Test: unregister_and_qos_poller ...passed 00:05:52.740 Suite: bdev_wrong_thread 00:05:52.740 Test: spdk_bdev_register_wt ...[2024-07-24 23:49:48.545530] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8535:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x519000158b80 (0x519000158b80) 00:05:52.740 passed 00:05:52.740 Test: spdk_bdev_examine_wt ...[2024-07-24 23:49:48.545881] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x519000158b80 (0x519000158b80) 00:05:52.740 passed 00:05:52.740 00:05:52.740 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.740 suites 2 2 n/a 0 0 00:05:52.740 tests 24 24 24 0 0 00:05:52.740 asserts 621 621 621 0 n/a 00:05:52.740 00:05:52.740 Elapsed time = 0.639 seconds 00:05:52.740 00:05:52.740 real 0m2.481s 00:05:52.740 user 0m1.257s 00:05:52.740 sys 0m1.229s 00:05:52.740 23:49:48 unittest.unittest_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.740 23:49:48 
unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:52.740 ************************************ 00:05:52.740 END TEST unittest_bdev 00:05:52.740 ************************************ 00:05:52.740 23:49:48 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:52.999 23:49:48 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:52.999 23:49:48 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:52.999 23:49:48 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:52.999 23:49:48 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:52.999 23:49:48 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.999 23:49:48 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.999 23:49:48 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:52.999 ************************************ 00:05:52.999 START TEST unittest_bdev_raid5f 00:05:52.999 ************************************ 00:05:52.999 23:49:48 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:52.999 00:05:52.999 00:05:52.999 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.999 http://cunit.sourceforge.net/ 00:05:52.999 00:05:52.999 00:05:52.999 Suite: raid5f 00:05:52.999 Test: test_raid5f_start ...passed 00:05:53.566 Test: test_raid5f_submit_read_request ...passed 00:05:53.825 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:05:58.014 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:19.960 Test: test_raid5f_chunk_write_error ...passed 00:06:32.162 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:35.448 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:14.170 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:14.170 00:07:14.170 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.170 suites 1 1 n/a 0 0 00:07:14.170 tests 8 8 8 0 0 00:07:14.170 asserts 518158 518158 518158 0 n/a 00:07:14.170 00:07:14.170 Elapsed time = 78.546 seconds 00:07:14.170 00:07:14.170 real 1m18.646s 00:07:14.170 user 1m15.031s 00:07:14.170 sys 0m3.594s 00:07:14.170 23:51:07 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.170 23:51:07 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:07:14.170 ************************************ 00:07:14.170 END TEST unittest_bdev_raid5f 00:07:14.170 ************************************ 00:07:14.170 23:51:07 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:07:14.170 23:51:07 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.170 23:51:07 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.170 23:51:07 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:14.170 ************************************ 00:07:14.170 START TEST unittest_blob_blobfs 00:07:14.170 ************************************ 00:07:14.170 23:51:07 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1125 -- # unittest_blob 00:07:14.170 
23:51:07 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:14.170 23:51:07 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:14.170 00:07:14.170 00:07:14.170 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.170 http://cunit.sourceforge.net/ 00:07:14.170 00:07:14.170 00:07:14.170 Suite: blob_nocopy_noextent 00:07:14.170 Test: blob_init ...[2024-07-24 23:51:07.362186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:14.170 passed 00:07:14.170 Test: blob_thin_provision ...passed 00:07:14.170 Test: blob_read_only ...passed 00:07:14.170 Test: bs_load ...[2024-07-24 23:51:07.446770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:14.170 passed 00:07:14.170 Test: bs_load_custom_cluster_size ...passed 00:07:14.170 Test: bs_load_after_failed_grow ...passed 00:07:14.170 Test: bs_cluster_sz ...[2024-07-24 23:51:07.470489] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:14.170 [2024-07-24 23:51:07.471003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:14.170 [2024-07-24 23:51:07.471101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:14.170 passed 00:07:14.170 Test: bs_resize_md ...passed 00:07:14.170 Test: bs_destroy ...passed 00:07:14.170 Test: bs_type ...passed 00:07:14.170 Test: bs_super_block ...passed 00:07:14.170 Test: bs_test_recover_cluster_count ...passed 00:07:14.170 Test: bs_grow_live ...passed 00:07:14.170 Test: bs_grow_live_no_space ...passed 00:07:14.170 Test: bs_test_grow ...passed 00:07:14.170 Test: blob_serialize_test ...passed 00:07:14.170 Test: super_block_crc ...passed 00:07:14.170 Test: blob_thin_prov_write_count_io ...passed 00:07:14.170 Test: blob_thin_prov_unmap_cluster ...passed 00:07:14.170 Test: bs_load_iter_test ...passed 00:07:14.170 Test: blob_relations ...[2024-07-24 23:51:07.628983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:14.170 [2024-07-24 23:51:07.629093] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.170 [2024-07-24 23:51:07.630117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:14.170 [2024-07-24 23:51:07.630189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.170 passed 00:07:14.170 Test: blob_relations2 ...[2024-07-24 23:51:07.640510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:14.170 [2024-07-24 23:51:07.640589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.170 [2024-07-24 23:51:07.640630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with 
more than one clone 00:07:14.170 [2024-07-24 23:51:07.640643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.170 [2024-07-24 23:51:07.642348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:14.170 [2024-07-24 23:51:07.642425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.170 [2024-07-24 23:51:07.642967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:14.170 [2024-07-24 23:51:07.643063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.170 passed 00:07:14.170 Test: blob_relations3 ...passed 00:07:14.170 Test: blobstore_clean_power_failure ...passed 00:07:14.170 Test: blob_delete_snapshot_power_failure ...[2024-07-24 23:51:07.751312] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:14.170 [2024-07-24 23:51:07.760382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:14.170 [2024-07-24 23:51:07.760462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:14.170 [2024-07-24 23:51:07.760503] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.170 [2024-07-24 23:51:07.769555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:14.170 [2024-07-24 23:51:07.769650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:14.170 [2024-07-24 23:51:07.769690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:14.170 [2024-07-24 23:51:07.769713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.170 [2024-07-24 23:51:07.778897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:14.170 [2024-07-24 23:51:07.779022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.170 [2024-07-24 23:51:07.788285] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:14.170 [2024-07-24 23:51:07.788438] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.170 [2024-07-24 23:51:07.798169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:14.170 [2024-07-24 23:51:07.798302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.170 passed 00:07:14.170 Test: blob_create_snapshot_power_failure ...[2024-07-24 23:51:07.826836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:14.170 [2024-07-24 23:51:07.844938] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:14.170 [2024-07-24 23:51:07.853962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:14.170 passed 00:07:14.170 Test: blob_io_unit ...passed 00:07:14.170 Test: blob_io_unit_compatibility ...passed 00:07:14.170 Test: blob_ext_md_pages ...passed 00:07:14.170 Test: blob_esnap_io_4096_4096 ...passed 00:07:14.170 Test: blob_esnap_io_512_512 ...passed 00:07:14.170 Test: blob_esnap_io_4096_512 ...passed 00:07:14.170 Test: blob_esnap_io_512_4096 ...passed 00:07:14.170 Test: blob_esnap_clone_resize ...passed 00:07:14.170 Suite: blob_bs_nocopy_noextent 00:07:14.170 Test: blob_open ...passed 00:07:14.170 Test: blob_create ...[2024-07-24 23:51:08.062130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:14.170 passed 00:07:14.170 Test: blob_create_loop ...passed 00:07:14.171 Test: blob_create_fail ...[2024-07-24 23:51:08.142874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:14.171 passed 00:07:14.171 Test: blob_create_internal ...passed 00:07:14.171 Test: blob_create_zero_extent ...passed 00:07:14.171 Test: blob_snapshot ...passed 00:07:14.171 Test: blob_clone ...passed 00:07:14.171 Test: blob_inflate ...[2024-07-24 23:51:08.273200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:14.171 passed 00:07:14.171 Test: blob_delete ...passed 00:07:14.171 Test: blob_resize_test ...[2024-07-24 23:51:08.316651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:14.171 passed 00:07:14.171 Test: blob_resize_thin_test ...passed 00:07:14.171 Test: channel_ops ...passed 00:07:14.171 Test: blob_super ...passed 00:07:14.171 Test: blob_rw_verify_iov ...passed 00:07:14.171 Test: blob_unmap ...passed 00:07:14.171 Test: blob_iter ...passed 00:07:14.171 Test: blob_parse_md ...passed 00:07:14.171 Test: bs_load_pending_removal ...passed 00:07:14.171 Test: bs_unload ...[2024-07-24 23:51:08.527430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:14.171 passed 00:07:14.171 Test: bs_usable_clusters ...passed 00:07:14.171 Test: blob_crc ...[2024-07-24 23:51:08.571635] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:14.171 [2024-07-24 23:51:08.571799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:14.171 passed 00:07:14.171 Test: blob_flags ...passed 00:07:14.171 Test: bs_version ...passed 00:07:14.171 Test: blob_set_xattrs_test ...[2024-07-24 23:51:08.639785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:14.171 [2024-07-24 23:51:08.639912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:14.171 passed 00:07:14.171 Test: blob_thin_prov_alloc ...passed 
00:07:14.171 Test: blob_insert_cluster_msg_test ...passed 00:07:14.171 Test: blob_thin_prov_rw ...passed 00:07:14.171 Test: blob_thin_prov_rle ...passed 00:07:14.171 Test: blob_thin_prov_rw_iov ...passed 00:07:14.171 Test: blob_snapshot_rw ...passed 00:07:14.171 Test: blob_snapshot_rw_iov ...passed 00:07:14.171 Test: blob_inflate_rw ...passed 00:07:14.171 Test: blob_snapshot_freeze_io ...passed 00:07:14.171 Test: blob_operation_split_rw ...passed 00:07:14.171 Test: blob_operation_split_rw_iov ...passed 00:07:14.171 Test: blob_simultaneous_operations ...[2024-07-24 23:51:09.399025] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:14.171 [2024-07-24 23:51:09.399118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.171 [2024-07-24 23:51:09.400294] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:14.171 [2024-07-24 23:51:09.400344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.171 [2024-07-24 23:51:09.410255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:14.171 [2024-07-24 23:51:09.410311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.171 [2024-07-24 23:51:09.410430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:14.171 [2024-07-24 23:51:09.410451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.171 passed 00:07:14.171 Test: blob_persist_test ...passed 00:07:14.171 Test: blob_decouple_snapshot ...passed 00:07:14.171 Test: blob_seek_io_unit ...passed 00:07:14.171 Test: blob_nested_freezes ...passed 00:07:14.171 Test: blob_clone_resize ...passed 00:07:14.171 Test: blob_shallow_copy ...[2024-07-24 23:51:09.592346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:14.171 [2024-07-24 23:51:09.592589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:14.171 [2024-07-24 23:51:09.592745] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:14.171 passed 00:07:14.171 Suite: blob_blob_nocopy_noextent 00:07:14.171 Test: blob_write ...passed 00:07:14.171 Test: blob_read ...passed 00:07:14.171 Test: blob_rw_verify ...passed 00:07:14.171 Test: blob_rw_verify_iov_nomem ...passed 00:07:14.171 Test: blob_rw_iov_read_only ...passed 00:07:14.171 Test: blob_xattr ...passed 00:07:14.171 Test: blob_dirty_shutdown ...passed 00:07:14.171 Test: blob_is_degraded ...passed 00:07:14.171 Suite: blob_esnap_bs_nocopy_noextent 00:07:14.171 Test: blob_esnap_create ...passed 00:07:14.171 Test: blob_esnap_thread_add_remove ...passed 00:07:14.171 Test: blob_esnap_clone_snapshot ...passed 00:07:14.171 Test: blob_esnap_clone_inflate ...passed 00:07:14.171 Test: blob_esnap_clone_decouple ...passed 00:07:14.171 Test: blob_esnap_clone_reload 
...passed 00:07:14.171 Test: blob_esnap_hotplug ...passed 00:07:14.171 Test: blob_set_parent ...[2024-07-24 23:51:09.932768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:14.171 [2024-07-24 23:51:09.932938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:14.171 [2024-07-24 23:51:09.933099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:14.171 [2024-07-24 23:51:09.933133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:14.171 [2024-07-24 23:51:09.933771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:14.171 passed 00:07:14.171 Test: blob_set_external_parent ...[2024-07-24 23:51:09.954839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:14.171 [2024-07-24 23:51:09.954930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:14.171 [2024-07-24 23:51:09.954973] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:14.171 [2024-07-24 23:51:09.955473] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:14.171 passed 00:07:14.171 Suite: blob_nocopy_extent 00:07:14.171 Test: blob_init ...[2024-07-24 23:51:09.963201] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:14.171 passed 00:07:14.171 Test: blob_thin_provision ...passed 00:07:14.171 Test: blob_read_only ...passed 00:07:14.171 Test: bs_load ...[2024-07-24 23:51:09.991864] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:14.171 passed 00:07:14.171 Test: bs_load_custom_cluster_size ...passed 00:07:14.171 Test: bs_load_after_failed_grow ...passed 00:07:14.171 Test: bs_cluster_sz ...[2024-07-24 23:51:10.011505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:14.171 [2024-07-24 23:51:10.011822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:14.171 [2024-07-24 23:51:10.011873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:14.171 passed 00:07:14.171 Test: bs_resize_md ...passed 00:07:14.171 Test: bs_destroy ...passed 00:07:14.430 Test: bs_type ...passed 00:07:14.430 Test: bs_super_block ...passed 00:07:14.430 Test: bs_test_recover_cluster_count ...passed 00:07:14.430 Test: bs_grow_live ...passed 00:07:14.430 Test: bs_grow_live_no_space ...passed 00:07:14.430 Test: bs_test_grow ...passed 00:07:14.430 Test: blob_serialize_test ...passed 00:07:14.430 Test: super_block_crc ...passed 00:07:14.430 Test: blob_thin_prov_write_count_io ...passed 00:07:14.430 Test: blob_thin_prov_unmap_cluster ...passed 00:07:14.430 Test: bs_load_iter_test ...passed 00:07:14.430 Test: blob_relations ...[2024-07-24 23:51:10.132678] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:14.430 [2024-07-24 23:51:10.132800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.430 [2024-07-24 23:51:10.134017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:14.430 [2024-07-24 23:51:10.134076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.430 passed 00:07:14.430 Test: blob_relations2 ...[2024-07-24 23:51:10.143929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:14.430 [2024-07-24 23:51:10.144014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.430 [2024-07-24 23:51:10.144055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:14.430 [2024-07-24 23:51:10.144067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.430 [2024-07-24 23:51:10.145597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:14.430 [2024-07-24 23:51:10.145670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.430 [2024-07-24 23:51:10.146121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:14.430 [2024-07-24 23:51:10.146167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.430 passed 00:07:14.430 Test: blob_relations3 ...passed 00:07:14.430 Test: blobstore_clean_power_failure ...passed 00:07:14.430 Test: blob_delete_snapshot_power_failure ...[2024-07-24 23:51:10.241122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:14.430 [2024-07-24 23:51:10.249254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:14.430 [2024-07-24 23:51:10.257391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:14.430 [2024-07-24 23:51:10.257468] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:14.430 [2024-07-24 23:51:10.257508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.430 [2024-07-24 23:51:10.266240] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:14.430 [2024-07-24 23:51:10.266333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:14.430 [2024-07-24 23:51:10.266370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:14.430 [2024-07-24 23:51:10.266393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.430 [2024-07-24 23:51:10.275273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:14.430 [2024-07-24 23:51:10.275367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:14.430 [2024-07-24 23:51:10.275388] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:14.430 [2024-07-24 23:51:10.275412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.430 [2024-07-24 23:51:10.283856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:14.430 [2024-07-24 23:51:10.283954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.430 [2024-07-24 23:51:10.292160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:14.430 [2024-07-24 23:51:10.292299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.690 [2024-07-24 23:51:10.301760] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:14.690 [2024-07-24 23:51:10.301920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:14.690 passed 00:07:14.690 Test: blob_create_snapshot_power_failure ...[2024-07-24 23:51:10.326555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:14.690 [2024-07-24 23:51:10.334998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:14.690 [2024-07-24 23:51:10.352630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:14.690 [2024-07-24 23:51:10.362445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:14.690 passed 00:07:14.690 Test: blob_io_unit ...passed 00:07:14.690 Test: blob_io_unit_compatibility ...passed 00:07:14.690 Test: blob_ext_md_pages ...passed 00:07:14.690 Test: blob_esnap_io_4096_4096 ...passed 00:07:14.690 Test: blob_esnap_io_512_512 ...passed 00:07:14.690 Test: blob_esnap_io_4096_512 ...passed 00:07:14.690 Test: 
blob_esnap_io_512_4096 ...passed 00:07:14.690 Test: blob_esnap_clone_resize ...passed 00:07:14.690 Suite: blob_bs_nocopy_extent 00:07:14.690 Test: blob_open ...passed 00:07:14.690 Test: blob_create ...[2024-07-24 23:51:10.553953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:14.949 passed 00:07:14.949 Test: blob_create_loop ...passed 00:07:14.949 Test: blob_create_fail ...[2024-07-24 23:51:10.643455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:14.949 passed 00:07:14.949 Test: blob_create_internal ...passed 00:07:14.949 Test: blob_create_zero_extent ...passed 00:07:14.949 Test: blob_snapshot ...passed 00:07:14.949 Test: blob_clone ...passed 00:07:14.949 Test: blob_inflate ...[2024-07-24 23:51:10.751762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:14.949 passed 00:07:14.949 Test: blob_delete ...passed 00:07:14.949 Test: blob_resize_test ...[2024-07-24 23:51:10.789960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:14.949 passed 00:07:15.209 Test: blob_resize_thin_test ...passed 00:07:15.209 Test: channel_ops ...passed 00:07:15.209 Test: blob_super ...passed 00:07:15.209 Test: blob_rw_verify_iov ...passed 00:07:15.209 Test: blob_unmap ...passed 00:07:15.209 Test: blob_iter ...passed 00:07:15.209 Test: blob_parse_md ...passed 00:07:15.209 Test: bs_load_pending_removal ...passed 00:07:15.209 Test: bs_unload ...[2024-07-24 23:51:10.971142] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:15.209 passed 00:07:15.209 Test: bs_usable_clusters ...passed 00:07:15.209 Test: blob_crc ...[2024-07-24 23:51:11.010071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:15.209 [2024-07-24 23:51:11.010191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:15.209 passed 00:07:15.209 Test: blob_flags ...passed 00:07:15.209 Test: bs_version ...passed 00:07:15.209 Test: blob_set_xattrs_test ...[2024-07-24 23:51:11.070428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:15.209 [2024-07-24 23:51:11.070543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:15.209 passed 00:07:15.468 Test: blob_thin_prov_alloc ...passed 00:07:15.468 Test: blob_insert_cluster_msg_test ...passed 00:07:15.468 Test: blob_thin_prov_rw ...passed 00:07:15.468 Test: blob_thin_prov_rle ...passed 00:07:15.468 Test: blob_thin_prov_rw_iov ...passed 00:07:15.468 Test: blob_snapshot_rw ...passed 00:07:15.468 Test: blob_snapshot_rw_iov ...passed 00:07:15.726 Test: blob_inflate_rw ...passed 00:07:15.726 Test: blob_snapshot_freeze_io ...passed 00:07:15.985 Test: blob_operation_split_rw ...passed 00:07:15.985 Test: blob_operation_split_rw_iov ...passed 00:07:15.985 Test: blob_simultaneous_operations ...[2024-07-24 23:51:11.805585] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:15.985 [2024-07-24 23:51:11.805700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:15.985 [2024-07-24 23:51:11.806769] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:15.985 [2024-07-24 23:51:11.806877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:15.985 [2024-07-24 23:51:11.816363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:15.985 [2024-07-24 23:51:11.816425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:15.985 [2024-07-24 23:51:11.816554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:15.985 [2024-07-24 23:51:11.816572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:15.985 passed 00:07:15.985 Test: blob_persist_test ...passed 00:07:16.244 Test: blob_decouple_snapshot ...passed 00:07:16.244 Test: blob_seek_io_unit ...passed 00:07:16.244 Test: blob_nested_freezes ...passed 00:07:16.244 Test: blob_clone_resize ...passed 00:07:16.244 Test: blob_shallow_copy ...[2024-07-24 23:51:11.997415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:16.244 [2024-07-24 23:51:11.997660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:16.244 [2024-07-24 23:51:11.997781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:16.244 passed 00:07:16.244 Suite: blob_blob_nocopy_extent 00:07:16.244 Test: blob_write ...passed 00:07:16.244 Test: blob_read ...passed 00:07:16.244 Test: blob_rw_verify ...passed 00:07:16.244 Test: blob_rw_verify_iov_nomem ...passed 00:07:16.244 Test: blob_rw_iov_read_only ...passed 00:07:16.503 Test: blob_xattr ...passed 00:07:16.503 Test: blob_dirty_shutdown ...passed 00:07:16.503 Test: blob_is_degraded ...passed 00:07:16.503 Suite: blob_esnap_bs_nocopy_extent 00:07:16.503 Test: blob_esnap_create ...passed 00:07:16.503 Test: blob_esnap_thread_add_remove ...passed 00:07:16.503 Test: blob_esnap_clone_snapshot ...passed 00:07:16.503 Test: blob_esnap_clone_inflate ...passed 00:07:16.503 Test: blob_esnap_clone_decouple ...passed 00:07:16.503 Test: blob_esnap_clone_reload ...passed 00:07:16.503 Test: blob_esnap_hotplug ...passed 00:07:16.503 Test: blob_set_parent ...[2024-07-24 23:51:12.330988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:16.503 [2024-07-24 23:51:12.331082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:16.503 [2024-07-24 23:51:12.331203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:16.503 
[2024-07-24 23:51:12.331230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:16.503 [2024-07-24 23:51:12.331715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:16.503 passed 00:07:16.503 Test: blob_set_external_parent ...[2024-07-24 23:51:12.352040] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:16.503 [2024-07-24 23:51:12.352132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:16.503 [2024-07-24 23:51:12.352166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:16.503 [2024-07-24 23:51:12.352552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:16.503 passed 00:07:16.503 Suite: blob_copy_noextent 00:07:16.503 Test: blob_init ...[2024-07-24 23:51:12.359535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:16.503 passed 00:07:16.762 Test: blob_thin_provision ...passed 00:07:16.762 Test: blob_read_only ...passed 00:07:16.762 Test: bs_load ...[2024-07-24 23:51:12.388000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:16.762 passed 00:07:16.762 Test: bs_load_custom_cluster_size ...passed 00:07:16.762 Test: bs_load_after_failed_grow ...passed 00:07:16.762 Test: bs_cluster_sz ...[2024-07-24 23:51:12.403498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:16.762 [2024-07-24 23:51:12.403684] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:16.762 [2024-07-24 23:51:12.403721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:16.762 passed 00:07:16.762 Test: bs_resize_md ...passed 00:07:16.762 Test: bs_destroy ...passed 00:07:16.762 Test: bs_type ...passed 00:07:16.762 Test: bs_super_block ...passed 00:07:16.762 Test: bs_test_recover_cluster_count ...passed 00:07:16.762 Test: bs_grow_live ...passed 00:07:16.762 Test: bs_grow_live_no_space ...passed 00:07:16.762 Test: bs_test_grow ...passed 00:07:16.762 Test: blob_serialize_test ...passed 00:07:16.762 Test: super_block_crc ...passed 00:07:16.762 Test: blob_thin_prov_write_count_io ...passed 00:07:16.762 Test: blob_thin_prov_unmap_cluster ...passed 00:07:16.762 Test: bs_load_iter_test ...passed 00:07:16.762 Test: blob_relations ...[2024-07-24 23:51:12.535804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:16.762 [2024-07-24 23:51:12.535929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:16.762 [2024-07-24 23:51:12.536569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:16.762 [2024-07-24 23:51:12.536642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:16.762 passed 00:07:16.762 Test: blob_relations2 ...[2024-07-24 23:51:12.546799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:16.762 [2024-07-24 23:51:12.546897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:16.762 [2024-07-24 23:51:12.546921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:16.762 [2024-07-24 23:51:12.546933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:16.762 [2024-07-24 23:51:12.548014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:16.762 [2024-07-24 23:51:12.548059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:16.762 [2024-07-24 23:51:12.548428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:16.762 [2024-07-24 23:51:12.548468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:16.762 passed 00:07:16.762 Test: blob_relations3 ...passed 00:07:17.021 Test: blobstore_clean_power_failure ...passed 00:07:17.021 Test: blob_delete_snapshot_power_failure ...[2024-07-24 23:51:12.643183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:17.021 [2024-07-24 23:51:12.650921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:17.021 [2024-07-24 23:51:12.651000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:17.021 [2024-07-24 23:51:12.651036] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:17.021 [2024-07-24 23:51:12.658718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:17.021 [2024-07-24 23:51:12.658824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:17.021 [2024-07-24 23:51:12.658842] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:17.021 [2024-07-24 23:51:12.658861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:17.021 [2024-07-24 23:51:12.666514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:17.021 [2024-07-24 23:51:12.666629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:17.021 [2024-07-24 23:51:12.674527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:17.021 [2024-07-24 23:51:12.674648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:17.021 [2024-07-24 23:51:12.682512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:17.021 [2024-07-24 23:51:12.682616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:17.021 passed 00:07:17.021 Test: blob_create_snapshot_power_failure ...[2024-07-24 23:51:12.705402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:17.021 [2024-07-24 23:51:12.721091] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:17.021 [2024-07-24 23:51:12.728949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:17.021 passed 00:07:17.021 Test: blob_io_unit ...passed 00:07:17.021 Test: blob_io_unit_compatibility ...passed 00:07:17.021 Test: blob_ext_md_pages ...passed 00:07:17.021 Test: blob_esnap_io_4096_4096 ...passed 00:07:17.021 Test: blob_esnap_io_512_512 ...passed 00:07:17.021 Test: blob_esnap_io_4096_512 ...passed 00:07:17.021 Test: blob_esnap_io_512_4096 ...passed 00:07:17.021 Test: blob_esnap_clone_resize ...passed 00:07:17.021 Suite: blob_bs_copy_noextent 00:07:17.280 Test: blob_open ...passed 00:07:17.280 Test: blob_create ...[2024-07-24 23:51:12.910236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:17.280 passed 00:07:17.280 Test: blob_create_loop ...passed 00:07:17.280 Test: blob_create_fail ...[2024-07-24 23:51:12.985275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:17.280 passed 00:07:17.280 Test: blob_create_internal ...passed 00:07:17.280 Test: blob_create_zero_extent ...passed 00:07:17.280 Test: blob_snapshot ...passed 00:07:17.280 Test: blob_clone ...passed 00:07:17.280 Test: blob_inflate 
...[2024-07-24 23:51:13.083400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:17.280 passed 00:07:17.280 Test: blob_delete ...passed 00:07:17.280 Test: blob_resize_test ...[2024-07-24 23:51:13.121546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:17.280 passed 00:07:17.539 Test: blob_resize_thin_test ...passed 00:07:17.539 Test: channel_ops ...passed 00:07:17.539 Test: blob_super ...passed 00:07:17.539 Test: blob_rw_verify_iov ...passed 00:07:17.539 Test: blob_unmap ...passed 00:07:17.539 Test: blob_iter ...passed 00:07:17.539 Test: blob_parse_md ...passed 00:07:17.539 Test: bs_load_pending_removal ...passed 00:07:17.539 Test: bs_unload ...[2024-07-24 23:51:13.314818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:17.539 passed 00:07:17.539 Test: bs_usable_clusters ...passed 00:07:17.539 Test: blob_crc ...[2024-07-24 23:51:13.356203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:17.539 [2024-07-24 23:51:13.356359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:17.539 passed 00:07:17.539 Test: blob_flags ...passed 00:07:17.539 Test: bs_version ...passed 00:07:17.797 Test: blob_set_xattrs_test ...[2024-07-24 23:51:13.419859] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:17.797 [2024-07-24 23:51:13.419969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:17.797 passed 00:07:17.797 Test: blob_thin_prov_alloc ...passed 00:07:17.797 Test: blob_insert_cluster_msg_test ...passed 00:07:17.797 Test: blob_thin_prov_rw ...passed 00:07:17.797 Test: blob_thin_prov_rle ...passed 00:07:17.797 Test: blob_thin_prov_rw_iov ...passed 00:07:18.054 Test: blob_snapshot_rw ...passed 00:07:18.054 Test: blob_snapshot_rw_iov ...passed 00:07:18.054 Test: blob_inflate_rw ...passed 00:07:18.054 Test: blob_snapshot_freeze_io ...passed 00:07:18.312 Test: blob_operation_split_rw ...passed 00:07:18.313 Test: blob_operation_split_rw_iov ...passed 00:07:18.313 Test: blob_simultaneous_operations ...[2024-07-24 23:51:14.160751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:18.313 [2024-07-24 23:51:14.160885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:18.313 [2024-07-24 23:51:14.161327] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:18.313 [2024-07-24 23:51:14.161362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:18.313 [2024-07-24 23:51:14.163467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:18.313 [2024-07-24 23:51:14.163520] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:18.313 [2024-07-24 23:51:14.163611] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:18.313 [2024-07-24 23:51:14.163627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:18.313 passed 00:07:18.571 Test: blob_persist_test ...passed 00:07:18.571 Test: blob_decouple_snapshot ...passed 00:07:18.571 Test: blob_seek_io_unit ...passed 00:07:18.571 Test: blob_nested_freezes ...passed 00:07:18.571 Test: blob_clone_resize ...passed 00:07:18.571 Test: blob_shallow_copy ...[2024-07-24 23:51:14.319857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:18.571 [2024-07-24 23:51:14.320099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:18.571 [2024-07-24 23:51:14.320222] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:18.571 passed 00:07:18.571 Suite: blob_blob_copy_noextent 00:07:18.571 Test: blob_write ...passed 00:07:18.571 Test: blob_read ...passed 00:07:18.571 Test: blob_rw_verify ...passed 00:07:18.571 Test: blob_rw_verify_iov_nomem ...passed 00:07:18.571 Test: blob_rw_iov_read_only ...passed 00:07:18.829 Test: blob_xattr ...passed 00:07:18.829 Test: blob_dirty_shutdown ...passed 00:07:18.829 Test: blob_is_degraded ...passed 00:07:18.829 Suite: blob_esnap_bs_copy_noextent 00:07:18.829 Test: blob_esnap_create ...passed 00:07:18.829 Test: blob_esnap_thread_add_remove ...passed 00:07:18.829 Test: blob_esnap_clone_snapshot ...passed 00:07:18.829 Test: blob_esnap_clone_inflate ...passed 00:07:18.829 Test: blob_esnap_clone_decouple ...passed 00:07:18.829 Test: blob_esnap_clone_reload ...passed 00:07:18.829 Test: blob_esnap_hotplug ...passed 00:07:18.830 Test: blob_set_parent ...[2024-07-24 23:51:14.664635] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:18.830 [2024-07-24 23:51:14.664749] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:18.830 [2024-07-24 23:51:14.664855] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:18.830 [2024-07-24 23:51:14.664926] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:18.830 [2024-07-24 23:51:14.665392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:18.830 passed 00:07:18.830 Test: blob_set_external_parent ...[2024-07-24 23:51:14.686439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:18.830 [2024-07-24 23:51:14.686535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:18.830 [2024-07-24 23:51:14.686569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:07:18.830 [2024-07-24 23:51:14.686934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:18.830 passed 00:07:18.830 Suite: blob_copy_extent 00:07:18.830 Test: blob_init ...[2024-07-24 23:51:14.694332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:19.088 passed 00:07:19.088 Test: blob_thin_provision ...passed 00:07:19.088 Test: blob_read_only ...passed 00:07:19.088 Test: bs_load ...[2024-07-24 23:51:14.723165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:19.088 passed 00:07:19.088 Test: bs_load_custom_cluster_size ...passed 00:07:19.088 Test: bs_load_after_failed_grow ...passed 00:07:19.088 Test: bs_cluster_sz ...[2024-07-24 23:51:14.738853] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:19.088 [2024-07-24 23:51:14.739039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:19.088 [2024-07-24 23:51:14.739075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:19.088 passed 00:07:19.088 Test: bs_resize_md ...passed 00:07:19.088 Test: bs_destroy ...passed 00:07:19.088 Test: bs_type ...passed 00:07:19.088 Test: bs_super_block ...passed 00:07:19.088 Test: bs_test_recover_cluster_count ...passed 00:07:19.088 Test: bs_grow_live ...passed 00:07:19.088 Test: bs_grow_live_no_space ...passed 00:07:19.088 Test: bs_test_grow ...passed 00:07:19.088 Test: blob_serialize_test ...passed 00:07:19.088 Test: super_block_crc ...passed 00:07:19.088 Test: blob_thin_prov_write_count_io ...passed 00:07:19.088 Test: blob_thin_prov_unmap_cluster ...passed 00:07:19.088 Test: bs_load_iter_test ...passed 00:07:19.089 Test: blob_relations ...[2024-07-24 23:51:14.848825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.089 [2024-07-24 23:51:14.848967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.089 [2024-07-24 23:51:14.849658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.089 [2024-07-24 23:51:14.849701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.089 passed 00:07:19.089 Test: blob_relations2 ...[2024-07-24 23:51:14.858676] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.089 [2024-07-24 23:51:14.858751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.089 [2024-07-24 23:51:14.858787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.089 [2024-07-24 23:51:14.858802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.089 [2024-07-24 
23:51:14.859788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.089 [2024-07-24 23:51:14.859866] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.089 [2024-07-24 23:51:14.860186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:19.089 [2024-07-24 23:51:14.860226] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.089 passed 00:07:19.089 Test: blob_relations3 ...passed 00:07:19.089 Test: blobstore_clean_power_failure ...passed 00:07:19.089 Test: blob_delete_snapshot_power_failure ...[2024-07-24 23:51:14.951912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:19.347 [2024-07-24 23:51:14.960230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:19.348 [2024-07-24 23:51:14.968093] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:19.348 [2024-07-24 23:51:14.968170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:19.348 [2024-07-24 23:51:14.968205] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.348 [2024-07-24 23:51:14.975734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:19.348 [2024-07-24 23:51:14.975838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:19.348 [2024-07-24 23:51:14.975857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:19.348 [2024-07-24 23:51:14.975874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.348 [2024-07-24 23:51:14.983527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:19.348 [2024-07-24 23:51:14.983623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:19.348 [2024-07-24 23:51:14.983640] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:19.348 [2024-07-24 23:51:14.983658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.348 [2024-07-24 23:51:14.991948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:19.348 [2024-07-24 23:51:14.992063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.348 [2024-07-24 23:51:15.000460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:19.348 [2024-07-24 23:51:15.000591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.348 [2024-07-24 23:51:15.008656] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:19.348 [2024-07-24 23:51:15.008762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:19.348 passed 00:07:19.348 Test: blob_create_snapshot_power_failure ...[2024-07-24 23:51:15.030805] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:19.348 [2024-07-24 23:51:15.038014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:19.348 [2024-07-24 23:51:15.051934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:19.348 [2024-07-24 23:51:15.059606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:19.348 passed 00:07:19.348 Test: blob_io_unit ...passed 00:07:19.348 Test: blob_io_unit_compatibility ...passed 00:07:19.348 Test: blob_ext_md_pages ...passed 00:07:19.348 Test: blob_esnap_io_4096_4096 ...passed 00:07:19.348 Test: blob_esnap_io_512_512 ...passed 00:07:19.348 Test: blob_esnap_io_4096_512 ...passed 00:07:19.348 Test: blob_esnap_io_512_4096 ...passed 00:07:19.348 Test: blob_esnap_clone_resize ...passed 00:07:19.348 Suite: blob_bs_copy_extent 00:07:19.606 Test: blob_open ...passed 00:07:19.606 Test: blob_create ...[2024-07-24 23:51:15.238433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:19.606 passed 00:07:19.606 Test: blob_create_loop ...passed 00:07:19.606 Test: blob_create_fail ...[2024-07-24 23:51:15.312963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:19.606 passed 00:07:19.606 Test: blob_create_internal ...passed 00:07:19.606 Test: blob_create_zero_extent ...passed 00:07:19.606 Test: blob_snapshot ...passed 00:07:19.606 Test: blob_clone ...passed 00:07:19.606 Test: blob_inflate ...[2024-07-24 23:51:15.420073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:07:19.606 passed 00:07:19.606 Test: blob_delete ...passed 00:07:19.606 Test: blob_resize_test ...[2024-07-24 23:51:15.457279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:19.606 passed 00:07:19.865 Test: blob_resize_thin_test ...passed 00:07:19.865 Test: channel_ops ...passed 00:07:19.865 Test: blob_super ...passed 00:07:19.865 Test: blob_rw_verify_iov ...passed 00:07:19.865 Test: blob_unmap ...passed 00:07:19.865 Test: blob_iter ...passed 00:07:19.865 Test: blob_parse_md ...passed 00:07:19.865 Test: bs_load_pending_removal ...passed 00:07:19.865 Test: bs_unload ...[2024-07-24 23:51:15.649081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:19.865 passed 00:07:19.865 Test: bs_usable_clusters ...passed 00:07:19.865 Test: blob_crc ...[2024-07-24 23:51:15.692213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:19.865 [2024-07-24 23:51:15.692322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:19.865 passed 00:07:19.865 Test: blob_flags ...passed 00:07:20.125 Test: bs_version ...passed 00:07:20.125 Test: blob_set_xattrs_test ...[2024-07-24 23:51:15.755650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:20.125 [2024-07-24 23:51:15.755783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:20.125 passed 00:07:20.125 Test: blob_thin_prov_alloc ...passed 00:07:20.125 Test: blob_insert_cluster_msg_test ...passed 00:07:20.125 Test: blob_thin_prov_rw ...passed 00:07:20.125 Test: blob_thin_prov_rle ...passed 00:07:20.125 Test: blob_thin_prov_rw_iov ...passed 00:07:20.125 Test: blob_snapshot_rw ...passed 00:07:20.125 Test: blob_snapshot_rw_iov ...passed 00:07:20.391 Test: blob_inflate_rw ...passed 00:07:20.391 Test: blob_snapshot_freeze_io ...passed 00:07:20.667 Test: blob_operation_split_rw ...passed 00:07:20.667 Test: blob_operation_split_rw_iov ...passed 00:07:20.667 Test: blob_simultaneous_operations ...[2024-07-24 23:51:16.451561] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:20.667 [2024-07-24 23:51:16.451659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:20.667 [2024-07-24 23:51:16.452090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:20.667 [2024-07-24 23:51:16.452121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:20.667 [2024-07-24 23:51:16.454184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:20.667 [2024-07-24 23:51:16.454236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:20.667 [2024-07-24 23:51:16.454332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:20.667 [2024-07-24 23:51:16.454349] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:20.667 passed 00:07:20.667 Test: blob_persist_test ...passed 00:07:20.667 Test: blob_decouple_snapshot ...passed 00:07:20.667 Test: blob_seek_io_unit ...passed 00:07:20.926 Test: blob_nested_freezes ...passed 00:07:20.926 Test: blob_clone_resize ...passed 00:07:20.926 Test: blob_shallow_copy ...[2024-07-24 23:51:16.603855] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:20.926 [2024-07-24 23:51:16.604093] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:20.926 [2024-07-24 23:51:16.604214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:20.926 passed 00:07:20.926 Suite: blob_blob_copy_extent 00:07:20.926 Test: blob_write ...passed 00:07:20.926 Test: blob_read ...passed 00:07:20.926 Test: blob_rw_verify ...passed 00:07:20.926 Test: blob_rw_verify_iov_nomem ...passed 00:07:20.926 Test: blob_rw_iov_read_only ...passed 00:07:20.926 Test: blob_xattr ...passed 00:07:20.926 Test: blob_dirty_shutdown ...passed 00:07:20.926 Test: blob_is_degraded ...passed 00:07:20.926 Suite: blob_esnap_bs_copy_extent 00:07:21.186 Test: blob_esnap_create ...passed 00:07:21.186 Test: blob_esnap_thread_add_remove ...passed 00:07:21.186 Test: blob_esnap_clone_snapshot ...passed 00:07:21.186 Test: blob_esnap_clone_inflate ...passed 00:07:21.186 Test: blob_esnap_clone_decouple ...passed 00:07:21.186 Test: blob_esnap_clone_reload ...passed 00:07:21.186 Test: blob_esnap_hotplug ...passed 00:07:21.186 Test: blob_set_parent ...[2024-07-24 23:51:16.952341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:21.186 [2024-07-24 23:51:16.952431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:21.186 [2024-07-24 23:51:16.952550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:21.186 [2024-07-24 23:51:16.952593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:21.186 [2024-07-24 23:51:16.953195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:21.186 passed 00:07:21.186 Test: blob_set_external_parent ...[2024-07-24 23:51:16.974028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:21.186 [2024-07-24 23:51:16.974114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:21.186 [2024-07-24 23:51:16.974149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:21.186 [2024-07-24 23:51:16.974568] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:21.186 passed 00:07:21.186 00:07:21.186 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.186 suites 16 16 n/a 0 0 00:07:21.186 tests 376 376 376 0 0 00:07:21.186 asserts 143973 143973 143973 0 n/a 00:07:21.186 00:07:21.186 Elapsed time = 9.620 seconds 00:07:21.186 23:51:17 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:21.445 00:07:21.445 00:07:21.445 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.445 http://cunit.sourceforge.net/ 00:07:21.445 00:07:21.445 00:07:21.445 Suite: blob_bdev 00:07:21.445 Test: create_bs_dev ...passed 00:07:21.445 Test: create_bs_dev_ro ...[2024-07-24 23:51:17.073401] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:21.445 passed 00:07:21.445 Test: create_bs_dev_rw ...passed 00:07:21.446 Test: claim_bs_dev ...[2024-07-24 23:51:17.073776] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:21.446 passed 00:07:21.446 Test: claim_bs_dev_ro ...passed 00:07:21.446 Test: deferred_destroy_refs ...passed 00:07:21.446 Test: deferred_destroy_channels ...passed 00:07:21.446 Test: deferred_destroy_threads ...passed 00:07:21.446 00:07:21.446 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.446 suites 1 1 n/a 0 0 00:07:21.446 tests 8 8 8 0 0 00:07:21.446 asserts 119 119 119 0 n/a 00:07:21.446 00:07:21.446 Elapsed time = 0.001 seconds 00:07:21.446 23:51:17 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:21.446 00:07:21.446 00:07:21.446 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.446 http://cunit.sourceforge.net/ 00:07:21.446 00:07:21.446 00:07:21.446 Suite: tree 00:07:21.446 Test: blobfs_tree_op_test ...passed 00:07:21.446 00:07:21.446 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.446 suites 1 1 n/a 0 0 00:07:21.446 tests 1 1 1 0 0 00:07:21.446 asserts 27 27 27 0 n/a 00:07:21.446 00:07:21.446 Elapsed time = 0.000 seconds 00:07:21.446 23:51:17 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:21.446 00:07:21.446 00:07:21.446 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.446 http://cunit.sourceforge.net/ 00:07:21.446 00:07:21.446 00:07:21.446 Suite: blobfs_async_ut 00:07:21.446 Test: fs_init ...passed 00:07:21.446 Test: fs_open ...passed 00:07:21.446 Test: fs_create ...passed 00:07:21.446 Test: fs_truncate ...passed 00:07:21.446 Test: fs_rename ...[2024-07-24 23:51:17.216541] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:21.446 passed 00:07:21.446 Test: fs_rw_async ...passed 00:07:21.446 Test: fs_writev_readv_async ...passed 00:07:21.446 Test: tree_find_buffer_ut ...passed 00:07:21.446 Test: channel_ops ...passed 00:07:21.446 Test: channel_ops_sync ...passed 00:07:21.446 00:07:21.446 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.446 suites 1 1 n/a 0 0 00:07:21.446 tests 10 10 10 0 0 00:07:21.446 asserts 292 292 292 0 n/a 00:07:21.446 00:07:21.446 Elapsed time = 0.119 seconds 00:07:21.446 23:51:17 unittest.unittest_blob_blobfs -- 
unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:21.446 00:07:21.446 00:07:21.446 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.446 http://cunit.sourceforge.net/ 00:07:21.446 00:07:21.446 00:07:21.446 Suite: blobfs_sync_ut 00:07:21.705 Test: cache_read_after_write ...[2024-07-24 23:51:17.380211] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:21.705 passed 00:07:21.705 Test: file_length ...passed 00:07:21.705 Test: append_write_to_extend_blob ...passed 00:07:21.705 Test: partial_buffer ...passed 00:07:21.705 Test: cache_write_null_buffer ...passed 00:07:21.705 Test: fs_create_sync ...passed 00:07:21.705 Test: fs_rename_sync ...passed 00:07:21.705 Test: cache_append_no_cache ...passed 00:07:21.705 Test: fs_delete_file_without_close ...passed 00:07:21.705 00:07:21.705 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.705 suites 1 1 n/a 0 0 00:07:21.705 tests 9 9 9 0 0 00:07:21.705 asserts 345 345 345 0 n/a 00:07:21.705 00:07:21.705 Elapsed time = 0.332 seconds 00:07:21.705 23:51:17 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:21.705 00:07:21.705 00:07:21.705 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.705 http://cunit.sourceforge.net/ 00:07:21.705 00:07:21.705 00:07:21.705 Suite: blobfs_bdev_ut 00:07:21.705 Test: spdk_blobfs_bdev_detect_test ...[2024-07-24 23:51:17.552014] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:21.705 passed 00:07:21.705 Test: spdk_blobfs_bdev_create_test ...[2024-07-24 23:51:17.552342] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:21.705 passed 00:07:21.705 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:21.705 00:07:21.705 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.705 suites 1 1 n/a 0 0 00:07:21.705 tests 3 3 3 0 0 00:07:21.705 asserts 9 9 9 0 n/a 00:07:21.705 00:07:21.705 Elapsed time = 0.001 seconds 00:07:21.705 00:07:21.705 real 0m10.225s 00:07:21.705 user 0m9.690s 00:07:21.705 sys 0m0.715s 00:07:21.705 23:51:17 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.705 23:51:17 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:07:21.705 ************************************ 00:07:21.705 END TEST unittest_blob_blobfs 00:07:21.705 ************************************ 00:07:21.965 23:51:17 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:07:21.965 23:51:17 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.965 23:51:17 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.965 23:51:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:21.965 ************************************ 00:07:21.965 START TEST unittest_event 00:07:21.965 ************************************ 00:07:21.965 23:51:17 unittest.unittest_event -- common/autotest_common.sh@1125 -- # unittest_event 00:07:21.965 23:51:17 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:21.965 00:07:21.965 00:07:21.965 CUnit - A unit testing framework for C - Version 2.1-3 
00:07:21.965 http://cunit.sourceforge.net/ 00:07:21.965 00:07:21.965 00:07:21.965 Suite: app_suite 00:07:21.965 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:21.965 00:07:21.965 CPU options: 00:07:21.965 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:21.965 (like [0,1,10]) 00:07:21.965 --lcores lcore to CPU mapping list. The list is in the format: 00:07:21.965 [<,lcores[@CPUs]>...] 00:07:21.965 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:21.965 Within the group, '-' is used for range separator, 00:07:21.965 ',' is used for single number separator. 00:07:21.965 '( )' can be omitted for single element group, 00:07:21.965 '@' can be omitted if cpus and lcores have the same value 00:07:21.965 --disable-cpumask-locks Disable CPU core lock files. 00:07:21.965 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:21.965 pollers in the app support interrupt mode) 00:07:21.965 -p, --main-core main (primary) core for DPDK 00:07:21.965 00:07:21.965 Configuration options: 00:07:21.965 -c, --config, --json JSON config file 00:07:21.965 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:21.965 app_ut: invalid option -- 'z' 00:07:21.965 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:21.965 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:21.965 --rpcs-allowed comma-separated list of permitted RPCS 00:07:21.965 --json-ignore-init-errors don't exit on invalid config entry 00:07:21.965 00:07:21.965 Memory options: 00:07:21.965 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:21.965 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:21.965 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:21.965 -R, --huge-unlink unlink huge files after initialization 00:07:21.965 -n, --mem-channels number of memory channels used for DPDK 00:07:21.965 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:21.965 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:21.965 --no-huge run without using hugepages 00:07:21.965 -i, --shm-id shared memory ID (optional) 00:07:21.965 -g, --single-file-segments force creating just one hugetlbfs file 00:07:21.965 00:07:21.965 PCI options: 00:07:21.965 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:21.965 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:21.965 -u, --no-pci disable PCI access 00:07:21.965 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:21.965 00:07:21.965 Log options: 00:07:21.965 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:21.965 --silence-noticelog disable notice level logging to stderr 00:07:21.965 00:07:21.965 Trace options: 00:07:21.966 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:21.966 setting 0 to disable trace (default 32768) 00:07:21.966 Tracepoints vary in size and can use more than one trace entry. 00:07:21.966 -e, --tpoint-group [:] 00:07:21.966 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:21.966 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:21.966 a tracepoint group. First tpoint inside a group can be enabled by 00:07:21.966 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:07:21.966 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:21.966 in /include/spdk_internal/trace_defs.h 00:07:21.966 00:07:21.966 Other options: 00:07:21.966 -h, --help show this usage 00:07:21.966 -v, --version print SPDK version 00:07:21.966 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:21.966 --env-context Opaque context for use of the env implementation 00:07:21.966 app_ut [options] 00:07:21.966 00:07:21.966 CPU options: 00:07:21.966 app_ut: unrecognized option '--test-long-opt' 00:07:21.966 [2024-07-24 23:51:17.637481] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. app_ut [options] 00:07:21.966 [2024-07-24 23:51:17.637796] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:21.966 passed 00:07:21.966 00:07:21.966 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.966 suites 1 1 n/a 0 0 00:07:21.966 tests 1 1 1 0 0 00:07:21.966 asserts 8 8 8 0 n/a 00:07:21.966 00:07:21.966 Elapsed time = 0.002 seconds 00:07:21.966 [2024-07-24 23:51:17.638048] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:21.967 23:51:17 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:21.967 00:07:21.967 00:07:21.967 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.967 http://cunit.sourceforge.net/ 00:07:21.967 00:07:21.967 00:07:21.967 Suite: app_suite 00:07:21.967 Test: test_create_reactor ...passed 00:07:21.967 Test: test_init_reactors ...passed 00:07:21.967 Test: test_event_call ...passed 00:07:21.967 Test: test_schedule_thread ...passed 00:07:21.967 Test: test_reschedule_thread ...passed 00:07:21.967 Test: test_bind_thread ...passed 00:07:21.967 Test: test_for_each_reactor ...passed 00:07:21.967 Test: test_reactor_stats ...passed 00:07:21.967 Test: test_scheduler ...passed 00:07:21.967 Test: test_governor ...passed 00:07:21.967 00:07:21.967 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.967 suites 1 1 n/a 0 0 00:07:21.967 tests 10 10 10 0 0 00:07:21.967 asserts 344 344 344 0 n/a 00:07:21.967 00:07:21.967 Elapsed time = 0.025 seconds 00:07:21.967 00:07:21.967 real 0m0.098s 00:07:21.967 user 0m0.053s 00:07:21.967 sys 0m0.045s 00:07:21.967 23:51:17 unittest.unittest_event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.967 23:51:17 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:07:21.967 ************************************ 00:07:21.967 END TEST unittest_event 00:07:21.967 ************************************ 00:07:21.967 23:51:17 unittest -- unit/unittest.sh@235 -- # uname -s 00:07:21.967 23:51:17 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:07:21.967 23:51:17 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:07:21.967 23:51:17 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.967 23:51:17 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.967 23:51:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:21.967 ************************************ 00:07:21.967 START TEST unittest_ftl 00:07:21.967 ************************************ 00:07:21.967 23:51:17 unittest.unittest_ftl -- common/autotest_common.sh@1125 -- # unittest_ftl 00:07:21.967 23:51:17 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:21.967 00:07:21.967 00:07:21.967 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.967 http://cunit.sourceforge.net/ 00:07:21.967 00:07:21.967 00:07:21.967 Suite: ftl_band_suite 00:07:21.967 Test: test_band_block_offset_from_addr_base ...passed 00:07:22.226 Test: test_band_block_offset_from_addr_offset ...passed 00:07:22.226 Test: test_band_addr_from_block_offset ...passed 00:07:22.226 Test: test_band_set_addr 
...passed 00:07:22.226 Test: test_invalidate_addr ...passed 00:07:22.226 Test: test_next_xfer_addr ...passed 00:07:22.226 00:07:22.226 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.226 suites 1 1 n/a 0 0 00:07:22.226 tests 6 6 6 0 0 00:07:22.226 asserts 30356 30356 30356 0 n/a 00:07:22.226 00:07:22.226 Elapsed time = 0.157 seconds 00:07:22.226 23:51:18 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:22.226 00:07:22.226 00:07:22.226 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.226 http://cunit.sourceforge.net/ 00:07:22.226 00:07:22.226 00:07:22.226 Suite: ftl_bitmap 00:07:22.226 Test: test_ftl_bitmap_create ...[2024-07-24 23:51:18.036087] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:22.226 passed 00:07:22.226 Test: test_ftl_bitmap_get ...[2024-07-24 23:51:18.036330] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:22.226 passed 00:07:22.226 Test: test_ftl_bitmap_set ...passed 00:07:22.226 Test: test_ftl_bitmap_clear ...passed 00:07:22.226 Test: test_ftl_bitmap_find_first_set ...passed 00:07:22.227 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:22.227 Test: test_ftl_bitmap_count_set ...passed 00:07:22.227 00:07:22.227 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.227 suites 1 1 n/a 0 0 00:07:22.227 tests 7 7 7 0 0 00:07:22.227 asserts 137 137 137 0 n/a 00:07:22.227 00:07:22.227 Elapsed time = 0.001 seconds 00:07:22.227 23:51:18 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:22.227 00:07:22.227 00:07:22.227 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.227 http://cunit.sourceforge.net/ 00:07:22.227 00:07:22.227 00:07:22.227 Suite: ftl_io_suite 00:07:22.227 Test: test_completion ...passed 00:07:22.227 Test: test_multiple_ios ...passed 00:07:22.227 00:07:22.227 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.227 suites 1 1 n/a 0 0 00:07:22.227 tests 2 2 2 0 0 00:07:22.227 asserts 47 47 47 0 n/a 00:07:22.227 00:07:22.227 Elapsed time = 0.003 seconds 00:07:22.227 23:51:18 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:22.486 00:07:22.486 00:07:22.486 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.486 http://cunit.sourceforge.net/ 00:07:22.486 00:07:22.486 00:07:22.486 Suite: ftl_mngt 00:07:22.486 Test: test_next_step ...passed 00:07:22.486 Test: test_continue_step ...passed 00:07:22.486 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:22.486 Test: test_fail_step ...passed 00:07:22.486 Test: test_mngt_call_and_call_rollback ...passed 00:07:22.486 Test: test_nested_process_failure ...passed 00:07:22.486 Test: test_call_init_success ...passed 00:07:22.486 Test: test_call_init_failure ...passed 00:07:22.486 00:07:22.486 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.486 suites 1 1 n/a 0 0 00:07:22.486 tests 8 8 8 0 0 00:07:22.486 asserts 196 196 196 0 n/a 00:07:22.486 00:07:22.486 Elapsed time = 0.002 seconds 00:07:22.486 23:51:18 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:22.486 00:07:22.486 00:07:22.486 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.486 
http://cunit.sourceforge.net/ 00:07:22.486 00:07:22.486 00:07:22.486 Suite: ftl_mempool 00:07:22.486 Test: test_ftl_mempool_create ...passed 00:07:22.486 Test: test_ftl_mempool_get_put ...passed 00:07:22.486 00:07:22.486 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.486 suites 1 1 n/a 0 0 00:07:22.486 tests 2 2 2 0 0 00:07:22.486 asserts 36 36 36 0 n/a 00:07:22.486 00:07:22.486 Elapsed time = 0.000 seconds 00:07:22.486 23:51:18 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:22.486 00:07:22.486 00:07:22.486 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.486 http://cunit.sourceforge.net/ 00:07:22.486 00:07:22.486 00:07:22.486 Suite: ftl_addr64_suite 00:07:22.486 Test: test_addr_cached ...passed 00:07:22.486 00:07:22.486 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.486 suites 1 1 n/a 0 0 00:07:22.486 tests 1 1 1 0 0 00:07:22.486 asserts 1536 1536 1536 0 n/a 00:07:22.486 00:07:22.486 Elapsed time = 0.000 seconds 00:07:22.486 23:51:18 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:22.486 00:07:22.486 00:07:22.486 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.486 http://cunit.sourceforge.net/ 00:07:22.486 00:07:22.486 00:07:22.486 Suite: ftl_sb 00:07:22.486 Test: test_sb_crc_v2 ...passed 00:07:22.486 Test: test_sb_crc_v3 ...passed 00:07:22.486 Test: test_sb_v3_md_layout ...[2024-07-24 23:51:18.180017] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:22.486 [2024-07-24 23:51:18.180239] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:22.486 [2024-07-24 23:51:18.180276] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:22.486 [2024-07-24 23:51:18.180299] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:22.486 [2024-07-24 23:51:18.180325] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:22.486 [2024-07-24 23:51:18.180347] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:22.486 [2024-07-24 23:51:18.180372] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:22.486 [2024-07-24 23:51:18.180389] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:22.487 [2024-07-24 23:51:18.180463] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:22.487 [2024-07-24 23:51:18.180486] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:22.487 [2024-07-24 23:51:18.180519] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions 
found 00:07:22.487 passed 00:07:22.487 Test: test_sb_v5_md_layout ...passed 00:07:22.487 00:07:22.487 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.487 suites 1 1 n/a 0 0 00:07:22.487 tests 4 4 4 0 0 00:07:22.487 asserts 160 160 160 0 n/a 00:07:22.487 00:07:22.487 Elapsed time = 0.002 seconds 00:07:22.487 23:51:18 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:22.487 00:07:22.487 00:07:22.487 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.487 http://cunit.sourceforge.net/ 00:07:22.487 00:07:22.487 00:07:22.487 Suite: ftl_layout_upgrade 00:07:22.487 Test: test_l2p_upgrade ...passed 00:07:22.487 00:07:22.487 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.487 suites 1 1 n/a 0 0 00:07:22.487 tests 1 1 1 0 0 00:07:22.487 asserts 152 152 152 0 n/a 00:07:22.487 00:07:22.487 Elapsed time = 0.001 seconds 00:07:22.487 23:51:18 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:07:22.487 00:07:22.487 00:07:22.487 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.487 http://cunit.sourceforge.net/ 00:07:22.487 00:07:22.487 00:07:22.487 Suite: ftl_p2l_suite 00:07:22.487 Test: test_p2l_num_pages ...passed 00:07:22.487 Test: test_ckpt_issue ...passed 00:07:22.487 Test: test_persist_band_p2l ...passed 00:07:22.487 Test: test_clean_restore_p2l ...passed 00:07:22.487 Test: test_dirty_restore_p2l ...passed 00:07:22.487 00:07:22.487 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.487 suites 1 1 n/a 0 0 00:07:22.487 tests 5 5 5 0 0 00:07:22.487 asserts 10020 10020 10020 0 n/a 00:07:22.487 00:07:22.487 Elapsed time = 0.087 seconds 00:07:22.487 00:07:22.487 real 0m0.566s 00:07:22.487 user 0m0.274s 00:07:22.487 sys 0m0.294s 00:07:22.487 23:51:18 unittest.unittest_ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.487 23:51:18 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:07:22.487 ************************************ 00:07:22.487 END TEST unittest_ftl 00:07:22.487 ************************************ 00:07:22.746 23:51:18 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:22.746 23:51:18 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.746 23:51:18 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.746 23:51:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 ************************************ 00:07:22.746 START TEST unittest_accel 00:07:22.746 ************************************ 00:07:22.746 23:51:18 unittest.unittest_accel -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:22.746 00:07:22.746 00:07:22.746 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.746 http://cunit.sourceforge.net/ 00:07:22.746 00:07:22.746 00:07:22.746 Suite: accel_sequence 00:07:22.746 Test: test_sequence_fill_copy ...passed 00:07:22.746 Test: test_sequence_abort ...passed 00:07:22.746 Test: test_sequence_append_error ...passed 00:07:22.746 Test: test_sequence_completion_error ...[2024-07-24 23:51:18.422236] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x70f0f71287c0 00:07:22.746 [2024-07-24 23:51:18.422482] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x70f0f71287c0 00:07:22.746 passed 00:07:22.746 Test: test_sequence_decompress ...[2024-07-24 23:51:18.422553] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x70f0f71287c0 00:07:22.747 [2024-07-24 23:51:18.422598] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x70f0f71287c0 00:07:22.747 passed 00:07:22.747 Test: test_sequence_reverse ...passed 00:07:22.747 Test: test_sequence_copy_elision ...passed 00:07:22.747 Test: test_sequence_accel_buffers ...passed 00:07:22.747 Test: test_sequence_memory_domain ...[2024-07-24 23:51:18.434958] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1761:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:22.747 [2024-07-24 23:51:18.435140] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1800:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:22.747 passed 00:07:22.747 Test: test_sequence_module_memory_domain ...passed 00:07:22.747 Test: test_sequence_crypto ...passed 00:07:22.747 Test: test_sequence_driver ...[2024-07-24 23:51:18.442368] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1908:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x70f0f449e7c0 using driver: ut 00:07:22.747 [2024-07-24 23:51:18.442472] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1972:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x70f0f449e7c0 through driver: ut 00:07:22.747 passed 00:07:22.747 Test: test_sequence_same_iovs ...passed 00:07:22.747 Test: test_sequence_crc32 ...passed 00:07:22.747 Suite: accel 00:07:22.747 Test: test_spdk_accel_task_complete ...passed 00:07:22.747 Test: test_get_task ...passed 00:07:22.747 Test: test_spdk_accel_submit_copy ...passed 00:07:22.747 Test: test_spdk_accel_submit_dualcast ...[2024-07-24 23:51:18.447673] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:22.747 [2024-07-24 23:51:18.447728] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:22.747 passed 00:07:22.747 Test: test_spdk_accel_submit_compare ...passed 00:07:22.747 Test: test_spdk_accel_submit_fill ...passed 00:07:22.747 Test: test_spdk_accel_submit_crc32c ...passed 00:07:22.747 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:22.747 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:22.747 Test: test_spdk_accel_submit_xor ...passed 00:07:22.747 Test: test_spdk_accel_module_find_by_name ...passed 00:07:22.747 Test: test_spdk_accel_module_register ...passed 00:07:22.747 00:07:22.747 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.747 suites 2 2 n/a 0 0 00:07:22.747 tests 26 26 26 0 0 00:07:22.747 asserts 830 830 830 0 n/a 00:07:22.747 00:07:22.747 Elapsed time = 0.038 seconds 00:07:22.747 00:07:22.747 real 0m0.080s 00:07:22.747 user 0m0.047s 00:07:22.747 sys 0m0.033s 00:07:22.747 23:51:18 unittest.unittest_accel -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.747 23:51:18 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.747 ************************************ 00:07:22.747 END TEST unittest_accel 00:07:22.747 
************************************ 00:07:22.747 23:51:18 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:22.747 23:51:18 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.747 23:51:18 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.747 23:51:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:22.747 ************************************ 00:07:22.747 START TEST unittest_ioat 00:07:22.747 ************************************ 00:07:22.747 23:51:18 unittest.unittest_ioat -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:22.747 00:07:22.747 00:07:22.747 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.747 http://cunit.sourceforge.net/ 00:07:22.747 00:07:22.747 00:07:22.747 Suite: ioat 00:07:22.747 Test: ioat_state_check ...passed 00:07:22.747 00:07:22.747 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.747 suites 1 1 n/a 0 0 00:07:22.747 tests 1 1 1 0 0 00:07:22.747 asserts 32 32 32 0 n/a 00:07:22.747 00:07:22.747 Elapsed time = 0.000 seconds 00:07:22.747 00:07:22.747 real 0m0.033s 00:07:22.747 user 0m0.015s 00:07:22.747 sys 0m0.018s 00:07:22.747 23:51:18 unittest.unittest_ioat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.747 23:51:18 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:07:22.747 ************************************ 00:07:22.747 END TEST unittest_ioat 00:07:22.747 ************************************ 00:07:22.747 23:51:18 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:22.747 23:51:18 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:22.747 23:51:18 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.747 23:51:18 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.747 23:51:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:22.747 ************************************ 00:07:22.747 START TEST unittest_idxd_user 00:07:22.747 ************************************ 00:07:22.747 23:51:18 unittest.unittest_idxd_user -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:23.007 00:07:23.007 00:07:23.007 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.007 http://cunit.sourceforge.net/ 00:07:23.007 00:07:23.007 00:07:23.007 Suite: idxd_user 00:07:23.007 Test: test_idxd_wait_cmd ...[2024-07-24 23:51:18.630839] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:23.007 passed 00:07:23.007 Test: test_idxd_reset_dev ...[2024-07-24 23:51:18.631051] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:23.007 [2024-07-24 23:51:18.631144] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:23.007 passed 00:07:23.007 Test: test_idxd_group_config ...passed 00:07:23.007 Test: test_idxd_wq_config ...passed[2024-07-24 23:51:18.631180] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:23.007 00:07:23.007 00:07:23.007 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.007 
suites 1 1 n/a 0 0 00:07:23.007 tests 4 4 4 0 0 00:07:23.007 asserts 20 20 20 0 n/a 00:07:23.007 00:07:23.007 Elapsed time = 0.001 seconds 00:07:23.007 00:07:23.007 real 0m0.031s 00:07:23.007 user 0m0.013s 00:07:23.007 sys 0m0.018s 00:07:23.007 23:51:18 unittest.unittest_idxd_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.007 23:51:18 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:07:23.007 ************************************ 00:07:23.007 END TEST unittest_idxd_user 00:07:23.007 ************************************ 00:07:23.007 23:51:18 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:07:23.007 23:51:18 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.007 23:51:18 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.007 23:51:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:23.007 ************************************ 00:07:23.007 START TEST unittest_iscsi 00:07:23.007 ************************************ 00:07:23.007 23:51:18 unittest.unittest_iscsi -- common/autotest_common.sh@1125 -- # unittest_iscsi 00:07:23.007 23:51:18 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:23.007 00:07:23.007 00:07:23.007 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.007 http://cunit.sourceforge.net/ 00:07:23.007 00:07:23.008 00:07:23.008 Suite: conn_suite 00:07:23.008 Test: read_task_split_in_order_case ...passed 00:07:23.008 Test: read_task_split_reverse_order_case ...passed 00:07:23.008 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:23.008 Test: process_non_read_task_completion_test ...passed 00:07:23.008 Test: free_tasks_on_connection ...passed 00:07:23.008 Test: free_tasks_with_queued_datain ...passed 00:07:23.008 Test: abort_queued_datain_task_test ...passed 00:07:23.008 Test: abort_queued_datain_tasks_test ...passed 00:07:23.008 00:07:23.008 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.008 suites 1 1 n/a 0 0 00:07:23.008 tests 8 8 8 0 0 00:07:23.008 asserts 230 230 230 0 n/a 00:07:23.008 00:07:23.008 Elapsed time = 0.000 seconds 00:07:23.008 23:51:18 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:23.008 00:07:23.008 00:07:23.008 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.008 http://cunit.sourceforge.net/ 00:07:23.008 00:07:23.008 00:07:23.008 Suite: iscsi_suite 00:07:23.008 Test: param_negotiation_test ...passed 00:07:23.008 Test: list_negotiation_test ...passed 00:07:23.008 Test: parse_valid_test ...passed 00:07:23.008 Test: parse_invalid_test ...[2024-07-24 23:51:18.746507] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:23.008 [2024-07-24 23:51:18.747034] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:23.008 [2024-07-24 23:51:18.747086] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:07:23.008 [2024-07-24 23:51:18.747154] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:23.008 [2024-07-24 23:51:18.747477] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:23.008 [2024-07-24 23:51:18.747546] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is 
bigger than 63 00:07:23.008 [2024-07-24 23:51:18.747707] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:23.008 passed 00:07:23.008 00:07:23.008 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.008 suites 1 1 n/a 0 0 00:07:23.008 tests 4 4 4 0 0 00:07:23.008 asserts 161 161 161 0 n/a 00:07:23.008 00:07:23.008 Elapsed time = 0.006 seconds 00:07:23.008 23:51:18 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:23.008 00:07:23.008 00:07:23.008 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.008 http://cunit.sourceforge.net/ 00:07:23.008 00:07:23.008 00:07:23.008 Suite: iscsi_target_node_suite 00:07:23.008 Test: add_lun_test_cases ...[2024-07-24 23:51:18.774407] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:23.008 [2024-07-24 23:51:18.774565] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:23.008 [2024-07-24 23:51:18.774632] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:23.008 passed 00:07:23.008 Test: allow_any_allowed ...passed 00:07:23.008 Test: allow_ipv6_allowed ...passed 00:07:23.008 Test: allow_ipv6_denied ...passed 00:07:23.008 Test: allow_ipv6_invalid ...passed 00:07:23.008 Test: allow_ipv4_allowed ...passed 00:07:23.008 Test: allow_ipv4_denied ...passed 00:07:23.008 Test: allow_ipv4_invalid ...passed 00:07:23.008 Test: node_access_allowed ...[2024-07-24 23:51:18.774686] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:23.008 [2024-07-24 23:51:18.774711] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:23.008 passed 00:07:23.008 Test: node_access_denied_by_empty_netmask ...passed 00:07:23.008 Test: node_access_multi_initiator_groups_cases ...passed 00:07:23.008 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:23.008 Test: chap_param_test_cases ...[2024-07-24 23:51:18.775138] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:23.008 [2024-07-24 23:51:18.775190] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:23.008 [2024-07-24 23:51:18.775213] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:23.008 passed 00:07:23.008 00:07:23.008 [2024-07-24 23:51:18.775252] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:23.008 [2024-07-24 23:51:18.775273] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:23.008 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.008 suites 1 1 n/a 0 0 00:07:23.008 tests 13 13 13 0 0 00:07:23.008 asserts 50 50 50 0 n/a 00:07:23.008 00:07:23.008 Elapsed time = 0.001 seconds 00:07:23.008 23:51:18 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:23.008 00:07:23.008 00:07:23.008 CUnit - A unit testing 
framework for C - Version 2.1-3 00:07:23.008 http://cunit.sourceforge.net/ 00:07:23.008 00:07:23.008 00:07:23.008 Suite: iscsi_suite 00:07:23.008 Test: op_login_check_target_test ...[2024-07-24 23:51:18.813736] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:07:23.008 passed 00:07:23.008 Test: op_login_session_normal_test ...[2024-07-24 23:51:18.814140] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:23.008 [2024-07-24 23:51:18.814198] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:23.008 [2024-07-24 23:51:18.814505] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:23.008 [2024-07-24 23:51:18.814566] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:23.008 [2024-07-24 23:51:18.814755] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:23.008 [2024-07-24 23:51:18.814923] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:23.008 [2024-07-24 23:51:18.814956] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:23.008 passed 00:07:23.008 Test: maxburstlength_test ...[2024-07-24 23:51:18.815631] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:23.008 [2024-07-24 23:51:18.815716] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:07:23.008 passed 00:07:23.008 Test: underflow_for_read_transfer_test ...passed 00:07:23.008 Test: underflow_for_zero_read_transfer_test ...passed 00:07:23.008 Test: underflow_for_request_sense_test ...passed 00:07:23.008 Test: underflow_for_check_condition_test ...passed 00:07:23.008 Test: add_transfer_task_test ...passed 00:07:23.008 Test: get_transfer_task_test ...passed 00:07:23.008 Test: del_transfer_task_test ...passed 00:07:23.008 Test: clear_all_transfer_tasks_test ...passed 00:07:23.008 Test: build_iovs_test ...passed 00:07:23.008 Test: build_iovs_with_md_test ...passed 00:07:23.008 Test: pdu_hdr_op_login_test ...[2024-07-24 23:51:18.818484] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:23.008 [2024-07-24 23:51:18.818834] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:23.008 [2024-07-24 23:51:18.818923] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:23.008 passed 00:07:23.008 Test: pdu_hdr_op_text_test ...[2024-07-24 23:51:18.819114] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:23.008 [2024-07-24 23:51:18.819415] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:23.008 [2024-07-24 23:51:18.819471] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:23.008 passed 00:07:23.008 Test: pdu_hdr_op_logout_test ...[2024-07-24 23:51:18.819609] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:07:23.008 passed 00:07:23.008 Test: pdu_hdr_op_scsi_test ...[2024-07-24 23:51:18.820015] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:23.008 [2024-07-24 23:51:18.820077] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:23.008 [2024-07-24 23:51:18.820329] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:23.008 [2024-07-24 23:51:18.820401] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:23.008 [2024-07-24 23:51:18.820652] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:23.008 [2024-07-24 23:51:18.821059] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:23.008 passed 00:07:23.008 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-24 23:51:18.821176] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:23.008 [2024-07-24 23:51:18.821515] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:23.008 passed 00:07:23.008 Test: pdu_hdr_op_nopout_test ...[2024-07-24 23:51:18.821995] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:23.008 [2024-07-24 23:51:18.822103] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:23.008 [2024-07-24 23:51:18.822227] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:23.008 [2024-07-24 23:51:18.822488] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:23.009 passed 00:07:23.009 Test: pdu_hdr_op_data_test ...[2024-07-24 23:51:18.822543] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:23.009 [2024-07-24 23:51:18.822725] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:23.009 [2024-07-24 23:51:18.823031] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:23.009 [2024-07-24 23:51:18.823076] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:23.009 [2024-07-24 23:51:18.823228] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:23.009 
[2024-07-24 23:51:18.823397] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:23.009 [2024-07-24 23:51:18.823437] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:23.009 passed 00:07:23.009 Test: empty_text_with_cbit_test ...passed 00:07:23.009 Test: pdu_payload_read_test ...[2024-07-24 23:51:18.826086] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:23.009 passed 00:07:23.009 Test: data_out_pdu_sequence_test ...passed 00:07:23.009 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:23.009 00:07:23.009 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.009 suites 1 1 n/a 0 0 00:07:23.009 tests 24 24 24 0 0 00:07:23.009 asserts 150253 150253 150253 0 n/a 00:07:23.009 00:07:23.009 Elapsed time = 0.023 seconds 00:07:23.009 23:51:18 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:23.009 00:07:23.009 00:07:23.009 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.009 http://cunit.sourceforge.net/ 00:07:23.009 00:07:23.009 00:07:23.009 Suite: init_grp_suite 00:07:23.009 Test: create_initiator_group_success_case ...passed 00:07:23.009 Test: find_initiator_group_success_case ...passed 00:07:23.009 Test: register_initiator_group_twice_case ...passed 00:07:23.009 Test: add_initiator_name_success_case ...passed 00:07:23.009 Test: add_initiator_name_fail_case ...[2024-07-24 23:51:18.863791] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:23.009 passed 00:07:23.009 Test: delete_all_initiator_names_success_case ...passed 00:07:23.009 Test: add_netmask_success_case ...passed 00:07:23.009 Test: add_netmask_fail_case ...passed 00:07:23.009 Test: delete_all_netmasks_success_case ...[2024-07-24 23:51:18.864168] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:23.009 passed 00:07:23.009 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:23.009 Test: netmask_overwrite_all_to_any_case ...passed 00:07:23.009 Test: add_delete_initiator_names_case ...passed 00:07:23.009 Test: add_duplicated_initiator_names_case ...passed 00:07:23.009 Test: delete_nonexisting_initiator_names_case ...passed 00:07:23.009 Test: add_delete_netmasks_case ...passed 00:07:23.009 Test: add_duplicated_netmasks_case ...passed 00:07:23.009 Test: delete_nonexisting_netmasks_case ...passed 00:07:23.009 00:07:23.009 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.009 suites 1 1 n/a 0 0 00:07:23.009 tests 17 17 17 0 0 00:07:23.009 asserts 108 108 108 0 n/a 00:07:23.009 00:07:23.009 Elapsed time = 0.001 seconds 00:07:23.268 23:51:18 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:23.268 00:07:23.268 00:07:23.268 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.268 http://cunit.sourceforge.net/ 00:07:23.268 00:07:23.268 00:07:23.268 Suite: portal_grp_suite 00:07:23.268 Test: portal_create_ipv4_normal_case ...passed 00:07:23.268 Test: portal_create_ipv6_normal_case ...passed 00:07:23.268 Test: portal_create_ipv4_wildcard_case ...passed 00:07:23.268 Test: portal_create_ipv6_wildcard_case ...passed 00:07:23.268 Test: 
portal_create_twice_case ...[2024-07-24 23:51:18.899853] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:23.268 passed 00:07:23.268 Test: portal_grp_register_unregister_case ...passed 00:07:23.268 Test: portal_grp_register_twice_case ...passed 00:07:23.268 Test: portal_grp_add_delete_case ...passed 00:07:23.268 Test: portal_grp_add_delete_twice_case ...passed 00:07:23.268 00:07:23.268 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.268 suites 1 1 n/a 0 0 00:07:23.268 tests 9 9 9 0 0 00:07:23.268 asserts 44 44 44 0 n/a 00:07:23.268 00:07:23.268 Elapsed time = 0.004 seconds 00:07:23.268 00:07:23.268 real 0m0.221s 00:07:23.268 user 0m0.112s 00:07:23.268 sys 0m0.112s 00:07:23.268 23:51:18 unittest.unittest_iscsi -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.268 23:51:18 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:07:23.269 ************************************ 00:07:23.269 END TEST unittest_iscsi 00:07:23.269 ************************************ 00:07:23.269 23:51:18 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:07:23.269 23:51:18 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.269 23:51:18 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.269 23:51:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:23.269 ************************************ 00:07:23.269 START TEST unittest_json 00:07:23.269 ************************************ 00:07:23.269 23:51:18 unittest.unittest_json -- common/autotest_common.sh@1125 -- # unittest_json 00:07:23.269 23:51:18 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:23.269 00:07:23.269 00:07:23.269 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.269 http://cunit.sourceforge.net/ 00:07:23.269 00:07:23.269 00:07:23.269 Suite: json 00:07:23.269 Test: test_parse_literal ...passed 00:07:23.269 Test: test_parse_string_simple ...passed 00:07:23.269 Test: test_parse_string_control_chars ...passed 00:07:23.269 Test: test_parse_string_utf8 ...passed 00:07:23.269 Test: test_parse_string_escapes_twochar ...passed 00:07:23.269 Test: test_parse_string_escapes_unicode ...passed 00:07:23.269 Test: test_parse_number ...passed 00:07:23.269 Test: test_parse_array ...passed 00:07:23.269 Test: test_parse_object ...passed 00:07:23.269 Test: test_parse_nesting ...passed 00:07:23.269 Test: test_parse_comment ...passed 00:07:23.269 00:07:23.269 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.269 suites 1 1 n/a 0 0 00:07:23.269 tests 11 11 11 0 0 00:07:23.269 asserts 1516 1516 1516 0 n/a 00:07:23.269 00:07:23.269 Elapsed time = 0.002 seconds 00:07:23.269 23:51:18 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:23.269 00:07:23.269 00:07:23.269 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.269 http://cunit.sourceforge.net/ 00:07:23.269 00:07:23.269 00:07:23.269 Suite: json 00:07:23.269 Test: test_strequal ...passed 00:07:23.269 Test: test_num_to_uint16 ...passed 00:07:23.269 Test: test_num_to_int32 ...passed 00:07:23.269 Test: test_num_to_uint64 ...passed 00:07:23.269 Test: test_decode_object ...passed 00:07:23.269 Test: test_decode_array ...passed 00:07:23.269 Test: test_decode_bool ...passed 00:07:23.269 Test: test_decode_uint16 ...passed 00:07:23.269 
Test: test_decode_int32 ...passed 00:07:23.269 Test: test_decode_uint32 ...passed 00:07:23.269 Test: test_decode_uint64 ...passed 00:07:23.269 Test: test_decode_string ...passed 00:07:23.269 Test: test_decode_uuid ...passed 00:07:23.269 Test: test_find ...passed 00:07:23.269 Test: test_find_array ...passed 00:07:23.269 Test: test_iterating ...passed 00:07:23.269 Test: test_free_object ...passed 00:07:23.269 00:07:23.269 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.269 suites 1 1 n/a 0 0 00:07:23.269 tests 17 17 17 0 0 00:07:23.269 asserts 236 236 236 0 n/a 00:07:23.269 00:07:23.269 Elapsed time = 0.001 seconds 00:07:23.269 23:51:19 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:23.269 00:07:23.269 00:07:23.269 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.269 http://cunit.sourceforge.net/ 00:07:23.269 00:07:23.269 00:07:23.269 Suite: json 00:07:23.269 Test: test_write_literal ...passed 00:07:23.269 Test: test_write_string_simple ...passed 00:07:23.269 Test: test_write_string_escapes ...passed 00:07:23.269 Test: test_write_string_utf16le ...passed 00:07:23.269 Test: test_write_number_int32 ...passed 00:07:23.269 Test: test_write_number_uint32 ...passed 00:07:23.269 Test: test_write_number_uint128 ...passed 00:07:23.269 Test: test_write_string_number_uint128 ...passed 00:07:23.269 Test: test_write_number_int64 ...passed 00:07:23.269 Test: test_write_number_uint64 ...passed 00:07:23.269 Test: test_write_number_double ...passed 00:07:23.269 Test: test_write_uuid ...passed 00:07:23.269 Test: test_write_array ...passed 00:07:23.269 Test: test_write_object ...passed 00:07:23.269 Test: test_write_nesting ...passed 00:07:23.269 Test: test_write_val ...passed 00:07:23.269 00:07:23.269 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.269 suites 1 1 n/a 0 0 00:07:23.269 tests 16 16 16 0 0 00:07:23.269 asserts 918 918 918 0 n/a 00:07:23.269 00:07:23.269 Elapsed time = 0.007 seconds 00:07:23.269 23:51:19 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:23.269 00:07:23.269 00:07:23.269 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.269 http://cunit.sourceforge.net/ 00:07:23.269 00:07:23.269 00:07:23.269 Suite: jsonrpc 00:07:23.269 Test: test_parse_request ...passed 00:07:23.269 Test: test_parse_request_streaming ...passed 00:07:23.269 00:07:23.269 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.269 suites 1 1 n/a 0 0 00:07:23.269 tests 2 2 2 0 0 00:07:23.269 asserts 289 289 289 0 n/a 00:07:23.269 00:07:23.269 Elapsed time = 0.004 seconds 00:07:23.269 00:07:23.269 real 0m0.128s 00:07:23.269 user 0m0.060s 00:07:23.269 sys 0m0.069s 00:07:23.269 23:51:19 unittest.unittest_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.269 ************************************ 00:07:23.269 END TEST unittest_json 00:07:23.269 ************************************ 00:07:23.269 23:51:19 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:07:23.269 23:51:19 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:07:23.269 23:51:19 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.269 23:51:19 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.269 23:51:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:23.529 ************************************ 00:07:23.529 START TEST 
unittest_rpc 00:07:23.529 ************************************ 00:07:23.529 23:51:19 unittest.unittest_rpc -- common/autotest_common.sh@1125 -- # unittest_rpc 00:07:23.529 23:51:19 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:23.529 00:07:23.529 00:07:23.529 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.529 http://cunit.sourceforge.net/ 00:07:23.529 00:07:23.529 00:07:23.529 Suite: rpc 00:07:23.529 Test: test_jsonrpc_handler ...passed 00:07:23.529 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:23.529 Test: test_rpc_get_methods ...[2024-07-24 23:51:19.162624] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:23.529 passed 00:07:23.529 Test: test_rpc_spdk_get_version ...passed 00:07:23.529 Test: test_spdk_rpc_listen_close ...passed 00:07:23.529 Test: test_rpc_run_multiple_servers ...passed 00:07:23.529 00:07:23.529 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.529 suites 1 1 n/a 0 0 00:07:23.529 tests 6 6 6 0 0 00:07:23.529 asserts 23 23 23 0 n/a 00:07:23.529 00:07:23.529 Elapsed time = 0.001 seconds 00:07:23.529 00:07:23.529 real 0m0.032s 00:07:23.529 user 0m0.019s 00:07:23.529 sys 0m0.014s 00:07:23.529 23:51:19 unittest.unittest_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.529 23:51:19 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.529 ************************************ 00:07:23.529 END TEST unittest_rpc 00:07:23.529 ************************************ 00:07:23.529 23:51:19 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:23.529 23:51:19 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.529 23:51:19 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.529 23:51:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:23.529 ************************************ 00:07:23.529 START TEST unittest_notify 00:07:23.529 ************************************ 00:07:23.529 23:51:19 unittest.unittest_notify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:23.529 00:07:23.529 00:07:23.529 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.529 http://cunit.sourceforge.net/ 00:07:23.529 00:07:23.529 00:07:23.529 Suite: app_suite 00:07:23.529 Test: notify ...passed 00:07:23.529 00:07:23.529 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.529 suites 1 1 n/a 0 0 00:07:23.529 tests 1 1 1 0 0 00:07:23.529 asserts 13 13 13 0 n/a 00:07:23.529 00:07:23.529 Elapsed time = 0.000 seconds 00:07:23.529 00:07:23.529 real 0m0.026s 00:07:23.529 user 0m0.008s 00:07:23.529 sys 0m0.019s 00:07:23.529 23:51:19 unittest.unittest_notify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.529 23:51:19 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:07:23.529 ************************************ 00:07:23.529 END TEST unittest_notify 00:07:23.529 ************************************ 00:07:23.529 23:51:19 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:07:23.529 23:51:19 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.529 23:51:19 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.529 23:51:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:23.529 
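Note on the json and rpc suites above: the rpc failure string comes from spdk_json_decode_object(), and the json_parse suite exercises spdk_json_parse(). For reference, a minimal sketch of the two-pass parse-then-decode pattern those suites cover; the struct, field name, and buffer contents below are illustrative placeholders, not taken from this run, and the sketch assumes SPDK's public spdk/json.h:

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "spdk/json.h"

struct hello {
    char *name;
};

/* Decoder table: maps the JSON key "name" onto struct hello.name. */
static const struct spdk_json_object_decoder hello_decoders[] = {
    {"name", offsetof(struct hello, name), spdk_json_decode_string},
};

int main(void)
{
    char buf[] = "{\"name\": \"spdk\"}";  /* decoded in place, so it must be writable */
    struct spdk_json_val vals[16];
    struct hello h = {0};

    /* Pass 1: values == NULL only reports how many spdk_json_val slots are needed. */
    ssize_t cnt = spdk_json_parse(buf, strlen(buf), NULL, 0, NULL, 0);
    if (cnt <= 0 || (size_t)cnt > sizeof(vals) / sizeof(vals[0])) {
        return 1;
    }
    /* Pass 2: fill vals[]; DECODE_IN_PLACE rewrites escape sequences inside buf. */
    cnt = spdk_json_parse(buf, strlen(buf), vals, (size_t)cnt, NULL,
                          SPDK_JSON_PARSE_FLAG_DECODE_IN_PLACE);
    if (cnt <= 0) {
        return 1;
    }
    /* The call whose failure the rpc suite logs as "spdk_json_decode_object failed". */
    if (spdk_json_decode_object(vals, hello_decoders, 1, &h) != 0) {
        return 1;
    }
    printf("name=%s\n", h.name);
    free(h.name);  /* spdk_json_decode_string allocates the string */
    return 0;
}

The counting pass is what lets callers size the value array exactly before committing to a full parse, which is why failures surface only on the second call or in the decode step.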
************************************ 00:07:23.529 START TEST unittest_nvme 00:07:23.529 ************************************ 00:07:23.529 23:51:19 unittest.unittest_nvme -- common/autotest_common.sh@1125 -- # unittest_nvme 00:07:23.529 23:51:19 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:23.529 00:07:23.529 00:07:23.529 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.529 http://cunit.sourceforge.net/ 00:07:23.529 00:07:23.529 00:07:23.529 Suite: nvme 00:07:23.529 Test: test_opc_data_transfer ...passed 00:07:23.529 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:23.529 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:23.529 Test: test_trid_parse_and_compare ...[2024-07-24 23:51:19.326477] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:23.529 [2024-07-24 23:51:19.326731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:23.529 [2024-07-24 23:51:19.326788] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1211:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:23.529 [2024-07-24 23:51:19.326858] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:23.529 [2024-07-24 23:51:19.326895] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:07:23.529 [2024-07-24 23:51:19.326921] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:23.529 passed 00:07:23.529 Test: test_trid_trtype_str ...passed 00:07:23.529 Test: test_trid_adrfam_str ...passed 00:07:23.529 Test: test_nvme_ctrlr_probe ...[2024-07-24 23:51:19.327224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:23.529 passed 00:07:23.529 Test: test_spdk_nvme_probe ...[2024-07-24 23:51:19.327309] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:23.529 [2024-07-24 23:51:19.327344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:23.529 passed 00:07:23.529 Test: test_spdk_nvme_connect ...[2024-07-24 23:51:19.327445] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:23.529 [2024-07-24 23:51:19.327489] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:23.529 [2024-07-24 23:51:19.327573] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:23.529 passed 00:07:23.529 Test: test_nvme_ctrlr_probe_internal ...[2024-07-24 23:51:19.328019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:23.529 passed 00:07:23.529 Test: test_nvme_init_controllers ...[2024-07-24 23:51:19.328217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:23.529 [2024-07-24 23:51:19.328258] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:23.529 passed 00:07:23.529 Test: test_nvme_driver_init ...[2024-07-24 23:51:19.328383] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:23.529 [2024-07-24 23:51:19.328471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:23.529 [2024-07-24 23:51:19.328515] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:23.789 [2024-07-24 23:51:19.443307] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:23.789 [2024-07-24 23:51:19.443513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:23.789 passed 00:07:23.789 Test: test_spdk_nvme_detach ...passed 00:07:23.789 Test: test_nvme_completion_poll_cb ...passed 00:07:23.789 Test: test_nvme_user_copy_cmd_complete ...passed 00:07:23.789 Test: test_nvme_allocate_request_null ...passed 00:07:23.789 Test: test_nvme_allocate_request ...passed 00:07:23.789 Test: test_nvme_free_request ...passed 00:07:23.789 Test: test_nvme_allocate_request_user_copy ...passed 00:07:23.789 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:23.789 Test: test_nvme_request_check_timeout ...passed 00:07:23.789 Test: test_nvme_wait_for_completion ...passed 00:07:23.789 Test: test_spdk_nvme_parse_func ...passed 00:07:23.789 Test: test_spdk_nvme_detach_async ...passed 00:07:23.789 Test: test_nvme_parse_addr ...[2024-07-24 23:51:19.444684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1635:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:23.789 passed 00:07:23.789 00:07:23.789 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.789 suites 1 1 n/a 0 0 00:07:23.789 tests 25 25 25 0 0 00:07:23.789 asserts 326 326 326 0 n/a 00:07:23.789 00:07:23.789 Elapsed time = 0.007 seconds 00:07:23.789 23:51:19 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:23.789 00:07:23.789 00:07:23.789 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.789 http://cunit.sourceforge.net/ 00:07:23.789 00:07:23.789 00:07:23.789 Suite: nvme_ctrlr 00:07:23.789 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-24 23:51:19.483573] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:23.789 passed 00:07:23.789 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-24 23:51:19.485461] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:23.789 passed 00:07:23.789 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-24 23:51:19.486861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:23.789 passed 00:07:23.789 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-24 23:51:19.488179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:23.789 passed 00:07:23.789 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-24 23:51:19.489518] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe
spec, use min value 00:07:23.789 [2024-07-24 23:51:19.490819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-07-24 23:51:19.491995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-07-24 23:51:19.493165] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:07:23.789 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-24 23:51:19.495638] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:23.789 [2024-07-24 23:51:19.498022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-07-24 23:51:19.499275] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:07:23.789 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-24 23:51:19.501923] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:23.789 [2024-07-24 23:51:19.503241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-07-24 23:51:19.505719] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:07:23.790 Test: test_nvme_ctrlr_init_delay ...[2024-07-24 23:51:19.508502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:23.790 passed 00:07:23.790 Test: test_alloc_io_qpair_rr_1 ...[2024-07-24 23:51:19.510004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:23.790 [2024-07-24 23:51:19.510321] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:23.790 [2024-07-24 23:51:19.510449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:23.790 [2024-07-24 23:51:19.510490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:23.790 [2024-07-24 23:51:19.510536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:23.790 passed 00:07:23.790 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:07:23.790 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:23.790 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-24 23:51:19.510749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:23.790 passed 00:07:23.790 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-24 23:51:19.511003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: []
admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:23.790 [2024-07-24 23:51:19.511134] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:23.790 passed 00:07:23.790 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-24 23:51:19.511479] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4997:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:23.790 [2024-07-24 23:51:19.511608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5034:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:23.790 [2024-07-24 23:51:19.511716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5074:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:07:23.790 [2024-07-24 23:51:19.511833] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5034:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:23.790 passed 00:07:23.790 Test: test_nvme_ctrlr_fail ...passed 00:07:23.790 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:23.790 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:23.790 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-24 23:51:19.511956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:07:23.790 [2024-07-24 23:51:19.512066] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:23.790 passed 00:07:23.790 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:23.790 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-24 23:51:19.513565] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:24.049 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:24.049 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:24.049 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-24 23:51:19.840285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-24 23:51:19.847443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-24 23:51:19.848761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 [2024-07-24 23:51:19.848865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3006:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:24.049 passed 00:07:24.049 Test: test_alloc_io_qpair_fail ...[2024-07-24 23:51:19.850055] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_add_remove_process 
...passed 00:07:24.049 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:24.049 Test: test_nvme_ctrlr_set_state ...[2024-07-24 23:51:19.850133] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-24 23:51:19.850349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1550:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 00:07:24.049 [2024-07-24 23:51:19.850391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-24 23:51:19.866596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-24 23:51:19.899691] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_reset ...[2024-07-24 23:51:19.902067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_aer_callback ...[2024-07-24 23:51:19.902751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-24 23:51:19.904477] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:24.049 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:24.049 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-24 23:51:19.906740] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:24.049 Test: test_nvme_ctrlr_ana_resize ...[2024-07-24 23:51:19.908450] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:24.049 Test: test_nvme_transport_ctrlr_ready ...passed 00:07:24.049 Test: test_nvme_ctrlr_disable ...[2024-07-24 23:51:19.910480] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4156:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:24.049 [2024-07-24 23:51:19.910584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4208:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:07:24.049 [2024-07-24 23:51:19.910660] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: 
*ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:24.049 passed 00:07:24.049 00:07:24.049 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.049 suites 1 1 n/a 0 0 00:07:24.049 tests 44 44 44 0 0 00:07:24.049 asserts 10434 10434 10434 0 n/a 00:07:24.049 00:07:24.049 Elapsed time = 0.385 seconds 00:07:24.309 23:51:19 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:24.309 00:07:24.309 00:07:24.309 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.309 http://cunit.sourceforge.net/ 00:07:24.309 00:07:24.309 00:07:24.309 Suite: nvme_ctrlr_cmd 00:07:24.309 Test: test_get_log_pages ...passed 00:07:24.309 Test: test_set_feature_cmd ...passed 00:07:24.309 Test: test_set_feature_ns_cmd ...passed 00:07:24.309 Test: test_get_feature_cmd ...passed 00:07:24.309 Test: test_get_feature_ns_cmd ...passed 00:07:24.309 Test: test_abort_cmd ...passed 00:07:24.309 Test: test_set_host_id_cmds ...passed 00:07:24.309 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:24.309 Test: test_io_raw_cmd ...passed 00:07:24.309 Test: test_io_raw_cmd_with_md ...passed 00:07:24.309 Test: test_namespace_attach ...passed 00:07:24.309 Test: test_namespace_detach ...passed 00:07:24.309 Test: test_namespace_create ...passed 00:07:24.309 Test: test_namespace_delete ...passed 00:07:24.309 Test: test_doorbell_buffer_config ...passed 00:07:24.309 Test: test_format_nvme ...passed 00:07:24.309 Test: test_fw_commit ...passed 00:07:24.309 Test: test_fw_image_download ...passed 00:07:24.309 Test: test_sanitize ...passed 00:07:24.309 Test: test_directive ...passed 00:07:24.309 Test: test_nvme_request_add_abort ...passed 00:07:24.309 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:24.309 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:24.309 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:24.309 00:07:24.309 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.309 suites 1 1 n/a 0 0 00:07:24.309 tests 24 24 24 0 0 00:07:24.309 asserts 198 198 198 0 n/a 00:07:24.309 00:07:24.309 Elapsed time = 0.001 seconds 00:07:24.309 [2024-07-24 23:51:19.964281] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:24.309 23:51:19 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:24.309 00:07:24.309 00:07:24.309 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.309 http://cunit.sourceforge.net/ 00:07:24.309 00:07:24.309 00:07:24.309 Suite: nvme_ctrlr_cmd 00:07:24.309 Test: test_geometry_cmd ...passed 00:07:24.309 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:24.309 00:07:24.309 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.309 suites 1 1 n/a 0 0 00:07:24.309 tests 2 2 2 0 0 00:07:24.309 asserts 7 7 7 0 n/a 00:07:24.309 00:07:24.309 Elapsed time = 0.000 seconds 00:07:24.309 23:51:19 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:24.309 00:07:24.309 00:07:24.309 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.309 http://cunit.sourceforge.net/ 00:07:24.309 00:07:24.309 00:07:24.309 Suite: nvme 00:07:24.309 Test: test_nvme_ns_construct ...passed 00:07:24.309 Test: test_nvme_ns_uuid ...passed 00:07:24.309 Test: test_nvme_ns_csi ...passed 00:07:24.309 Test: 
test_nvme_ns_data ...passed 00:07:24.309 Test: test_nvme_ns_set_identify_data ...passed 00:07:24.309 Test: test_spdk_nvme_ns_get_values ...passed 00:07:24.309 Test: test_spdk_nvme_ns_is_active ...passed 00:07:24.309 Test: spdk_nvme_ns_supports ...passed 00:07:24.309 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:24.309 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:24.309 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:24.309 Test: test_nvme_ns_find_id_desc ...passed 00:07:24.309 00:07:24.309 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.309 suites 1 1 n/a 0 0 00:07:24.309 tests 12 12 12 0 0 00:07:24.309 asserts 95 95 95 0 n/a 00:07:24.309 00:07:24.309 Elapsed time = 0.001 seconds 00:07:24.309 23:51:20 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:24.309 00:07:24.309 00:07:24.309 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.309 http://cunit.sourceforge.net/ 00:07:24.309 00:07:24.309 00:07:24.309 Suite: nvme_ns_cmd 00:07:24.309 Test: split_test ...passed 00:07:24.309 Test: split_test2 ...passed 00:07:24.309 Test: split_test3 ...passed 00:07:24.309 Test: split_test4 ...passed 00:07:24.309 Test: test_nvme_ns_cmd_flush ...passed 00:07:24.309 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:24.309 Test: test_nvme_ns_cmd_copy ...passed 00:07:24.309 Test: test_io_flags ...passed 00:07:24.309 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:24.309 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:24.309 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:24.309 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:24.309 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:24.309 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:24.309 Test: test_cmd_child_request ...passed 00:07:24.309 Test: test_nvme_ns_cmd_readv ...passed 00:07:24.309 Test: test_nvme_ns_cmd_read_with_md ...[2024-07-24 23:51:20.054603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:24.309 passed 00:07:24.309 Test: test_nvme_ns_cmd_writev ...[2024-07-24 23:51:20.055817] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:24.309 passed 00:07:24.309 Test: test_nvme_ns_cmd_write_with_md ...passed 00:07:24.309 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:24.309 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:24.309 Test: test_nvme_ns_cmd_comparev ...passed 00:07:24.309 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:24.309 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:24.309 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:24.309 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:24.309 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:24.309 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:07:24.309 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:07:24.309 Test: test_nvme_ns_cmd_verify ...passed 00:07:24.309 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:07:24.309 Test: test_nvme_ns_cmd_io_mgmt_recv ...[2024-07-24 23:51:20.057584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:24.309 [2024-07-24 23:51:20.057696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 
0xffff000f 00:07:24.309 passed 00:07:24.309 00:07:24.309 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.309 suites 1 1 n/a 0 0 00:07:24.309 tests 32 32 32 0 0 00:07:24.309 asserts 550 550 550 0 n/a 00:07:24.309 00:07:24.309 Elapsed time = 0.004 seconds 00:07:24.309 23:51:20 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:24.309 00:07:24.309 00:07:24.309 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.309 http://cunit.sourceforge.net/ 00:07:24.309 00:07:24.309 00:07:24.309 Suite: nvme_ns_cmd 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:24.309 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:24.309 00:07:24.309 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.309 suites 1 1 n/a 0 0 00:07:24.309 tests 12 12 12 0 0 00:07:24.309 asserts 123 123 123 0 n/a 00:07:24.309 00:07:24.309 Elapsed time = 0.002 seconds 00:07:24.309 23:51:20 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:24.309 00:07:24.309 00:07:24.309 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.309 http://cunit.sourceforge.net/ 00:07:24.309 00:07:24.309 00:07:24.309 Suite: nvme_qpair 00:07:24.309 Test: test3 ...passed 00:07:24.310 Test: test_ctrlr_failed ...passed 00:07:24.310 Test: struct_packing ...passed 00:07:24.310 Test: test_nvme_qpair_process_completions ...[2024-07-24 23:51:20.122405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:24.310 [2024-07-24 23:51:20.122788] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:24.310 [2024-07-24 23:51:20.122927] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:24.310 [2024-07-24 23:51:20.122990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:24.310 passed 00:07:24.310 Test: test_nvme_completion_is_retry ...passed 00:07:24.310 Test: test_get_status_string ...passed 00:07:24.310 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:07:24.310 Test: test_nvme_qpair_submit_request ...passed 00:07:24.310 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:24.310 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:24.310 Test: test_nvme_qpair_init_deinit ...[2024-07-24 23:51:20.123881] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:24.310 passed 00:07:24.310 Test: test_nvme_get_sgl_print_info ...passed 00:07:24.310 00:07:24.310 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.310 suites 1 1 n/a 0 0 00:07:24.310 tests 12 12 12 0 0 00:07:24.310 asserts 154 154 154 0 n/a 00:07:24.310 00:07:24.310 Elapsed time = 0.002 seconds 00:07:24.310 23:51:20 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:24.310 00:07:24.310 00:07:24.310 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.310 http://cunit.sourceforge.net/ 00:07:24.310 00:07:24.310 00:07:24.310 Suite: nvme_pcie 00:07:24.310 Test: test_prp_list_append ...[2024-07-24 23:51:20.153721] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:24.310 [2024-07-24 23:51:20.153983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:24.310 [2024-07-24 23:51:20.154036] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1225:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:24.310 passed 00:07:24.310 Test: test_nvme_pcie_hotplug_monitor ...passed 00:07:24.310 Test: test_shadow_doorbell_update ...passed 00:07:24.310 Test: test_build_contig_hw_sgl_request ...passed 00:07:24.310 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:24.310 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:24.310 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:24.310 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:07:24.310 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:24.310 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:07:24.310 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:07:24.310 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:07:24.310 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:07:24.310 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:07:24.310 00:07:24.310 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.310 suites 1 1 n/a 0 0 00:07:24.310 tests 14 14 14 0 0 00:07:24.310 asserts 235 235 235 0 n/a 00:07:24.310 00:07:24.310 Elapsed time = 0.001 seconds 00:07:24.310 [2024-07-24 23:51:20.154245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:24.310 [2024-07-24 23:51:20.154348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:24.310 [2024-07-24 23:51:20.154638] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:24.310 [2024-07-24 23:51:20.154782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:07:24.310 [2024-07-24 23:51:20.154891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:24.310 [2024-07-24 23:51:20.155023] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:24.310 [2024-07-24 23:51:20.155086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:24.310 23:51:20 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:24.570 00:07:24.570 00:07:24.570 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.570 http://cunit.sourceforge.net/ 00:07:24.570 00:07:24.570 00:07:24.570 Suite: nvme_ns_cmd 00:07:24.570 Test: nvme_poll_group_create_test ...passed 00:07:24.570 Test: nvme_poll_group_add_remove_test ...passed 00:07:24.570 Test: nvme_poll_group_process_completions ...passed 00:07:24.570 Test: nvme_poll_group_destroy_test ...passed 00:07:24.570 Test: nvme_poll_group_get_free_stats ...passed 00:07:24.570 00:07:24.570 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.570 suites 1 1 n/a 0 0 00:07:24.570 tests 5 5 5 0 0 00:07:24.570 asserts 75 75 75 0 n/a 00:07:24.570 00:07:24.570 Elapsed time = 0.000 seconds 00:07:24.570 23:51:20 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:24.570 00:07:24.570 00:07:24.570 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.570 http://cunit.sourceforge.net/ 00:07:24.570 00:07:24.570 00:07:24.570 Suite: nvme_quirks 00:07:24.570 Test: test_nvme_quirks_striping ...passed 00:07:24.570 00:07:24.570 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.570 suites 1 1 n/a 0 0 00:07:24.570 tests 1 1 1 0 0 00:07:24.570 asserts 5 5 5 0 n/a 00:07:24.570 00:07:24.570 Elapsed time = 0.000 seconds 00:07:24.570 23:51:20 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:24.570 00:07:24.570 00:07:24.570 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.570 http://cunit.sourceforge.net/ 00:07:24.570 00:07:24.570 00:07:24.570 Suite: nvme_tcp 00:07:24.570 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:24.570 Test: test_nvme_tcp_build_iovs ...passed 00:07:24.570 Test: test_nvme_tcp_build_sgl_request ...passed 00:07:24.570 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:24.570 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:24.570 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:24.570 Test: test_nvme_tcp_req_get ...passed 00:07:24.570 Test: test_nvme_tcp_req_init ...passed 00:07:24.570 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:24.570 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:24.570 Test: test_nvme_tcp_qpair_set_recv_state ...[2024-07-24 23:51:20.246398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7a2c74e0d2e0, and the iovcnt=16, remaining_size=28672 00:07:24.570 passed 00:07:24.570 Test: test_nvme_tcp_alloc_reqs ...[2024-07-24 23:51:20.246988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74a09030 is same with the state(6) to be set 00:07:24.570 passed 00:07:24.570 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 
00:07:24.570 Test: test_nvme_tcp_pdu_ch_handle ...passed 00:07:24.570 Test: test_nvme_tcp_qpair_connect_sock ...passed 00:07:24.570 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:07:24.570 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:07:24.570 Test: test_nvme_tcp_icresp_handle ...passed 00:07:24.570 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:07:24.570 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:07:24.570 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:07:24.570 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed 00:07:24.570 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-24 23:51:20.247621] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74d09070 is same with the state(5) to be set 00:07:24.570 [2024-07-24 23:51:20.247703] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7a2c74c0a740 00:07:24.570 [2024-07-24 23:51:20.247751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1249:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:24.570 [2024-07-24 23:51:20.247792] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74c0a070 is same with the state(5) to be set 00:07:24.570 [2024-07-24 23:51:20.247844] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:24.570 [2024-07-24 23:51:20.247876] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74c0a070 is same with the state(5) to be set 00:07:24.570 [2024-07-24 23:51:20.247911] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:24.570 [2024-07-24 23:51:20.247953] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74c0a070 is same with the state(5) to be set 00:07:24.571 [2024-07-24 23:51:20.247989] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74c0a070 is same with the state(5) to be set 00:07:24.571 [2024-07-24 23:51:20.248024] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74c0a070 is same with the state(5) to be set 00:07:24.571 [2024-07-24 23:51:20.248060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74c0a070 is same with the state(5) to be set 00:07:24.571 [2024-07-24 23:51:20.248102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74c0a070 is same with the state(5) to be set 00:07:24.571 [2024-07-24 23:51:20.248132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74c0a070 is same with the state(5) to be set 00:07:24.571 [2024-07-24 23:51:20.248343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:24.571 [2024-07-24 23:51:20.248385] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:24.571 
[2024-07-24 23:51:20.248713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:24.571 [2024-07-24 23:51:20.248868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7a2c74c0b5c0): PDU Sequence Error 00:07:24.571 [2024-07-24 23:51:20.248948] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:24.571 [2024-07-24 23:51:20.248997] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:24.571 [2024-07-24 23:51:20.249038] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74d0b070 is same with the state(5) to be set 00:07:24.571 [2024-07-24 23:51:20.249067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:24.571 [2024-07-24 23:51:20.249097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74d0b070 is same with the state(5) to be set 00:07:24.571 [2024-07-24 23:51:20.249132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74d0b070 is same with the state(0) to be set 00:07:24.571 [2024-07-24 23:51:20.249190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7a2c74c0c5c0): PDU Sequence Error 00:07:24.571 [2024-07-24 23:51:20.249288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7a2c74d0d200 00:07:24.571 [2024-07-24 23:51:20.249438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7a2c74e294a0, errno=0, rc=0 00:07:24.571 [2024-07-24 23:51:20.249480] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74e294a0 is same with the state(5) to be set 00:07:24.571 [2024-07-24 23:51:20.249519] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2c74e294a0 is same with the state(5) to be set 00:07:24.571 [2024-07-24 23:51:20.249568] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a2c74e294a0 (0): Success 00:07:24.571 [2024-07-24 23:51:20.249611] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a2c74e294a0 (0): Success 00:07:24.571 passed 00:07:24.571 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:07:24.571 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-24 23:51:20.364446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:24.571 [2024-07-24 23:51:20.364561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:07:24.571 passed 00:07:24.571 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-24 23:51:20.365141] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:24.571 [2024-07-24 23:51:20.365212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:24.571 [2024-07-24 23:51:20.365601] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:24.571 [2024-07-24 23:51:20.365663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:24.571 [2024-07-24 23:51:20.365836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:24.571 [2024-07-24 23:51:20.365903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:24.571 [2024-07-24 23:51:20.366053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x515000001980 with addr=192.168.1.78, port=23 00:07:24.571 [2024-07-24 23:51:20.366107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:24.571 passed 00:07:24.571 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-24 23:51:20.366345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x514000000c40, and the iovcnt=1, remaining_size=1024 00:07:24.571 passed 00:07:24.571 00:07:24.571 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.571 suites 1 1 n/a 0 0 00:07:24.571 tests 27 27 27 0 0 00:07:24.571 asserts 624 624 624 0 n/a 00:07:24.571 00:07:24.571 Elapsed time = 0.120 seconds 00:07:24.571 [2024-07-24 23:51:20.366395] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:24.571 23:51:20 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:24.571 00:07:24.571 00:07:24.571 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.571 http://cunit.sourceforge.net/ 00:07:24.571 00:07:24.571 00:07:24.571 Suite: nvme_transport 00:07:24.571 Test: test_nvme_get_transport ...passed 00:07:24.571 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:24.571 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:24.571 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:24.571 Test: test_ctrlr_get_memory_domains ...passed 00:07:24.571 00:07:24.571 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.571 suites 1 1 n/a 0 0 00:07:24.571 tests 5 5 5 0 0 00:07:24.571 asserts 28 28 28 0 n/a 00:07:24.571 00:07:24.571 Elapsed time = 0.000 seconds 00:07:24.571 23:51:20 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:24.830 00:07:24.830 00:07:24.830 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.830 http://cunit.sourceforge.net/ 00:07:24.830 00:07:24.830 00:07:24.830 Suite: nvme_io_msg 00:07:24.830 Test: test_nvme_io_msg_send ...passed 00:07:24.830 Test: test_nvme_io_msg_process ...passed 00:07:24.830 Test: 
test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:24.830 00:07:24.830 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.830 suites 1 1 n/a 0 0 00:07:24.830 tests 3 3 3 0 0 00:07:24.830 asserts 56 56 56 0 n/a 00:07:24.830 00:07:24.830 Elapsed time = 0.000 seconds 00:07:24.830 23:51:20 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:24.830 00:07:24.830 00:07:24.830 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.830 http://cunit.sourceforge.net/ 00:07:24.830 00:07:24.830 00:07:24.830 Suite: nvme_pcie_common 00:07:24.830 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:07:24.830 Test: test_nvme_pcie_qpair_construct_destroy ...[2024-07-24 23:51:20.478489] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:24.830 passed 00:07:24.830 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:07:24.830 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:07:24.830 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:07:24.830 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-24 23:51:20.479366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 505:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:24.830 [2024-07-24 23:51:20.479434] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 458:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:07:24.830 [2024-07-24 23:51:20.479479] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 552:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:24.830 [2024-07-24 23:51:20.479942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:24.830 passed 00:07:24.830 00:07:24.830 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.830 suites 1 1 n/a 0 0 00:07:24.830 tests 6 6 6 0 0 00:07:24.830 asserts 148 148 148 0 n/a 00:07:24.830 00:07:24.830 Elapsed time = 0.002 seconds 00:07:24.830 [2024-07-24 23:51:20.479975] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:24.830 23:51:20 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:24.830 00:07:24.830 00:07:24.830 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.830 http://cunit.sourceforge.net/ 00:07:24.830 00:07:24.830 00:07:24.830 Suite: nvme_fabric 00:07:24.830 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:24.830 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:24.830 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:24.830 Test: test_nvme_fabric_discover_probe ...passed 00:07:24.830 Test: test_nvme_fabric_qpair_connect ...[2024-07-24 23:51:20.513154] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:24.830 passed 00:07:24.830 00:07:24.830 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.830 suites 1 1 n/a 0 0 00:07:24.830 tests 5 5 5 0 0 00:07:24.830 asserts 60 60 60 0 n/a 00:07:24.830 00:07:24.830 Elapsed time = 0.001 seconds 00:07:24.830 23:51:20 unittest.unittest_nvme -- 
unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:24.830 00:07:24.830 00:07:24.830 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.830 http://cunit.sourceforge.net/ 00:07:24.830 00:07:24.830 00:07:24.830 Suite: nvme_opal 00:07:24.830 Test: test_opal_nvme_security_recv_send_done ...passed 00:07:24.830 Test: test_opal_add_short_atom_header ...passed 00:07:24.830 00:07:24.830 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.830 suites 1 1 n/a 0 0 00:07:24.830 tests 2 2 2 0 0 00:07:24.830 asserts 22 22 22 0 n/a 00:07:24.830 00:07:24.830 Elapsed time = 0.000 seconds 00:07:24.830 [2024-07-24 23:51:20.548746] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:07:24.830 ************************************ 00:07:24.830 END TEST unittest_nvme 00:07:24.830 ************************************ 00:07:24.830 00:07:24.830 real 0m1.253s 00:07:24.830 user 0m0.640s 00:07:24.830 sys 0m0.461s 00:07:24.830 23:51:20 unittest.unittest_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.830 23:51:20 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:24.830 23:51:20 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:24.830 23:51:20 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.830 23:51:20 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.830 23:51:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:24.830 ************************************ 00:07:24.830 START TEST unittest_log 00:07:24.830 ************************************ 00:07:24.830 23:51:20 unittest.unittest_log -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:24.830 00:07:24.830 00:07:24.830 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.830 http://cunit.sourceforge.net/ 00:07:24.830 00:07:24.830 00:07:24.830 Suite: log 00:07:24.830 Test: log_test ...passed 00:07:24.830 Test: deprecation ...[2024-07-24 23:51:20.628338] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:07:24.830 [2024-07-24 23:51:20.628532] log_ut.c: 57:log_test: *DEBUG*: log test 00:07:24.830 log dump test: 00:07:24.830 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:24.830 spdk dump test: 00:07:24.830 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:24.830 spdk dump test: 00:07:24.830 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:24.830 00000010 65 20 63 68 61 72 73 e chars 00:07:25.765 passed 00:07:25.765 00:07:25.765 Run Summary: Type Total Ran Passed Failed Inactive 00:07:25.765 suites 1 1 n/a 0 0 00:07:25.765 tests 2 2 2 0 0 00:07:25.765 asserts 73 73 73 0 n/a 00:07:25.765 00:07:25.765 Elapsed time = 0.001 seconds 00:07:26.025 00:07:26.025 real 0m1.032s 00:07:26.025 user 0m0.016s 00:07:26.025 sys 0m0.017s 00:07:26.025 23:51:21 unittest.unittest_log -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.025 23:51:21 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 ************************************ 00:07:26.025 END TEST unittest_log 00:07:26.025 ************************************ 00:07:26.025 23:51:21 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:26.025 23:51:21 unittest -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:07:26.025 23:51:21 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.025 23:51:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 ************************************ 00:07:26.025 START TEST unittest_lvol 00:07:26.025 ************************************ 00:07:26.025 23:51:21 unittest.unittest_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:26.025 00:07:26.025 00:07:26.025 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.025 http://cunit.sourceforge.net/ 00:07:26.025 00:07:26.025 00:07:26.025 Suite: lvol 00:07:26.025 Test: lvs_init_unload_success ...[2024-07-24 23:51:21.722767] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:07:26.025 passed 00:07:26.025 Test: lvs_init_destroy_success ...[2024-07-24 23:51:21.723258] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:07:26.025 passed 00:07:26.025 Test: lvs_init_opts_success ...passed 00:07:26.025 Test: lvs_unload_lvs_is_null_fail ...passed 00:07:26.025 Test: lvs_names ...[2024-07-24 23:51:21.723500] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:07:26.025 [2024-07-24 23:51:21.723557] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:07:26.025 [2024-07-24 23:51:21.723601] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:07:26.025 [2024-07-24 23:51:21.723742] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:07:26.025 passed 00:07:26.025 Test: lvol_create_destroy_success ...passed 00:07:26.025 Test: lvol_create_fail ...[2024-07-24 23:51:21.724286] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:07:26.025 [2024-07-24 23:51:21.724366] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:07:26.025 passed 00:07:26.025 Test: lvol_destroy_fail ...[2024-07-24 23:51:21.724626] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:07:26.025 passed 00:07:26.025 Test: lvol_close ...[2024-07-24 23:51:21.724821] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:07:26.025 [2024-07-24 23:51:21.724865] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:07:26.025 passed 00:07:26.025 Test: lvol_resize ...passed 00:07:26.025 Test: lvol_set_read_only ...passed 00:07:26.026 Test: test_lvs_load ...[2024-07-24 23:51:21.725626] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:07:26.026 passed 00:07:26.026 Test: lvols_load ...[2024-07-24 23:51:21.725680] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:07:26.026 [2024-07-24 23:51:21.725853] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:26.026 passed 00:07:26.026 Test: lvol_open ...[2024-07-24 23:51:21.725969] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:26.026 passed 00:07:26.026 Test: lvol_snapshot ...passed 
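The lvs_init_unload_success case earlier in this suite drives spdk_lvs_unload() while lvols are still open and expects the "Lvols still open on lvol store" failure. A minimal sketch of that call pattern, assuming an lvs handle from a prior load and the callback typedef in spdk/lvol.h; depending on the failure, the error may surface either as the immediate return code or through the completion callback:

    #include <spdk/log.h>
    #include <spdk/lvol.h>

    /* Completion callback: lvserrno carries any asynchronous failure. */
    static void
    unload_done(void *cb_arg, int lvserrno)
    {
            if (lvserrno != 0) {
                    SPDK_ERRLOG("lvs unload failed: %d\n", lvserrno);
            }
    }

    static void
    try_unload(struct spdk_lvol_store *lvs)
    {
            int rc = spdk_lvs_unload(lvs, unload_done, NULL);

            if (rc != 0) {
                    /* e.g. busy while lvols are still open, per the log above */
                    SPDK_ERRLOG("lvs unload not started: %d\n", rc);
            }
    }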
00:07:26.026 Test: lvol_snapshot_fail ...[2024-07-24 23:51:21.726608] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:07:26.026 passed 00:07:26.026 Test: lvol_clone ...passed 00:07:26.026 Test: lvol_clone_fail ...[2024-07-24 23:51:21.727108] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:07:26.026 passed 00:07:26.026 Test: lvol_iter_clones ...passed 00:07:26.026 Test: lvol_refcnt ...[2024-07-24 23:51:21.727453] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 57833928-5561-4e2b-a115-9ea1a67db2e1 because it is still open 00:07:26.026 passed 00:07:26.026 Test: lvol_names ...[2024-07-24 23:51:21.727589] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:26.026 [2024-07-24 23:51:21.727639] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:26.026 [2024-07-24 23:51:21.727810] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:07:26.026 passed 00:07:26.026 Test: lvol_create_thin_provisioned ...passed 00:07:26.026 Test: lvol_rename ...[2024-07-24 23:51:21.728220] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:26.026 passed 00:07:26.026 Test: lvs_rename ...[2024-07-24 23:51:21.728307] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:07:26.026 passed 00:07:26.026 Test: lvol_inflate ...passed 00:07:26.026 Test: lvol_decouple_parent ...passed 00:07:26.026 Test: lvol_get_xattr ...[2024-07-24 23:51:21.728477] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:07:26.026 [2024-07-24 23:51:21.728638] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:26.026 [2024-07-24 23:51:21.728840] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:26.026 passed 00:07:26.026 Test: lvol_esnap_reload ...passed 00:07:26.026 Test: lvol_esnap_create_bad_args ...[2024-07-24 23:51:21.729174] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:07:26.026 [2024-07-24 23:51:21.729202] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:07:26.026 [2024-07-24 23:51:21.729225] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:07:26.026 [2024-07-24 23:51:21.729254] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:26.026 passed 00:07:26.026 Test: lvol_esnap_create_delete ...[2024-07-24 23:51:21.729386] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:07:26.026 passed 00:07:26.026 Test: lvol_esnap_load_esnaps ...passed 00:07:26.026 Test: lvol_esnap_missing ...[2024-07-24 23:51:21.729648] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:07:26.026 [2024-07-24 23:51:21.729765] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:26.026 [2024-07-24 23:51:21.729839] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:26.026 passed 00:07:26.026 Test: lvol_esnap_hotplug ... 00:07:26.026 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:07:26.026 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:07:26.026 [2024-07-24 23:51:21.730376] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol e400d4b6-0476-4457-a29b-80cfb0364ce0: failed to create esnap bs_dev: error -12 00:07:26.026 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:07:26.026 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:07:26.026 [2024-07-24 23:51:21.730559] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol f56a30d4-6cfb-4395-82a8-ea8009ad2090: failed to create esnap bs_dev: error -12 00:07:26.026 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:07:26.026 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:07:26.026 [2024-07-24 23:51:21.730664] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol a51df8a3-a9c9-44c4-b356-683cc0f0f5fe: failed to create esnap bs_dev: error -12 00:07:26.026 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:07:26.026 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:07:26.026 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:07:26.026 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:07:26.026 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:07:26.026 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:07:26.026 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:07:26.026 passed 00:07:26.026 Test: lvol_get_by ...passed 00:07:26.026 Test: lvol_shallow_copy ...[2024-07-24 23:51:21.731730] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:07:26.026 passed 00:07:26.026 Test: lvol_set_parent ...[2024-07-24 23:51:21.731780] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 
0c48225f-0f83-42f0-a303-d25134b18de6 shallow copy, ext_dev must not be NULL 00:07:26.026 [2024-07-24 23:51:21.731992] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:07:26.026 [2024-07-24 23:51:21.732045] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:07:26.026 passed 00:07:26.026 Test: lvol_set_external_parent ...[2024-07-24 23:51:21.732197] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:07:26.026 passed 00:07:26.026 00:07:26.026 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.026 suites 1 1 n/a 0 0 00:07:26.026 tests 37 37 37 0 0 00:07:26.026 asserts 1505 1505 1505 0 n/a 00:07:26.026 00:07:26.026 Elapsed time = 0.010 seconds 00:07:26.026 [2024-07-24 23:51:21.732235] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:07:26.026 [2024-07-24 23:51:21.732254] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:07:26.026 00:07:26.026 real 0m0.051s 00:07:26.026 user 0m0.028s 00:07:26.026 sys 0m0.023s 00:07:26.026 23:51:21 unittest.unittest_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.026 23:51:21 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:26.026 ************************************ 00:07:26.026 END TEST unittest_lvol 00:07:26.026 ************************************ 00:07:26.026 23:51:21 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:26.026 23:51:21 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:26.026 23:51:21 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.026 23:51:21 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.026 23:51:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:26.026 ************************************ 00:07:26.026 START TEST unittest_nvme_rdma 00:07:26.026 ************************************ 00:07:26.026 23:51:21 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:26.026 00:07:26.026 00:07:26.026 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.026 http://cunit.sourceforge.net/ 00:07:26.026 00:07:26.026 00:07:26.026 Suite: nvme_rdma 00:07:26.026 Test: test_nvme_rdma_build_sgl_request ...[2024-07-24 23:51:21.819371] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:07:26.026 passed 00:07:26.026 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:07:26.026 Test: test_nvme_rdma_build_contig_request ...[2024-07-24 23:51:21.819596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1552:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:26.026 [2024-07-24 23:51:21.819643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1608:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:07:26.026 passed 00:07:26.026 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:07:26.026 Test: test_nvme_rdma_create_reqs ...[2024-07-24 23:51:21.819741] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1489:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:26.026 [2024-07-24 23:51:21.819880] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:07:26.026 passed 00:07:26.026 Test: test_nvme_rdma_create_rsps ...passed 00:07:26.026 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-24 23:51:21.820202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:07:26.026 passed 00:07:26.026 Test: test_nvme_rdma_poller_create ...[2024-07-24 23:51:21.820366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:26.026 [2024-07-24 23:51:21.820398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:26.026 passed 00:07:26.026 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:07:26.027 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-24 23:51:21.820595] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:07:26.027 passed 00:07:26.027 Test: test_nvme_rdma_req_put_and_get ...passed 00:07:26.027 Test: test_nvme_rdma_req_init ...passed 00:07:26.027 Test: test_nvme_rdma_validate_cm_event ...[2024-07-24 23:51:21.820925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:07:26.027 [2024-07-24 23:51:21.821003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:07:26.027 passed 00:07:26.027 Test: test_nvme_rdma_qpair_init ...passed 00:07:26.027 Test: test_nvme_rdma_qpair_submit_request ...passed 00:07:26.027 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:07:26.027 Test: test_rdma_get_memory_translation ...[2024-07-24 23:51:21.821136] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:07:26.027 [2024-07-24 23:51:21.821194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:07:26.027 passed 00:07:26.027 Test: test_get_rdma_qpair_from_wc ...passed 00:07:26.027 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:07:26.027 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-24 23:51:21.821294] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:26.027 [2024-07-24 23:51:21.821324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:26.027 passed 00:07:26.027 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-24 23:51:21.821477] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
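The build_sgl_request failures above ("SGL length 16777216 exceeds max keyed SGL block size 16777215") come from a hard protocol bound: an NVMe keyed SGL data block descriptor stores its length in a 24-bit field, so a single segment tops out at 0xFFFFFF bytes. A standalone sketch of the bound, using an illustrative constant name rather than the internal SPDK one:

    #include <stdbool.h>
    #include <stdint.h>

    /* 24-bit keyed SGL length field: max 16777215 bytes per segment. */
    #define KEYED_SGL_MAX_LEN ((1u << 24) - 1)

    static bool
    sgl_segment_fits(uint64_t len)
    {
            return len <= KEYED_SGL_MAX_LEN;
    }

A 16777216-byte request is exactly one byte over this limit, which is what the test feeds in.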
00:07:26.027 [2024-07-24 23:51:21.821528] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:07:26.027 [2024-07-24 23:51:21.821553] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x717b0d913200 on poll group 0x50c000000040 00:07:26.027 [2024-07-24 23:51:21.821593] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:07:26.027 [2024-07-24 23:51:21.821623] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:07:26.027 [2024-07-24 23:51:21.821641] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x717b0d913200 on poll group 0x50c000000040 00:07:26.027 [2024-07-24 23:51:21.821697] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:26.027 passed 00:07:26.027 00:07:26.027 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.027 suites 1 1 n/a 0 0 00:07:26.027 tests 21 21 21 0 0 00:07:26.027 asserts 397 397 397 0 n/a 00:07:26.027 00:07:26.027 Elapsed time = 0.002 seconds 00:07:26.027 00:07:26.027 real 0m0.033s 00:07:26.027 user 0m0.015s 00:07:26.027 sys 0m0.018s 00:07:26.027 23:51:21 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.027 23:51:21 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:26.027 ************************************ 00:07:26.027 END TEST unittest_nvme_rdma 00:07:26.027 ************************************ 00:07:26.027 23:51:21 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:26.027 23:51:21 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.027 23:51:21 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.027 23:51:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:26.027 ************************************ 00:07:26.027 START TEST unittest_nvmf_transport 00:07:26.027 ************************************ 00:07:26.027 23:51:21 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:26.286 00:07:26.286 00:07:26.286 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.286 http://cunit.sourceforge.net/ 00:07:26.286 00:07:26.286 00:07:26.286 Suite: nvmf 00:07:26.286 Test: test_spdk_nvmf_transport_create ...[2024-07-24 23:51:21.904660] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
00:07:26.286 [2024-07-24 23:51:21.904958] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:07:26.286 [2024-07-24 23:51:21.905016] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:07:26.286 [2024-07-24 23:51:21.905091] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:07:26.286 passed 00:07:26.286 Test: test_nvmf_transport_poll_group_create ...passed 00:07:26.286 Test: test_spdk_nvmf_transport_opts_init ...passed 00:07:26.286 Test: test_spdk_nvmf_transport_listen_ext ...[2024-07-24 23:51:21.905395] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:07:26.286 [2024-07-24 23:51:21.905436] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:07:26.286 [2024-07-24 23:51:21.905470] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:07:26.286 passed 00:07:26.286 00:07:26.286 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.286 suites 1 1 n/a 0 0 00:07:26.286 tests 4 4 4 0 0 00:07:26.286 asserts 49 49 49 0 n/a 00:07:26.286 00:07:26.286 Elapsed time = 0.001 seconds 00:07:26.286 00:07:26.286 real 0m0.038s 00:07:26.286 user 0m0.019s 00:07:26.286 sys 0m0.020s 00:07:26.286 23:51:21 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.286 ************************************ 00:07:26.286 23:51:21 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:07:26.286 END TEST unittest_nvmf_transport 00:07:26.286 ************************************ 00:07:26.286 23:51:21 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:26.286 23:51:21 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.286 23:51:21 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.286 23:51:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:26.286 ************************************ 00:07:26.286 START TEST unittest_rdma 00:07:26.286 ************************************ 00:07:26.287 23:51:21 unittest.unittest_rdma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:26.287 00:07:26.287 00:07:26.287 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.287 http://cunit.sourceforge.net/ 00:07:26.287 00:07:26.287 00:07:26.287 Suite: rdma_common 00:07:26.287 Test: test_spdk_rdma_pd ...[2024-07-24 23:51:21.989588] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:07:26.287 passed 00:07:26.287 00:07:26.287 [2024-07-24 23:51:21.989902] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:07:26.287 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.287 suites 1 1 n/a 0 0 00:07:26.287 tests 1 1 1 0 0 00:07:26.287 asserts 31 31 31 0 n/a 00:07:26.287 00:07:26.287 Elapsed time = 0.001 seconds 00:07:26.287 00:07:26.287 real 0m0.028s 00:07:26.287 user 0m0.008s 00:07:26.287 sys 0m0.020s 00:07:26.287 23:51:22 
unittest.unittest_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.287 23:51:22 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:26.287 ************************************ 00:07:26.287 END TEST unittest_rdma 00:07:26.287 ************************************ 00:07:26.287 23:51:22 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:26.287 23:51:22 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:26.287 23:51:22 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.287 23:51:22 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.287 23:51:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:26.287 ************************************ 00:07:26.287 START TEST unittest_nvme_cuse 00:07:26.287 ************************************ 00:07:26.287 23:51:22 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:26.287 00:07:26.287 00:07:26.287 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.287 http://cunit.sourceforge.net/ 00:07:26.287 00:07:26.287 00:07:26.287 Suite: nvme_cuse 00:07:26.287 Test: test_cuse_nvme_submit_io_read_write ...passed 00:07:26.287 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:07:26.287 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:07:26.287 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:07:26.287 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:07:26.287 Test: test_cuse_nvme_submit_io ...passed 00:07:26.287 Test: test_cuse_nvme_reset ...[2024-07-24 23:51:22.068334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:07:26.287 passed 00:07:26.287 Test: test_nvme_cuse_stop ...[2024-07-24 23:51:22.068570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:07:26.855 passed 00:07:26.855 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:07:26.855 00:07:26.855 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.855 suites 1 1 n/a 0 0 00:07:26.855 tests 9 9 9 0 0 00:07:26.855 asserts 118 118 118 0 n/a 00:07:26.855 00:07:26.855 Elapsed time = 0.504 seconds 00:07:26.855 00:07:26.855 real 0m0.535s 00:07:26.855 user 0m0.270s 00:07:26.855 sys 0m0.267s 00:07:26.855 23:51:22 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.855 23:51:22 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:07:26.855 ************************************ 00:07:26.855 END TEST unittest_nvme_cuse 00:07:26.855 ************************************ 00:07:26.855 23:51:22 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:07:26.855 23:51:22 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.855 23:51:22 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.855 23:51:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:26.855 ************************************ 00:07:26.855 START TEST unittest_nvmf 00:07:26.855 ************************************ 00:07:26.855 23:51:22 unittest.unittest_nvmf -- common/autotest_common.sh@1125 -- # unittest_nvmf 00:07:26.855 23:51:22 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:07:26.855 00:07:26.855 00:07:26.855 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.855 http://cunit.sourceforge.net/ 00:07:26.855 00:07:26.855 00:07:26.855 Suite: nvmf 00:07:26.855 Test: test_get_log_page ...[2024-07-24 23:51:22.661155] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:26.855 passed 00:07:26.855 Test: test_process_fabrics_cmd ...passed 00:07:26.855 Test: test_connect ...[2024-07-24 23:51:22.661462] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:07:26.855 [2024-07-24 23:51:22.662306] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:07:26.855 [2024-07-24 23:51:22.662384] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:07:26.855 [2024-07-24 23:51:22.662436] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:07:26.855 [2024-07-24 23:51:22.662476] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:07:26.855 [2024-07-24 23:51:22.662516] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:07:26.855 [2024-07-24 23:51:22.662553] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:07:26.855 [2024-07-24 23:51:22.662607] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 899:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:07:26.855 [2024-07-24 23:51:22.662657] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
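test_connect above walks the fabrics CONNECT validation chain: RECFMT must be 0, SQSIZE is zero-based so 0 is rejected and the admin queue here allows 1..31 (I/O queues 1..63), HOSTNQN must be null-terminated, and this target accepts only the dynamic controller model, i.e. CNTLID 0xFFFF. A simplified, illustrative check of the admin-queue fields (not the SPDK-internal struct):

    #include <stdbool.h>
    #include <stdint.h>

    struct connect_fields {
            uint16_t recfmt;
            uint16_t sqsize;        /* zero-based queue size */
            uint16_t cntlid;
    };

    /* Admin-queue CONNECT checks mirrored from the errors logged above;
     * the 31-entry cap reflects this test's configured queue depth. */
    static bool
    admin_connect_ok(const struct connect_fields *c)
    {
            return c->recfmt == 0 &&
                   c->sqsize >= 1 && c->sqsize <= 31 &&
                   c->cntlid == 0xFFFF; /* dynamic controller model */
    }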
00:07:26.855 [2024-07-24 23:51:22.662770] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:07:26.855 [2024-07-24 23:51:22.662910] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:07:26.855 [2024-07-24 23:51:22.663208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:07:26.855 [2024-07-24 23:51:22.663311] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 688:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:07:26.855 [2024-07-24 23:51:22.663392] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 695:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:07:26.855 [2024-07-24 23:51:22.663473] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 719:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:07:26.855 [2024-07-24 23:51:22.663573] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:07:26.855 [2024-07-24 23:51:22.663726] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:07:26.855 passed 00:07:26.855 Test: test_get_ns_id_desc_list ...[2024-07-24 23:51:22.663842] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:07:26.855 passed 00:07:26.855 Test: test_identify_ns ...[2024-07-24 23:51:22.664253] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:26.855 [2024-07-24 23:51:22.664539] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:07:26.855 [2024-07-24 23:51:22.664672] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:07:26.855 passed 00:07:26.855 Test: test_identify_ns_iocs_specific ...[2024-07-24 23:51:22.664861] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:26.855 [2024-07-24 23:51:22.665238] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:26.855 passed 00:07:26.855 Test: test_reservation_write_exclusive ...passed 00:07:26.855 Test: test_reservation_exclusive_access ...passed 00:07:26.855 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:07:26.855 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:07:26.855 Test: test_reservation_notification_log_page ...passed 00:07:26.855 Test: test_get_dif_ctx ...passed 00:07:26.855 Test: test_set_get_features ...[2024-07-24 23:51:22.665925] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:26.855 [2024-07-24 23:51:22.665980] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:26.855 [2024-07-24 23:51:22.666008] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:07:26.855 [2024-07-24 23:51:22.666061] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set 
unsupported DULBE bit 00:07:26.855 passed 00:07:26.855 Test: test_identify_ctrlr ...passed 00:07:26.855 Test: test_identify_ctrlr_iocs_specific ...passed 00:07:26.856 Test: test_custom_admin_cmd ...passed 00:07:26.856 Test: test_fused_compare_and_write ...[2024-07-24 23:51:22.666631] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4249:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:07:26.856 [2024-07-24 23:51:22.666713] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:26.856 passed 00:07:26.856 Test: test_multi_async_event_reqs ...passed 00:07:26.856 Test: test_get_ana_log_page_one_ns_per_anagrp ...[2024-07-24 23:51:22.666752] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4256:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:26.856 passed 00:07:26.856 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:07:26.856 Test: test_multi_async_events ...passed 00:07:26.856 Test: test_rae ...passed 00:07:26.856 Test: test_nvmf_ctrlr_create_destruct ...passed 00:07:26.856 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:07:26.856 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:07:26.856 Test: test_zcopy_read ...[2024-07-24 23:51:22.667507] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:07:26.856 [2024-07-24 23:51:22.667565] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:07:26.856 passed 00:07:26.856 Test: test_zcopy_write ...passed 00:07:26.856 Test: test_nvmf_property_set ...passed 00:07:26.856 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-24 23:51:22.667856] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:26.856 [2024-07-24 23:51:22.667918] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:26.856 passed 00:07:26.856 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:07:26.856 Test: test_nvmf_ctrlr_ns_attachment ...[2024-07-24 23:51:22.667973] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1970:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:07:26.856 [2024-07-24 23:51:22.667995] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1976:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:07:26.856 [2024-07-24 23:51:22.668023] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:26.856 [2024-07-24 23:51:22.668047] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:26.856 passed 00:07:26.856 Test: test_nvmf_check_qpair_active ...[2024-07-24 23:51:22.668251] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:07:26.856 passed 00:07:26.856 00:07:26.856 [2024-07-24 23:51:22.668281] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4755:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:07:26.856 [2024-07-24 23:51:22.668298] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:07:26.856 [2024-07-24 23:51:22.668336] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:07:26.856 [2024-07-24 23:51:22.668352] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:07:26.856 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.856 suites 1 1 n/a 0 0 00:07:26.856 tests 32 32 32 0 0 00:07:26.856 asserts 983 983 983 0 n/a 00:07:26.856 00:07:26.856 Elapsed time = 0.007 seconds 00:07:26.856 23:51:22 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:07:26.856 00:07:26.856 00:07:26.856 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.856 http://cunit.sourceforge.net/ 00:07:26.856 00:07:26.856 00:07:26.856 Suite: nvmf 00:07:26.856 Test: test_get_rw_params ...passed 00:07:26.856 Test: test_get_rw_ext_params ...passed 00:07:26.856 Test: test_lba_in_range ...passed 00:07:26.856 Test: test_get_dif_ctx ...passed 00:07:26.856 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:07:26.856 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-24 23:51:22.702727] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:07:26.856 passed 00:07:26.856 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-24 23:51:22.703024] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:07:26.856 [2024-07-24 23:51:22.703085] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:07:26.856 [2024-07-24 23:51:22.703160] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:07:26.856 passed 00:07:26.856 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-24 23:51:22.703225] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:07:26.856 [2024-07-24 23:51:22.703288] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:07:26.856 [2024-07-24 23:51:22.703329] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:07:26.856 passed 00:07:26.856 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:07:26.856 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:07:26.856 00:07:26.856 [2024-07-24 23:51:22.703385] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:07:26.856 [2024-07-24 23:51:22.703423] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:07:26.856 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.856 suites 1 1 n/a 0 0 00:07:26.856 tests 10 10 10 0 0 00:07:26.856 asserts 159 159 159 0 n/a 00:07:26.856 00:07:26.856 Elapsed time = 0.001 seconds 00:07:26.856 23:51:22 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 
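The compare-and-write and zcopy cases in the ctrlr_bdev suite above all hinge on one range check: a command whose start LBA plus block count runs past the namespace ("end of media") must be rejected before it reaches the bdev. A sketch of an overflow-safe version of that check (illustrative helper, not the SPDK-internal one):

    #include <stdbool.h>
    #include <stdint.h>

    /* True when [start_lba, start_lba + num_blocks) lies inside the
     * namespace; arranged so start_lba + num_blocks cannot wrap. */
    static bool
    lba_range_ok(uint64_t start_lba, uint64_t num_blocks,
                 uint64_t ns_size_blocks)
    {
            if (num_blocks > ns_size_blocks) {
                    return false;
            }
            return start_lba <= ns_size_blocks - num_blocks;
    }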
00:07:27.116 00:07:27.116 00:07:27.116 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.116 http://cunit.sourceforge.net/ 00:07:27.116 00:07:27.116 00:07:27.116 Suite: nvmf 00:07:27.116 Test: test_discovery_log ...passed 00:07:27.116 Test: test_discovery_log_with_filters ...passed 00:07:27.116 00:07:27.116 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.116 suites 1 1 n/a 0 0 00:07:27.116 tests 2 2 2 0 0 00:07:27.116 asserts 238 238 238 0 n/a 00:07:27.116 00:07:27.116 Elapsed time = 0.003 seconds 00:07:27.116 23:51:22 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:07:27.116 00:07:27.116 00:07:27.116 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.116 http://cunit.sourceforge.net/ 00:07:27.116 00:07:27.116 00:07:27.116 Suite: nvmf 00:07:27.116 Test: nvmf_test_create_subsystem ...[2024-07-24 23:51:22.783655] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:07:27.116 [2024-07-24 23:51:22.783893] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:07:27.116 [2024-07-24 23:51:22.784081] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:07:27.116 [2024-07-24 23:51:22.784163] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:07:27.116 [2024-07-24 23:51:22.784250] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:07:27.116 [2024-07-24 23:51:22.784289] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:07:27.116 [2024-07-24 23:51:22.784362] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:07:27.116 [2024-07-24 23:51:22.784413] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:07:27.116 [2024-07-24 23:51:22.784478] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:07:27.116 [2024-07-24 23:51:22.784529] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:07:27.117 [2024-07-24 23:51:22.784575] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
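nvmf_test_create_subsystem above enumerates the NQN rules one rejection at a time: total length between 11 and 223 characters, an "nqn.YYYY-MM." date prefix, a reverse-domain part whose labels start with a letter and end alphanumeric, and a non-empty user string after ':'. A simplified validator covering just the length and shape checks (per-label rules elided; this is not the SPDK implementation):

    #include <stdbool.h>
    #include <string.h>

    static bool
    nqn_looks_valid(const char *nqn)
    {
            size_t len = strlen(nqn);
            const char *colon;

            if (len < 11 || len > 223) {
                    return false;   /* e.g. "" or the 224-char case above */
            }
            if (strncmp(nqn, "nqn.", 4) != 0) {
                    return false;
            }
            colon = strchr(nqn, ':');
            if (colon == NULL || colon[1] == '\0') {
                    return false;   /* "nqn.2016-06.io.spdk:" is rejected */
            }
            return true;            /* label-level checks elided */
    }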
00:07:27.117 [2024-07-24 23:51:22.784620] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:07:27.117 [2024-07-24 23:51:22.784785] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:07:27.117 [2024-07-24 23:51:22.784887] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:07:27.117 [2024-07-24 23:51:22.785071] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:07:27.117 [2024-07-24 23:51:22.785130] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:07:27.117 [2024-07-24 23:51:22.785287] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:07:27.117 [2024-07-24 23:51:22.785333] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:07:27.117 [2024-07-24 23:51:22.785389] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:27.117 passed 00:07:27.117 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-24 23:51:22.785427] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:07:27.117 [2024-07-24 23:51:22.785504] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:27.117 [2024-07-24 23:51:22.785550] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:07:27.117 [2024-07-24 23:51:22.786066] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:07:27.117 passed 00:07:27.117 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-24 23:51:22.786146] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2031:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:07:27.117 [2024-07-24 23:51:22.786469] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2161:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
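The add_ns failures just above encode the NSID rules: NSID 0 is invalid, 4294967295 (0xFFFFFFFF) is the broadcast value, and an NSID already in use (5 here) cannot be assigned twice. The static part of that check reduces to a few lines; uniqueness against existing namespaces is tracked separately:

    #include <stdbool.h>
    #include <stdint.h>

    /* NSID 0 is reserved and 0xFFFFFFFF is the broadcast NSID; neither
     * may be assigned to a namespace. */
    static bool
    nsid_assignable(uint32_t nsid)
    {
            return nsid != 0 && nsid != 0xFFFFFFFFu;
    }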
00:07:27.117 passed 00:07:27.117 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:07:27.117 Test: test_spdk_nvmf_ns_visible ...[2024-07-24 23:51:22.786768] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:07:27.117 passed 00:07:27.117 Test: test_reservation_register ...[2024-07-24 23:51:22.787374] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:27.117 [2024-07-24 23:51:22.787543] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3164:nvmf_ns_reservation_register: *ERROR*: No registrant 00:07:27.117 passed 00:07:27.117 Test: test_reservation_register_with_ptpl ...passed 00:07:27.117 Test: test_reservation_acquire_preempt_1 ...[2024-07-24 23:51:22.788874] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:27.117 passed 00:07:27.117 Test: test_reservation_acquire_release_with_ptpl ...passed 00:07:27.117 Test: test_reservation_release ...[2024-07-24 23:51:22.790987] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:27.117 passed 00:07:27.117 Test: test_reservation_unregister_notification ...[2024-07-24 23:51:22.791273] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:27.117 passed 00:07:27.117 Test: test_reservation_release_notification ...[2024-07-24 23:51:22.791554] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:27.117 passed 00:07:27.117 Test: test_reservation_release_notification_write_exclusive ...[2024-07-24 23:51:22.791900] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:27.117 passed 00:07:27.117 Test: test_reservation_clear_notification ...[2024-07-24 23:51:22.792229] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:27.117 passed 00:07:27.117 Test: test_reservation_preempt_notification ...[2024-07-24 23:51:22.792518] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:27.117 passed 00:07:27.117 Test: test_spdk_nvmf_ns_event ...passed 00:07:27.117 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:07:27.117 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:07:27.117 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-24 23:51:22.793478] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:07:27.117 passed 00:07:27.117 Test: test_nvmf_ns_reservation_report ...[2024-07-24 23:51:22.793572] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:07:27.117 [2024-07-24 23:51:22.793755] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3469:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:07:27.117 passed 00:07:27.117 Test: test_nvmf_nqn_is_valid ...[2024-07-24 
23:51:22.793896] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:07:27.117 [2024-07-24 23:51:22.793963] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:027507f1-5392-4046-ac21-3f4e18f3a61": uuid is not the correct length 00:07:27.117 passed 00:07:27.117 Test: test_nvmf_ns_reservation_restore ...[2024-07-24 23:51:22.794011] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:07:27.117 [2024-07-24 23:51:22.794197] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2663:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:07:27.117 passed 00:07:27.117 Test: test_nvmf_subsystem_state_change ...passed 00:07:27.117 Test: test_nvmf_reservation_custom_ops ...passed 00:07:27.117 00:07:27.117 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.117 suites 1 1 n/a 0 0 00:07:27.117 tests 24 24 24 0 0 00:07:27.117 asserts 499 499 499 0 n/a 00:07:27.117 00:07:27.117 Elapsed time = 0.012 seconds 00:07:27.117 23:51:22 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:07:27.117 00:07:27.117 00:07:27.117 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.117 http://cunit.sourceforge.net/ 00:07:27.117 00:07:27.117 00:07:27.117 Suite: nvmf 00:07:27.117 Test: test_nvmf_tcp_create ...[2024-07-24 23:51:22.867484] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 750:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:07:27.117 passed 00:07:27.117 Test: test_nvmf_tcp_destroy ...passed 00:07:27.117 Test: test_nvmf_tcp_poll_group_create ...passed 00:07:27.117 Test: test_nvmf_tcp_send_c2h_data ...passed 00:07:27.117 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:07:27.117 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:07:27.117 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:07:27.117 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-24 23:51:22.981879] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.117 passed 00:07:27.117 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:07:27.117 Test: test_nvmf_tcp_icreq_handle ...[2024-07-24 23:51:22.981968] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59d0b020 is same with the state(5) to be set 00:07:27.117 [2024-07-24 23:51:22.982009] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59d0b020 is same with the state(5) to be set 00:07:27.117 [2024-07-24 23:51:22.982048] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.117 [2024-07-24 23:51:22.982091] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59d0b020 is same with the state(5) to be set 00:07:27.117 [2024-07-24 23:51:22.982206] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:27.117 [2024-07-24 23:51:22.982250] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
errno=2 00:07:27.117 [2024-07-24 23:51:22.982291] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59d0d180 is same with the state(5) to be set 00:07:27.117 [2024-07-24 23:51:22.982317] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:27.117 [2024-07-24 23:51:22.982352] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59d0d180 is same with the state(5) to be set 00:07:27.117 [2024-07-24 23:51:22.982370] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.117 [2024-07-24 23:51:22.982413] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59d0d180 is same with the state(5) to be set 00:07:27.117 [2024-07-24 23:51:22.982453] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:07:27.117 passed 00:07:27.117 Test: test_nvmf_tcp_check_xfer_type ...passed 00:07:27.117 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-24 23:51:22.982497] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59d0d180 is same with the state(5) to be set 00:07:27.117 [2024-07-24 23:51:22.982594] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2563:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:07:27.117 [2024-07-24 23:51:22.982637] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.117 [2024-07-24 23:51:22.982663] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59d116c0 is same with the state(5) to be set 00:07:27.117 passed 00:07:27.117 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-24 23:51:22.982708] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2295:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7a4f59c0c8c0 00:07:27.118 [2024-07-24 23:51:22.982752] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.118 [2024-07-24 23:51:22.982791] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59c0c020 is same with the state(5) to be set 00:07:27.118 [2024-07-24 23:51:22.982847] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2352:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7a4f59c0c020 00:07:27.118 [2024-07-24 23:51:22.982883] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.118 [2024-07-24 23:51:22.982914] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59c0c020 is same with the state(5) to be set 00:07:27.118 [2024-07-24 23:51:22.982953] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2305:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:07:27.118 [2024-07-24 23:51:22.982988] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.118 [2024-07-24 23:51:22.983024] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59c0c020 is same with the state(5) to be set 00:07:27.118 [2024-07-24 23:51:22.983055] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2344:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:07:27.118 [2024-07-24 23:51:22.983094] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.118 [2024-07-24 23:51:22.983125] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59c0c020 is same with the state(5) to be set 00:07:27.118 [2024-07-24 23:51:22.983176] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.118 [2024-07-24 23:51:22.983209] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59c0c020 is same with the state(5) to be set 00:07:27.118 [2024-07-24 23:51:22.983250] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.118 [2024-07-24 23:51:22.983285] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59c0c020 is same with the state(5) to be set 00:07:27.118 [2024-07-24 23:51:22.983341] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.118 [2024-07-24 23:51:22.983367] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59c0c020 is same with the state(5) to be set 00:07:27.118 [2024-07-24 23:51:22.983402] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.118 [2024-07-24 23:51:22.983430] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59c0c020 is same with the state(5) to be set 00:07:27.118 [2024-07-24 23:51:22.983475] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.118 [2024-07-24 23:51:22.983502] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59c0c020 is same with the state(5) to be set 00:07:27.118 [2024-07-24 23:51:22.983541] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:27.118 [2024-07-24 23:51:22.983573] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4f59c0c020 is same with the state(5) to be set 00:07:27.118 passed 00:07:27.377 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed 00:07:27.377 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-24 23:51:23.014310] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:07:27.377 [2024-07-24 23:51:23.014390] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
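The *ERROR* lines above are expected output rather than failures: each CUnit suite feeds deliberately invalid input (a bad ICReq PFV, an oversized SGL, an undersized PSK buffer) and asserts that the library rejects it, so a fully passing run is dense with error logging. Below is a minimal sketch of that pattern using the public CUnit 2.1 API these test binaries report linking against; handle_icreq_pfv is a hypothetical stand-in, not SPDK's handler.

#include <CUnit/Basic.h>

/* Hypothetical stand-in for the code under test; the real suites call
 * SPDK internals such as nvmf_tcp_icreq_handle directly. */
static int handle_icreq_pfv(unsigned pfv)
{
        /* The log's "Expected ICReq PFV 0, got 1" path: only PFV 0 is valid. */
        return pfv == 0 ? 0 : -1;
}

static void test_icreq_rejects_bad_pfv(void)
{
        CU_ASSERT(handle_icreq_pfv(1) != 0); /* invalid PFV must be rejected */
        CU_ASSERT(handle_icreq_pfv(0) == 0); /* valid PFV must be accepted */
}

int main(void)
{
        if (CU_initialize_registry() != CUE_SUCCESS)
                return CU_get_error();
        CU_pSuite suite = CU_add_suite("nvmf", NULL, NULL);
        CU_add_test(suite, "test_icreq_rejects_bad_pfv", test_icreq_rejects_bad_pfv);
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests(); /* prints the per-suite Run Summary seen throughout this log */
        CU_cleanup_registry();
        return CU_get_error();
}

The assertions pass precisely because the error path fires, which is why a green build still prints pages of *ERROR* lines before each "passed".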
00:07:27.377 passed 00:07:27.377 Test: test_nvmf_tcp_tls_generate_retained_psk ...passed 00:07:27.377 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-24 23:51:23.015406] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:07:27.377 [2024-07-24 23:51:23.015473] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:07:27.377 passed 00:07:27.377 00:07:27.377 [2024-07-24 23:51:23.016155] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:07:27.377 [2024-07-24 23:51:23.016217] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:07:27.377 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.377 suites 1 1 n/a 0 0 00:07:27.377 tests 17 17 17 0 0 00:07:27.377 asserts 222 222 222 0 n/a 00:07:27.377 00:07:27.377 Elapsed time = 0.174 seconds 00:07:27.377 23:51:23 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:07:27.377 00:07:27.377 00:07:27.377 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.377 http://cunit.sourceforge.net/ 00:07:27.377 00:07:27.377 00:07:27.377 Suite: nvmf 00:07:27.377 Test: test_nvmf_tgt_create_poll_group ...passed 00:07:27.377 00:07:27.377 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.377 suites 1 1 n/a 0 0 00:07:27.377 tests 1 1 1 0 0 00:07:27.377 asserts 17 17 17 0 n/a 00:07:27.377 00:07:27.377 Elapsed time = 0.026 seconds 00:07:27.377 00:07:27.377 real 0m0.542s 00:07:27.377 user 0m0.231s 00:07:27.377 sys 0m0.310s 00:07:27.377 23:51:23 unittest.unittest_nvmf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.377 23:51:23 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:07:27.377 ************************************ 00:07:27.377 END TEST unittest_nvmf 00:07:27.377 ************************************ 00:07:27.377 23:51:23 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:27.377 23:51:23 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:27.377 23:51:23 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:27.377 23:51:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.377 23:51:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.377 23:51:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:27.377 ************************************ 00:07:27.377 START TEST unittest_nvmf_rdma 00:07:27.377 ************************************ 00:07:27.377 23:51:23 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:27.652 00:07:27.652 00:07:27.652 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.652 http://cunit.sourceforge.net/ 00:07:27.652 00:07:27.652 00:07:27.652 Suite: nvmf 00:07:27.652 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-24 23:51:23.261036] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1863:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:07:27.652 
[2024-07-24 23:51:23.261316] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:07:27.652 [2024-07-24 23:51:23.261373] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:07:27.652 passed 00:07:27.652 Test: test_spdk_nvmf_rdma_request_process ...passed 00:07:27.652 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:07:27.652 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:07:27.652 Test: test_nvmf_rdma_opts_init ...passed 00:07:27.652 Test: test_nvmf_rdma_request_free_data ...passed 00:07:27.652 Test: test_nvmf_rdma_resources_create ...passed 00:07:27.652 Test: test_nvmf_rdma_qpair_compare ...passed 00:07:27.652 Test: test_nvmf_rdma_resize_cq ...[2024-07-24 23:51:23.264349] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:07:27.652 Using CQ of insufficient size may lead to CQ overrun 00:07:27.652 passed 00:07:27.652 00:07:27.652 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.652 suites 1 1 n/a 0 0 00:07:27.652 tests 9 9 9 0 0 00:07:27.652 asserts 579 579 579 0 n/a 00:07:27.652 00:07:27.652 Elapsed time = 0.004 seconds[2024-07-24 23:51:23.264401] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 959:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:07:27.652 [2024-07-24 23:51:23.264484] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:27.652 00:07:27.652 00:07:27.652 real 0m0.043s 00:07:27.652 user 0m0.021s 00:07:27.652 sys 0m0.022s 00:07:27.652 23:51:23 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.652 23:51:23 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:27.652 ************************************ 00:07:27.652 END TEST unittest_nvmf_rdma 00:07:27.652 ************************************ 00:07:27.652 23:51:23 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:27.652 23:51:23 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:07:27.652 23:51:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.652 23:51:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.652 23:51:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:27.652 ************************************ 00:07:27.652 START TEST unittest_scsi 00:07:27.652 ************************************ 00:07:27.652 23:51:23 unittest.unittest_scsi -- common/autotest_common.sh@1125 -- # unittest_scsi 00:07:27.652 23:51:23 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:07:27.652 00:07:27.652 00:07:27.652 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.652 http://cunit.sourceforge.net/ 00:07:27.652 00:07:27.652 00:07:27.652 Suite: dev_suite 00:07:27.652 Test: dev_destruct_null_dev ...passed 00:07:27.652 Test: dev_destruct_zero_luns ...passed 00:07:27.652 Test: dev_destruct_null_lun ...passed 00:07:27.652 Test: dev_destruct_success ...passed 00:07:27.652 Test: dev_construct_num_luns_zero ...passed 00:07:27.652 Test: dev_construct_no_lun_zero ...[2024-07-24 
23:51:23.356826] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:07:27.652 [2024-07-24 23:51:23.357517] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:07:27.652 passed 00:07:27.652 Test: dev_construct_null_lun ...passed 00:07:27.652 Test: dev_construct_name_too_long ...passed 00:07:27.652 Test: dev_construct_success ...passed 00:07:27.652 Test: dev_construct_success_lun_zero_not_first ...[2024-07-24 23:51:23.357566] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:07:27.652 [2024-07-24 23:51:23.357624] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:07:27.652 passed 00:07:27.652 Test: dev_queue_mgmt_task_success ...passed 00:07:27.652 Test: dev_queue_task_success ...passed 00:07:27.652 Test: dev_stop_success ...passed 00:07:27.652 Test: dev_add_port_max_ports ...passed 00:07:27.652 Test: dev_add_port_construct_failure1 ...[2024-07-24 23:51:23.358468] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:07:27.652 [2024-07-24 23:51:23.358549] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:07:27.652 passed 00:07:27.652 Test: dev_add_port_construct_failure2 ...[2024-07-24 23:51:23.358990] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:07:27.652 passed 00:07:27.652 Test: dev_add_port_success1 ...passed 00:07:27.652 Test: dev_add_port_success2 ...passed 00:07:27.652 Test: dev_add_port_success3 ...passed 00:07:27.652 Test: dev_find_port_by_id_num_ports_zero ...passed 00:07:27.652 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:07:27.652 Test: dev_find_port_by_id_success ...passed 00:07:27.652 Test: dev_add_lun_bdev_not_found ...passed 00:07:27.652 Test: dev_add_lun_no_free_lun_id ...passed 00:07:27.652 Test: dev_add_lun_success1 ...passed 00:07:27.652 Test: dev_add_lun_success2 ...passed 00:07:27.652 Test: dev_check_pending_tasks ...passed 00:07:27.652 Test: dev_iterate_luns ...[2024-07-24 23:51:23.360013] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:07:27.652 passed 00:07:27.652 Test: dev_find_free_lun ...passed 00:07:27.652 00:07:27.652 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.652 suites 1 1 n/a 0 0 00:07:27.652 tests 29 29 29 0 0 00:07:27.652 asserts 97 97 97 0 n/a 00:07:27.652 00:07:27.652 Elapsed time = 0.004 seconds 00:07:27.652 23:51:23 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:07:27.652 00:07:27.652 00:07:27.652 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.652 http://cunit.sourceforge.net/ 00:07:27.652 00:07:27.652 00:07:27.652 Suite: lun_suite 00:07:27.652 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:07:27.652 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-24 23:51:23.394785] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 
169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:07:27.652 [2024-07-24 23:51:23.395159] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:07:27.652 passed 00:07:27.652 Test: lun_task_mgmt_execute_lun_reset ...passed 00:07:27.652 Test: lun_task_mgmt_execute_target_reset ...passed 00:07:27.652 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-24 23:51:23.395354] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:07:27.652 passed 00:07:27.652 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:07:27.652 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:07:27.652 Test: lun_append_task_null_lun_not_supported ...passed 00:07:27.652 Test: lun_execute_scsi_task_pending ...passed 00:07:27.652 Test: lun_execute_scsi_task_complete ...passed 00:07:27.652 Test: lun_execute_scsi_task_resize ...passed 00:07:27.652 Test: lun_destruct_success ...passed 00:07:27.652 Test: lun_construct_null_ctx ...passed 00:07:27.652 Test: lun_construct_success ...passed 00:07:27.652 Test: lun_reset_task_wait_scsi_task_complete ...[2024-07-24 23:51:23.395646] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:07:27.652 passed 00:07:27.652 Test: lun_reset_task_suspend_scsi_task ...passed 00:07:27.652 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:07:27.653 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:07:27.653 00:07:27.653 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.653 suites 1 1 n/a 0 0 00:07:27.653 tests 18 18 18 0 0 00:07:27.653 asserts 153 153 153 0 n/a 00:07:27.653 00:07:27.653 Elapsed time = 0.001 seconds 00:07:27.653 23:51:23 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:07:27.653 00:07:27.653 00:07:27.653 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.653 http://cunit.sourceforge.net/ 00:07:27.653 00:07:27.653 00:07:27.653 Suite: scsi_suite 00:07:27.653 Test: scsi_init ...passed 00:07:27.653 00:07:27.653 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.653 suites 1 1 n/a 0 0 00:07:27.653 tests 1 1 1 0 0 00:07:27.653 asserts 1 1 1 0 n/a 00:07:27.653 00:07:27.653 Elapsed time = 0.000 seconds 00:07:27.653 23:51:23 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:07:27.653 00:07:27.653 00:07:27.653 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.653 http://cunit.sourceforge.net/ 00:07:27.653 00:07:27.653 00:07:27.653 Suite: translation_suite 00:07:27.653 Test: mode_select_6_test ...passed 00:07:27.653 Test: mode_select_6_test2 ...passed 00:07:27.653 Test: mode_sense_6_test ...passed 00:07:27.653 Test: mode_sense_10_test ...passed 00:07:27.653 Test: inquiry_evpd_test ...passed 00:07:27.653 Test: inquiry_standard_test ...passed 00:07:27.653 Test: inquiry_overflow_test ...passed 00:07:27.653 Test: task_complete_test ...passed 00:07:27.653 Test: lba_range_test ...passed 00:07:27.653 Test: xfer_len_test ...[2024-07-24 23:51:23.449687] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:07:27.653 passed 00:07:27.653 Test: xfer_test ...passed 00:07:27.653 Test: scsi_name_padding_test ...passed 00:07:27.653 Test: get_dif_ctx_test ...passed 00:07:27.653 Test: 
unmap_split_test ...passed 00:07:27.653 00:07:27.653 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.653 suites 1 1 n/a 0 0 00:07:27.653 tests 14 14 14 0 0 00:07:27.653 asserts 1205 1205 1205 0 n/a 00:07:27.653 00:07:27.653 Elapsed time = 0.004 seconds 00:07:27.653 23:51:23 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:07:27.653 00:07:27.653 00:07:27.653 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.653 http://cunit.sourceforge.net/ 00:07:27.653 00:07:27.653 00:07:27.653 Suite: reservation_suite 00:07:27.653 Test: test_reservation_register ...[2024-07-24 23:51:23.482830] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:27.653 passed 00:07:27.653 Test: test_reservation_reserve ...[2024-07-24 23:51:23.483154] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:27.653 [2024-07-24 23:51:23.483260] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:07:27.653 passed 00:07:27.653 Test: test_all_registrant_reservation_reserve ...[2024-07-24 23:51:23.483315] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:07:27.653 passed 00:07:27.653 Test: test_all_registrant_reservation_access ...[2024-07-24 23:51:23.483415] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:27.653 [2024-07-24 23:51:23.483549] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:27.653 [2024-07-24 23:51:23.483652] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:07:27.653 [2024-07-24 23:51:23.483694] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:07:27.653 passed 00:07:27.653 Test: test_reservation_preempt_non_all_regs ...[2024-07-24 23:51:23.483787] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:27.653 passed 00:07:27.653 Test: test_reservation_preempt_all_regs ...[2024-07-24 23:51:23.483897] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:07:27.653 passed 00:07:27.653 Test: test_reservation_cmds_conflict ...[2024-07-24 23:51:23.484025] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:27.653 [2024-07-24 23:51:23.484180] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:27.653 [2024-07-24 23:51:23.484280] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:07:27.653 [2024-07-24 23:51:23.484341] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:27.653 [2024-07-24 23:51:23.484399] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 
851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:27.653 [2024-07-24 23:51:23.484443] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:27.653 passed 00:07:27.653 Test: test_scsi2_reserve_release ...passed 00:07:27.653 Test: test_pr_with_scsi2_reserve_release ...[2024-07-24 23:51:23.484510] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:27.653 passed 00:07:27.653 00:07:27.653 [2024-07-24 23:51:23.484626] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:27.653 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.653 suites 1 1 n/a 0 0 00:07:27.653 tests 9 9 9 0 0 00:07:27.653 asserts 344 344 344 0 n/a 00:07:27.653 00:07:27.653 Elapsed time = 0.002 seconds 00:07:27.653 00:07:27.653 real 0m0.157s 00:07:27.653 user 0m0.071s 00:07:27.653 sys 0m0.087s 00:07:27.653 23:51:23 unittest.unittest_scsi -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.653 23:51:23 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:07:27.653 ************************************ 00:07:27.653 END TEST unittest_scsi 00:07:27.653 ************************************ 00:07:27.922 23:51:23 unittest -- unit/unittest.sh@278 -- # uname -s 00:07:27.922 23:51:23 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']' 00:07:27.922 23:51:23 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock 00:07:27.922 23:51:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.922 23:51:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.922 23:51:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:27.922 ************************************ 00:07:27.922 START TEST unittest_sock 00:07:27.922 ************************************ 00:07:27.922 23:51:23 unittest.unittest_sock -- common/autotest_common.sh@1125 -- # unittest_sock 00:07:27.922 23:51:23 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:07:27.922 00:07:27.922 00:07:27.922 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.922 http://cunit.sourceforge.net/ 00:07:27.922 00:07:27.922 00:07:27.922 Suite: sock 00:07:27.922 Test: posix_sock ...passed 00:07:27.922 Test: ut_sock ...passed 00:07:27.922 Test: posix_sock_group ...passed 00:07:27.922 Test: ut_sock_group ...passed 00:07:27.922 Test: posix_sock_group_fairness ...passed 00:07:27.922 Test: _posix_sock_close ...passed 00:07:27.922 Test: sock_get_default_opts ...passed 00:07:27.922 Test: ut_sock_impl_get_set_opts ...passed 00:07:27.922 Test: posix_sock_impl_get_set_opts ...passed 00:07:27.922 Test: ut_sock_map ...passed 00:07:27.922 Test: override_impl_opts ...passed 00:07:27.922 Test: ut_sock_group_get_ctx ...passed 00:07:27.922 Test: posix_get_interface_name ...passed 00:07:27.922 00:07:27.922 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.922 suites 1 1 n/a 0 0 00:07:27.922 tests 13 13 13 0 0 00:07:27.922 asserts 360 360 360 0 n/a 00:07:27.922 00:07:27.922 Elapsed time = 0.013 seconds 00:07:27.922 23:51:23 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:07:27.922 00:07:27.922 00:07:27.922 CUnit - A unit testing framework for C 
- Version 2.1-3 00:07:27.922 http://cunit.sourceforge.net/ 00:07:27.922 00:07:27.922 00:07:27.922 Suite: posix 00:07:27.922 Test: flush ...passed 00:07:27.922 00:07:27.922 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.922 suites 1 1 n/a 0 0 00:07:27.922 tests 1 1 1 0 0 00:07:27.922 asserts 28 28 28 0 n/a 00:07:27.922 00:07:27.922 Elapsed time = 0.000 seconds 00:07:27.922 23:51:23 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:27.922 00:07:27.922 real 0m0.115s 00:07:27.922 user 0m0.042s 00:07:27.922 sys 0m0.050s 00:07:27.922 23:51:23 unittest.unittest_sock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.922 ************************************ 00:07:27.922 END TEST unittest_sock 00:07:27.922 23:51:23 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:07:27.922 ************************************ 00:07:27.922 23:51:23 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:27.922 23:51:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.922 23:51:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.922 23:51:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:27.922 ************************************ 00:07:27.922 START TEST unittest_thread 00:07:27.922 ************************************ 00:07:27.922 23:51:23 unittest.unittest_thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:27.922 00:07:27.922 00:07:27.922 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.922 http://cunit.sourceforge.net/ 00:07:27.922 00:07:27.922 00:07:27.922 Suite: io_channel 00:07:27.922 Test: thread_alloc ...passed 00:07:27.922 Test: thread_send_msg ...passed 00:07:27.922 Test: thread_poller ...passed 00:07:27.922 Test: poller_pause ...passed 00:07:27.922 Test: thread_for_each ...passed 00:07:27.922 Test: for_each_channel_remove ...passed 00:07:27.922 Test: for_each_channel_unreg ...[2024-07-24 23:51:23.757649] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x740989509640 already registered (old:0x513000000200 new:0x5130000003c0) 00:07:27.922 passed 00:07:27.922 Test: thread_name ...passed 00:07:27.922 Test: channel ...[2024-07-24 23:51:23.761571] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x65342550d1c0 00:07:27.922 passed 00:07:27.922 Test: channel_destroy_races ...passed 00:07:27.922 Test: thread_exit_test ...[2024-07-24 23:51:23.766260] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 639:thread_exit: *ERROR*: thread 0x519000007380 got timeout, and move it to the exited state forcefully 00:07:27.922 passed 00:07:27.922 Test: thread_update_stats_test ...passed 00:07:27.922 Test: nested_channel ...passed 00:07:27.922 Test: device_unregister_and_thread_exit_race ...passed 00:07:27.922 Test: cache_closest_timed_poller ...passed 00:07:27.922 Test: multi_timed_pollers_have_same_expiration ...passed 00:07:27.922 Test: io_device_lookup ...passed 00:07:27.922 Test: spdk_spin ...[2024-07-24 23:51:23.776476] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:27.922 [2024-07-24 23:51:23.776516] 
/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x74098950a020 00:07:27.922 [2024-07-24 23:51:23.776563] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:27.922 [2024-07-24 23:51:23.778278] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:27.922 [2024-07-24 23:51:23.778335] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x74098950a020 00:07:27.923 [2024-07-24 23:51:23.778359] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:27.923 [2024-07-24 23:51:23.778390] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x74098950a020 00:07:27.923 [2024-07-24 23:51:23.778416] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:27.923 [2024-07-24 23:51:23.778446] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x74098950a020 00:07:27.923 [2024-07-24 23:51:23.778460] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:07:27.923 [2024-07-24 23:51:23.778486] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x74098950a020 00:07:27.923 passed 00:07:27.923 Test: for_each_channel_and_thread_exit_race ...passed 00:07:27.923 Test: for_each_thread_and_thread_exit_race ...passed 00:07:27.923 00:07:27.923 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.923 suites 1 1 n/a 0 0 00:07:27.923 tests 20 20 20 0 0 00:07:27.923 asserts 409 409 409 0 n/a 00:07:27.923 00:07:27.923 Elapsed time = 0.048 seconds 00:07:28.181 00:07:28.181 real 0m0.080s 00:07:28.181 user 0m0.054s 00:07:28.181 sys 0m0.027s 00:07:28.181 23:51:23 unittest.unittest_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.181 23:51:23 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.181 ************************************ 00:07:28.181 END TEST unittest_thread 00:07:28.181 ************************************ 00:07:28.181 23:51:23 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:28.181 23:51:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.181 23:51:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.181 23:51:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:28.181 ************************************ 00:07:28.181 START TEST unittest_iobuf 00:07:28.181 ************************************ 00:07:28.181 23:51:23 unittest.unittest_iobuf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:28.181 00:07:28.181 00:07:28.181 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.181 http://cunit.sourceforge.net/ 00:07:28.181 00:07:28.181 00:07:28.181 Suite: io_channel 00:07:28.181 Test: iobuf ...passed 00:07:28.181 Test: iobuf_cache ...[2024-07-24 23:51:23.869576] 
/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:28.181 [2024-07-24 23:51:23.869785] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:28.181 [2024-07-24 23:51:23.869908] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:07:28.181 [2024-07-24 23:51:23.869944] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:28.181 [2024-07-24 23:51:23.870270] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:28.181 [2024-07-24 23:51:23.870396] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:28.181 passed 00:07:28.181 Test: iobuf_priority ...passed 00:07:28.181 00:07:28.181 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.181 suites 1 1 n/a 0 0 00:07:28.181 tests 3 3 3 0 0 00:07:28.181 asserts 131 131 131 0 n/a 00:07:28.181 00:07:28.181 Elapsed time = 0.009 seconds 00:07:28.181 00:07:28.181 real 0m0.043s 00:07:28.181 user 0m0.029s 00:07:28.181 sys 0m0.014s 00:07:28.181 23:51:23 unittest.unittest_iobuf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.181 23:51:23 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:07:28.181 ************************************ 00:07:28.181 END TEST unittest_iobuf 00:07:28.181 ************************************ 00:07:28.181 23:51:23 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:07:28.181 23:51:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.181 23:51:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.181 23:51:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:28.181 ************************************ 00:07:28.181 START TEST unittest_util 00:07:28.181 ************************************ 00:07:28.181 23:51:23 unittest.unittest_util -- common/autotest_common.sh@1125 -- # unittest_util 00:07:28.182 23:51:23 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:07:28.182 00:07:28.182 00:07:28.182 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.182 http://cunit.sourceforge.net/ 00:07:28.182 00:07:28.182 00:07:28.182 Suite: base64 00:07:28.182 Test: test_base64_get_encoded_strlen ...passed 00:07:28.182 Test: test_base64_get_decoded_len ...passed 00:07:28.182 Test: test_base64_encode ...passed 00:07:28.182 Test: test_base64_decode ...passed 00:07:28.182 Test: test_base64_urlsafe_encode ...passed 00:07:28.182 Test: test_base64_urlsafe_decode ...passed 00:07:28.182 00:07:28.182 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.182 suites 1 1 n/a 0 0 00:07:28.182 tests 6 6 6 0 0 00:07:28.182 asserts 112 112 112 0 n/a 00:07:28.182 00:07:28.182 Elapsed time = 0.000 seconds 00:07:28.182 23:51:23 
unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:07:28.182 00:07:28.182 00:07:28.182 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.182 http://cunit.sourceforge.net/ 00:07:28.182 00:07:28.182 00:07:28.182 Suite: bit_array 00:07:28.182 Test: test_1bit ...passed 00:07:28.182 Test: test_64bit ...passed 00:07:28.182 Test: test_find ...passed 00:07:28.182 Test: test_resize ...passed 00:07:28.182 Test: test_errors ...passed 00:07:28.182 Test: test_count ...passed 00:07:28.182 Test: test_mask_store_load ...passed 00:07:28.182 Test: test_mask_clear ...passed 00:07:28.182 00:07:28.182 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.182 suites 1 1 n/a 0 0 00:07:28.182 tests 8 8 8 0 0 00:07:28.182 asserts 5075 5075 5075 0 n/a 00:07:28.182 00:07:28.182 Elapsed time = 0.002 seconds 00:07:28.182 23:51:23 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:07:28.182 00:07:28.182 00:07:28.182 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.182 http://cunit.sourceforge.net/ 00:07:28.182 00:07:28.182 00:07:28.182 Suite: cpuset 00:07:28.182 Test: test_cpuset ...passed 00:07:28.182 Test: test_cpuset_parse ...[2024-07-24 23:51:24.007685] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:07:28.182 [2024-07-24 23:51:24.007947] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:07:28.182 [2024-07-24 23:51:24.007996] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:07:28.182 [2024-07-24 23:51:24.008043] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:07:28.182 [2024-07-24 23:51:24.008081] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:07:28.182 [2024-07-24 23:51:24.008115] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:07:28.182 [2024-07-24 23:51:24.008144] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:07:28.182 [2024-07-24 23:51:24.008177] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:07:28.182 passed 00:07:28.182 Test: test_cpuset_fmt ...passed 00:07:28.182 Test: test_cpuset_foreach ...passed 00:07:28.182 00:07:28.182 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.182 suites 1 1 n/a 0 0 00:07:28.182 tests 4 4 4 0 0 00:07:28.182 asserts 90 90 90 0 n/a 00:07:28.182 00:07:28.182 Elapsed time = 0.002 seconds 00:07:28.182 23:51:24 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:07:28.182 00:07:28.182 00:07:28.182 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.182 http://cunit.sourceforge.net/ 00:07:28.182 00:07:28.182 00:07:28.182 Suite: crc16 00:07:28.182 Test: test_crc16_t10dif ...passed 00:07:28.182 Test: test_crc16_t10dif_seed ...passed 00:07:28.182 Test: test_crc16_t10dif_copy ...passed 00:07:28.182 00:07:28.182 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.182 suites 1 1 n/a 0 0 00:07:28.182 tests 3 3 3 0 0 
00:07:28.182 asserts 5 5 5 0 n/a 00:07:28.182 00:07:28.182 Elapsed time = 0.000 seconds 00:07:28.182 23:51:24 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:07:28.443 00:07:28.443 00:07:28.443 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.443 http://cunit.sourceforge.net/ 00:07:28.443 00:07:28.443 00:07:28.443 Suite: crc32_ieee 00:07:28.443 Test: test_crc32_ieee ...passed 00:07:28.443 00:07:28.443 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.443 suites 1 1 n/a 0 0 00:07:28.443 tests 1 1 1 0 0 00:07:28.443 asserts 1 1 1 0 n/a 00:07:28.443 00:07:28.443 Elapsed time = 0.000 seconds 00:07:28.443 23:51:24 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:07:28.443 00:07:28.443 00:07:28.443 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.443 http://cunit.sourceforge.net/ 00:07:28.443 00:07:28.443 00:07:28.443 Suite: crc32c 00:07:28.443 Test: test_crc32c ...passed 00:07:28.443 Test: test_crc32c_nvme ...passed 00:07:28.443 00:07:28.443 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.443 suites 1 1 n/a 0 0 00:07:28.443 tests 2 2 2 0 0 00:07:28.443 asserts 16 16 16 0 n/a 00:07:28.443 00:07:28.443 Elapsed time = 0.000 seconds 00:07:28.443 23:51:24 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:07:28.443 00:07:28.443 00:07:28.443 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.443 http://cunit.sourceforge.net/ 00:07:28.443 00:07:28.443 00:07:28.443 Suite: crc64 00:07:28.443 Test: test_crc64_nvme ...passed 00:07:28.443 00:07:28.443 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.443 suites 1 1 n/a 0 0 00:07:28.443 tests 1 1 1 0 0 00:07:28.443 asserts 4 4 4 0 n/a 00:07:28.443 00:07:28.443 Elapsed time = 0.000 seconds 00:07:28.443 23:51:24 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:07:28.443 00:07:28.443 00:07:28.443 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.443 http://cunit.sourceforge.net/ 00:07:28.443 00:07:28.443 00:07:28.443 Suite: string 00:07:28.443 Test: test_parse_ip_addr ...passed 00:07:28.443 Test: test_str_chomp ...passed 00:07:28.443 Test: test_parse_capacity ...passed 00:07:28.443 Test: test_sprintf_append_realloc ...passed 00:07:28.443 Test: test_strtol ...passed 00:07:28.443 Test: test_strtoll ...passed 00:07:28.443 Test: test_strarray ...passed 00:07:28.443 Test: test_strcpy_replace ...passed 00:07:28.443 00:07:28.443 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.443 suites 1 1 n/a 0 0 00:07:28.443 tests 8 8 8 0 0 00:07:28.443 asserts 161 161 161 0 n/a 00:07:28.443 00:07:28.443 Elapsed time = 0.001 seconds 00:07:28.443 23:51:24 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:07:28.443 00:07:28.443 00:07:28.443 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.443 http://cunit.sourceforge.net/ 00:07:28.443 00:07:28.443 00:07:28.443 Suite: dif 00:07:28.443 Test: dif_generate_and_verify_test ...[2024-07-24 23:51:24.170178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:28.443 [2024-07-24 23:51:24.170590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to 
compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:28.443 [2024-07-24 23:51:24.170920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:28.443 [2024-07-24 23:51:24.171205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:28.443 [2024-07-24 23:51:24.171500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:28.443 [2024-07-24 23:51:24.171767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:28.443 passed 00:07:28.443 Test: dif_disable_check_test ...[2024-07-24 23:51:24.172824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:28.443 [2024-07-24 23:51:24.173126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:28.443 [2024-07-24 23:51:24.173447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:28.443 passed 00:07:28.443 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-24 23:51:24.174561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:07:28.443 [2024-07-24 23:51:24.174903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:07:28.443 [2024-07-24 23:51:24.175208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:07:28.443 [2024-07-24 23:51:24.175526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:07:28.443 [2024-07-24 23:51:24.175854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:28.443 [2024-07-24 23:51:24.176196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:28.443 [2024-07-24 23:51:24.176492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:28.443 [2024-07-24 23:51:24.176816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:28.443 [2024-07-24 23:51:24.177157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:28.443 [2024-07-24 23:51:24.177486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:28.443 [2024-07-24 23:51:24.177837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:28.443 passed 00:07:28.443 Test: dif_apptag_mask_test ...[2024-07-24 23:51:24.178152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, 
Actual=1234 00:07:28.443 [2024-07-24 23:51:24.178439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:28.443 passed 00:07:28.443 Test: dif_sec_8_md_8_error_test ...[2024-07-24 23:51:24.178624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:07:28.443 passed 00:07:28.443 Test: dif_sec_512_md_0_error_test ...passed 00:07:28.443 Test: dif_sec_512_md_16_error_test ...[2024-07-24 23:51:24.178684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:28.443 [2024-07-24 23:51:24.178721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:07:28.443 passed 00:07:28.443 Test: dif_sec_4096_md_0_8_error_test ...[2024-07-24 23:51:24.178759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:07:28.443 [2024-07-24 23:51:24.178828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:28.443 passed 00:07:28.443 Test: dif_sec_4100_md_128_error_test ...passed 00:07:28.443 Test: dif_guard_seed_test ...passed 00:07:28.443 Test: dif_guard_value_test ...[2024-07-24 23:51:24.178870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:28.443 [2024-07-24 23:51:24.178909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:28.443 [2024-07-24 23:51:24.178938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
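The Guard, App Tag, and Ref Tag compares above exercise the 8-byte T10 DIF tuple that dif.c generates and verifies: a 2-byte CRC guard over the data block, a 2-byte application tag, and a 4-byte reference tag checked against the expected LBA. The following is a stand-alone sketch of that verification, assuming the standard T10 field layout and the T10-DIF CRC-16 polynomial 0x8BB7; the struct, function names, and message wording are illustrative, not SPDK's.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct dif_tuple {
        uint16_t guard;   /* CRC-16 of the protected data block */
        uint16_t app_tag; /* opaque application tag */
        uint32_t ref_tag; /* typically the low 32 bits of the LBA */
};

/* Bitwise CRC-16 with the T10-DIF polynomial (0x8BB7, init 0, no reflection). */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
                crc ^= (uint16_t)buf[i] << 8;
                for (int b = 0; b < 8; b++)
                        crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                             : (uint16_t)(crc << 1);
        }
        return crc;
}

/* Returns 0 when all three tags match; each mismatch mirrors one of the
 * "Failed to compare ..." messages in the log above. */
static int dif_verify(const uint8_t *block, size_t len, const struct dif_tuple *dif,
                      uint16_t exp_app_tag, uint32_t exp_lba)
{
        uint16_t guard = crc16_t10dif(block, len);
        if (dif->guard != guard) {
                fprintf(stderr, "Failed to compare Guard: LBA=%x, Expected=%x, Actual=%x\n",
                        exp_lba, guard, dif->guard);
                return -1;
        }
        if (dif->app_tag != exp_app_tag) {
                fprintf(stderr, "Failed to compare App Tag: LBA=%x\n", exp_lba);
                return -1;
        }
        if (dif->ref_tag != exp_lba) {
                fprintf(stderr, "Failed to compare Ref Tag: LBA=%x\n", exp_lba);
                return -1;
        }
        return 0;
}

The dif_ut cases drive both directions: dif_generate_and_verify_test checks that a freshly generated tuple round-trips cleanly, while the neighbouring cases either corrupt a field or misconfigure the DIF context and assert that exactly the matching error path fires.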
00:07:28.443 [2024-07-24 23:51:24.178976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:07:28.443 [2024-07-24 23:51:24.178998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:07:28.443 passed 00:07:28.443 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:07:28.443 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:07:28.444 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:28.444 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:28.444 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:28.444 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:07:28.444 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:28.444 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:28.444 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:07:28.444 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:28.444 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:07:28.444 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:07:28.444 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:28.444 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:28.444 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:28.444 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:28.444 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:28.444 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:28.444 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-24 23:51:24.224220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd44, Actual=fd4c 00:07:28.444 [2024-07-24 23:51:24.226696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe29, Actual=fe21 00:07:28.444 [2024-07-24 23:51:24.229166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.231610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.234062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=55 00:07:28.444 [2024-07-24 23:51:24.236492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=55 00:07:28.444 [2024-07-24 23:51:24.238941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=42ec 00:07:28.444 [2024-07-24 23:51:24.240757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe21, Actual=8c95 00:07:28.444 [2024-07-24 23:51:24.242585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753e5, Actual=1ab753ed 00:07:28.444 [2024-07-24 23:51:24.245047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=93, Expected=38574668, Actual=38574660 00:07:28.444 [2024-07-24 23:51:24.247491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.249958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.252400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80000005d 00:07:28.444 [2024-07-24 23:51:24.254851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80000005d 00:07:28.444 [2024-07-24 23:51:24.257340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=6b05fbd4 00:07:28.444 [2024-07-24 23:51:24.259183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38574660, Actual=6f6eac0b 00:07:28.444 [2024-07-24 23:51:24.261022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a77a8ecc20d3, Actual=a576a7728ecc20d3 00:07:28.444 [2024-07-24 23:51:24.263469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a254837a266, Actual=88010a2d4837a266 00:07:28.444 [2024-07-24 23:51:24.265967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.268417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.270880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8005d 00:07:28.444 [2024-07-24 23:51:24.273324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8005d 00:07:28.444 [2024-07-24 23:51:24.275764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=f8fbc92ab5c1fb20 00:07:28.444 [2024-07-24 23:51:24.277635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d4837a266, Actual=2e374322adf6c76d 00:07:28.444 passed 00:07:28.444 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-24 23:51:24.278602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd44, Actual=fd4c 00:07:28.444 [2024-07-24 23:51:24.278914] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe29, Actual=fe21 00:07:28.444 [2024-07-24 23:51:24.279203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.279481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.279762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare 
Ref Tag: LBA=88, Expected=58, Actual=50 00:07:28.444 [2024-07-24 23:51:24.280054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:07:28.444 [2024-07-24 23:51:24.280352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=42ec 00:07:28.444 [2024-07-24 23:51:24.280562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8c95 00:07:28.444 [2024-07-24 23:51:24.280783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e5, Actual=1ab753ed 00:07:28.444 [2024-07-24 23:51:24.281120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574668, Actual=38574660 00:07:28.444 [2024-07-24 23:51:24.281414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.281703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.282029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:07:28.444 [2024-07-24 23:51:24.282318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:07:28.444 [2024-07-24 23:51:24.282619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6b05fbd4 00:07:28.444 [2024-07-24 23:51:24.282849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=6f6eac0b 00:07:28.444 [2024-07-24 23:51:24.283074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77a8ecc20d3, Actual=a576a7728ecc20d3 00:07:28.444 [2024-07-24 23:51:24.283371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a254837a266, Actual=88010a2d4837a266 00:07:28.444 [2024-07-24 23:51:24.283658] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.283962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.284263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:07:28.444 [2024-07-24 23:51:24.284605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:07:28.444 [2024-07-24 23:51:24.284928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f8fbc92ab5c1fb20 00:07:28.444 [2024-07-24 23:51:24.285152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=2e374322adf6c76d 00:07:28.444 passed 00:07:28.444 Test: 
dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-24 23:51:24.285396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd44, Actual=fd4c 00:07:28.444 [2024-07-24 23:51:24.285689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe29, Actual=fe21 00:07:28.444 [2024-07-24 23:51:24.286014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.286288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.286579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:07:28.444 [2024-07-24 23:51:24.286876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:07:28.444 [2024-07-24 23:51:24.287162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=42ec 00:07:28.444 [2024-07-24 23:51:24.287372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8c95 00:07:28.444 [2024-07-24 23:51:24.287580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e5, Actual=1ab753ed 00:07:28.444 [2024-07-24 23:51:24.287879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574668, Actual=38574660 00:07:28.444 [2024-07-24 23:51:24.288173] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.288468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.444 [2024-07-24 23:51:24.288752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:07:28.444 [2024-07-24 23:51:24.289060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:07:28.444 [2024-07-24 23:51:24.289352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6b05fbd4 00:07:28.444 [2024-07-24 23:51:24.289568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=6f6eac0b 00:07:28.445 [2024-07-24 23:51:24.289793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77a8ecc20d3, Actual=a576a7728ecc20d3 00:07:28.445 [2024-07-24 23:51:24.290094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a254837a266, Actual=88010a2d4837a266 00:07:28.445 [2024-07-24 23:51:24.290381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.290662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.290972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:07:28.445 [2024-07-24 23:51:24.291248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:07:28.445 [2024-07-24 23:51:24.291539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f8fbc92ab5c1fb20 00:07:28.445 [2024-07-24 23:51:24.291767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=2e374322adf6c76d 00:07:28.445 passed 00:07:28.445 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-24 23:51:24.292030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd44, Actual=fd4c 00:07:28.445 [2024-07-24 23:51:24.292318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe29, Actual=fe21 00:07:28.445 [2024-07-24 23:51:24.292605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.292899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.293216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:07:28.445 [2024-07-24 23:51:24.293501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:07:28.445 [2024-07-24 23:51:24.293777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=42ec 00:07:28.445 [2024-07-24 23:51:24.294003] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8c95 00:07:28.445 [2024-07-24 23:51:24.294222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e5, Actual=1ab753ed 00:07:28.445 [2024-07-24 23:51:24.294507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574668, Actual=38574660 00:07:28.445 [2024-07-24 23:51:24.294817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.295101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.295392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:07:28.445 [2024-07-24 23:51:24.295674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:07:28.445 [2024-07-24 23:51:24.295961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6b05fbd4 
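[Editor's note] The Guard mismatches recorded above come in two flavors: stored-versus-reference values that differ by a single injected bit (for example Expected=38574668, Actual=38574660), and reference-versus-recomputed values that differ wholesale (for example Expected=fd4c, Actual=42ec) because the guard was recomputed over deliberately corrupted data. For the classic 512-byte data + 8-byte metadata protection-information format the guard is the 16-bit T10-DIF CRC (polynomial 0x8BB7); the 32- and 64-bit guard values in the log belong to the wider PI formats. The sketch below is a minimal, unoptimized illustration of that recompute-and-compare check, not SPDK's actual implementation in lib/util/dif.c:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bit-serial CRC16 using the T10-DIF polynomial 0x8BB7
 * (initial value 0, no reflection, no final XOR). */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
					     : (uint16_t)(crc << 1);
		}
	}
	return crc;
}

/* Recompute the guard over one block and compare it to the stored value. */
static bool verify_guard(const uint8_t *block, size_t len,
			 uint16_t stored_guard, uint64_t lba)
{
	uint16_t actual = crc16_t10dif(block, len);

	if (actual != stored_guard) {
		fprintf(stderr,
			"Failed to compare Guard: LBA=%llu, Expected=%04x, Actual=%04x\n",
			(unsigned long long)lba, stored_guard, actual);
		return false;
	}
	return true;
}
```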
00:07:28.445 [2024-07-24 23:51:24.296180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=6f6eac0b 00:07:28.445 [2024-07-24 23:51:24.296388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77a8ecc20d3, Actual=a576a7728ecc20d3 00:07:28.445 [2024-07-24 23:51:24.296669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a254837a266, Actual=88010a2d4837a266 00:07:28.445 [2024-07-24 23:51:24.296987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.297271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.297575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:07:28.445 [2024-07-24 23:51:24.297872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:07:28.445 [2024-07-24 23:51:24.298164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f8fbc92ab5c1fb20 00:07:28.445 [2024-07-24 23:51:24.298373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=2e374322adf6c76d 00:07:28.445 passed 00:07:28.445 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-24 23:51:24.298620] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd44, Actual=fd4c 00:07:28.445 [2024-07-24 23:51:24.298918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe29, Actual=fe21 00:07:28.445 [2024-07-24 23:51:24.299208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.299489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.299782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:07:28.445 [2024-07-24 23:51:24.300083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:07:28.445 [2024-07-24 23:51:24.300378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=42ec 00:07:28.445 [2024-07-24 23:51:24.300590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8c95 00:07:28.445 passed 00:07:28.445 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-24 23:51:24.300868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e5, Actual=1ab753ed 00:07:28.445 [2024-07-24 23:51:24.301162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=38574668, Actual=38574660 00:07:28.445 [2024-07-24 23:51:24.301456] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.301737] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.302031] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:07:28.445 [2024-07-24 23:51:24.302316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:07:28.445 [2024-07-24 23:51:24.302612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6b05fbd4 00:07:28.445 [2024-07-24 23:51:24.302833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=6f6eac0b 00:07:28.445 [2024-07-24 23:51:24.303081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77a8ecc20d3, Actual=a576a7728ecc20d3 00:07:28.445 [2024-07-24 23:51:24.303368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a254837a266, Actual=88010a2d4837a266 00:07:28.445 [2024-07-24 23:51:24.303661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.303945] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.304225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:07:28.445 [2024-07-24 23:51:24.304506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:07:28.445 [2024-07-24 23:51:24.304792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f8fbc92ab5c1fb20 00:07:28.445 [2024-07-24 23:51:24.305040] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=2e374322adf6c76d 00:07:28.445 passed 00:07:28.445 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-24 23:51:24.305300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd44, Actual=fd4c 00:07:28.445 [2024-07-24 23:51:24.305587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe29, Actual=fe21 00:07:28.445 [2024-07-24 23:51:24.305893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.306182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.306470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: 
LBA=88, Expected=58, Actual=50 00:07:28.445 [2024-07-24 23:51:24.306757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:07:28.445 [2024-07-24 23:51:24.307047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=42ec 00:07:28.445 passed 00:07:28.445 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-24 23:51:24.307255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8c95 00:07:28.445 [2024-07-24 23:51:24.307505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e5, Actual=1ab753ed 00:07:28.445 [2024-07-24 23:51:24.307815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574668, Actual=38574660 00:07:28.445 [2024-07-24 23:51:24.308105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.308396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.445 [2024-07-24 23:51:24.308682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:07:28.445 [2024-07-24 23:51:24.308993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:07:28.445 [2024-07-24 23:51:24.309280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6b05fbd4 00:07:28.445 [2024-07-24 23:51:24.309485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=6f6eac0b 00:07:28.445 [2024-07-24 23:51:24.309733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77a8ecc20d3, Actual=a576a7728ecc20d3 00:07:28.446 [2024-07-24 23:51:24.310036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a254837a266, Actual=88010a2d4837a266 00:07:28.446 [2024-07-24 23:51:24.310319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.446 [2024-07-24 23:51:24.310604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:07:28.446 [2024-07-24 23:51:24.310894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:07:28.446 [2024-07-24 23:51:24.311184] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:07:28.446 [2024-07-24 23:51:24.311477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f8fbc92ab5c1fb20 00:07:28.705 [2024-07-24 23:51:24.311697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, 
Actual=2e374322adf6c76d 00:07:28.705 passed 00:07:28.705 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:07:28.705 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:28.705 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:28.705 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:28.705 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:28.705 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:28.705 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:28.705 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:28.706 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:28.706 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-24 23:51:24.356289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd44, Actual=fd4c 00:07:28.706 [2024-07-24 23:51:24.357452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=22ab, Actual=22a3 00:07:28.706 [2024-07-24 23:51:24.358573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.359672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.360786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=55 00:07:28.706 [2024-07-24 23:51:24.361923] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=55 00:07:28.706 [2024-07-24 23:51:24.363029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=42ec 00:07:28.706 [2024-07-24 23:51:24.364347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=83f5 00:07:28.706 [2024-07-24 23:51:24.365468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753e5, Actual=1ab753ed 00:07:28.706 [2024-07-24 23:51:24.366583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=b5935d33, Actual=b5935d3b 00:07:28.706 [2024-07-24 23:51:24.367848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.368949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.370095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80000005d 00:07:28.706 [2024-07-24 23:51:24.371190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80000005d 00:07:28.706 [2024-07-24 23:51:24.372315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=6b05fbd4 00:07:28.706 [2024-07-24 23:51:24.373425] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=d048ff3 00:07:28.706 [2024-07-24 23:51:24.374680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a77a8ecc20d3, Actual=a576a7728ecc20d3 00:07:28.706 [2024-07-24 23:51:24.375791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=383413d4961f2eeb, Actual=383413dc961f2eeb 00:07:28.706 [2024-07-24 23:51:24.376913] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.378043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.379153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8005d 00:07:28.706 [2024-07-24 23:51:24.380255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8005d 00:07:28.706 [2024-07-24 23:51:24.381513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=f8fbc92ab5c1fb20 00:07:28.706 passed 00:07:28.706 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-24 23:51:24.382611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=1bfca6f78a6ad43b 00:07:28.706 [2024-07-24 23:51:24.382967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd44, Actual=fd4c 00:07:28.706 [2024-07-24 23:51:24.383244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=412a, Actual=4122 00:07:28.706 [2024-07-24 23:51:24.383541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.383820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.384094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=51 00:07:28.706 [2024-07-24 23:51:24.384363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=51 00:07:28.706 [2024-07-24 23:51:24.384613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=42ec 00:07:28.706 [2024-07-24 23:51:24.384896] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=e074 00:07:28.706 [2024-07-24 23:51:24.385163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753e5, Actual=1ab753ed 00:07:28.706 [2024-07-24 23:51:24.385430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=741368c6, Actual=741368ce 00:07:28.706 [2024-07-24 23:51:24.385700] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.385991] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.386267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:07:28.706 [2024-07-24 23:51:24.386538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:07:28.706 [2024-07-24 23:51:24.386813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=6b05fbd4 00:07:28.706 [2024-07-24 23:51:24.387080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=cc84ba06 00:07:28.706 [2024-07-24 23:51:24.387345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a77a8ecc20d3, Actual=a576a7728ecc20d3 00:07:28.706 [2024-07-24 23:51:24.387608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cdd61c47b0f0aab4, Actual=cdd61c4fb0f0aab4 00:07:28.706 [2024-07-24 23:51:24.387884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.388138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.388410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:07:28.706 [2024-07-24 23:51:24.388680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:07:28.706 [2024-07-24 23:51:24.388977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=f8fbc92ab5c1fb20 00:07:28.706 passed 00:07:28.706 Test: dix_sec_0_md_8_error ...passed 00:07:28.706 Test: dix_sec_512_md_0_error ...passed 00:07:28.706 Test: dix_sec_512_md_16_error ...passed 00:07:28.706 Test: dix_sec_4096_md_0_8_error ...passed 00:07:28.706 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-24 23:51:24.389233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=ee1ea964ac855064 00:07:28.706 [2024-07-24 23:51:24.389272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:07:28.706 [2024-07-24 23:51:24.389301] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
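[Editor's note] The dix_sec_*_error cases around this point exercise spdk_dif_ctx_init's parameter validation rather than the verify path; each distinct message (zero data block size, metadata smaller than the 8-byte DIF tuple, data block size not a 4kB multiple) corresponds to one rejected configuration. A rough sketch of that kind of up-front validation follows; the helper name and the exact set of conditions are hypothetical, and the real checks live in spdk_dif_ctx_init() in lib/util/dif.c:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DIF_TUPLE_SIZE 8u	/* classic 8-byte protection-information tuple */

/* Illustrative up-front validation producing the messages seen in the log.
 * Hypothetical function and parameter names; conditions are a simplified
 * reading of the errors above, not SPDK's exact logic. */
static bool dif_ctx_params_ok(uint32_t data_block_size, uint32_t md_size,
			      bool md_interleaved)
{
	if (data_block_size == 0) {
		fprintf(stderr, "Zero data block size is not allowed\n");
		return false;
	}
	if (md_size < DIF_TUPLE_SIZE) {
		fprintf(stderr, "Metadata size is smaller than DIF size.\n");
		return false;
	}
	if (!md_interleaved && (data_block_size % 4096) != 0) {
		fprintf(stderr, "Data block size should be a multiple of 4kB\n");
		return false;
	}
	return true;
}
```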
00:07:28.706 [2024-07-24 23:51:24.389322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:07:28.706 [2024-07-24 23:51:24.389347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:07:28.706 [2024-07-24 23:51:24.389393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:28.706 [2024-07-24 23:51:24.389415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:28.706 [2024-07-24 23:51:24.389432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:28.706 [2024-07-24 23:51:24.389453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:28.706 passed 00:07:28.706 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:28.706 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:28.706 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:28.706 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:28.706 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:28.706 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:28.706 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:28.706 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:28.706 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-24 23:51:24.433972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd44, Actual=fd4c 00:07:28.706 [2024-07-24 23:51:24.435231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=22ab, Actual=22a3 00:07:28.706 [2024-07-24 23:51:24.436340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.437593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.438700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=55 00:07:28.706 [2024-07-24 23:51:24.439792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=55 00:07:28.706 [2024-07-24 23:51:24.440921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=42ec 00:07:28.706 [2024-07-24 23:51:24.442050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=83f5 00:07:28.706 [2024-07-24 23:51:24.443156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753e5, Actual=1ab753ed 00:07:28.706 [2024-07-24 23:51:24.444242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=b5935d33, Actual=b5935d3b 00:07:28.706 [2024-07-24 23:51:24.445366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=93, Expected=88, Actual=80 00:07:28.706 [2024-07-24 23:51:24.446454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.707 [2024-07-24 23:51:24.447698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80000005d 00:07:28.707 [2024-07-24 23:51:24.448810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=80000005d 00:07:28.707 [2024-07-24 23:51:24.449922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=6b05fbd4 00:07:28.707 [2024-07-24 23:51:24.451008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=d048ff3 00:07:28.707 [2024-07-24 23:51:24.452122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a77a8ecc20d3, Actual=a576a7728ecc20d3 00:07:28.707 [2024-07-24 23:51:24.453274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=383413d4961f2eeb, Actual=383413dc961f2eeb 00:07:28.707 [2024-07-24 23:51:24.454372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.707 [2024-07-24 23:51:24.455476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=80 00:07:28.707 [2024-07-24 23:51:24.456572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8005d 00:07:28.707 [2024-07-24 23:51:24.457852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8005d 00:07:28.707 [2024-07-24 23:51:24.458662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=f8fbc92ab5c1fb20 00:07:28.707 passed 00:07:28.707 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-24 23:51:24.459566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=1bfca6f78a6ad43b 00:07:28.707 [2024-07-24 23:51:24.460041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd44, Actual=fd4c 00:07:28.707 [2024-07-24 23:51:24.460382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=412a, Actual=4122 00:07:28.707 [2024-07-24 23:51:24.460578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.707 [2024-07-24 23:51:24.460785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.707 [2024-07-24 23:51:24.461087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=51 00:07:28.707 [2024-07-24 23:51:24.461334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=51 
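[Editor's note] Every injected failure in these inject_* cases differs from the reference field by exactly one flipped bit: 0xfd44 vs 0xfd4c, 0x88 vs 0x80, and here 0x59 vs 0x51 are all bit 3, while 0x59 vs 0x800000059 is bit 35. That is the point of the tests: corrupt one bit in the guard, app tag, or ref tag and confirm the verify pass reports it. A minimal sketch of that inject-and-detect pattern, using hypothetical helper names rather than SPDK's API:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Corrupt a stored protection-information field by flipping one bit. */
static uint64_t inject_bit(uint64_t field, unsigned int bit)
{
	return field ^ (1ULL << bit);
}

/* Detection is a plain comparison against the reference value. */
static bool check_field(const char *name, uint64_t expected, uint64_t actual,
			uint64_t lba)
{
	if (expected != actual) {
		fprintf(stderr,
			"Failed to compare %s: LBA=%llu, Expected=%llx, Actual=%llx\n",
			name, (unsigned long long)lba,
			(unsigned long long)expected,
			(unsigned long long)actual);
		return false;
	}
	return true;
}

int main(void)
{
	uint64_t ref_tag = 0x59;			/* value taken from the log */
	uint64_t corrupted = inject_bit(ref_tag, 3);	/* 0x59 -> 0x51, as at LBA=89 */

	check_field("Ref Tag", ref_tag, corrupted, 89);
	return 0;
}
```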
00:07:28.707 [2024-07-24 23:51:24.461545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=42ec 00:07:28.707 [2024-07-24 23:51:24.461743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=e074 00:07:28.707 [2024-07-24 23:51:24.461937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753e5, Actual=1ab753ed 00:07:28.707 [2024-07-24 23:51:24.462134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=741368c6, Actual=741368ce 00:07:28.707 [2024-07-24 23:51:24.462327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.707 [2024-07-24 23:51:24.462513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.707 [2024-07-24 23:51:24.462709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:07:28.707 [2024-07-24 23:51:24.462916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:07:28.707 [2024-07-24 23:51:24.463130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=6b05fbd4 00:07:28.707 [2024-07-24 23:51:24.463323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=cc84ba06 00:07:28.707 [2024-07-24 23:51:24.463510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a77a8ecc20d3, Actual=a576a7728ecc20d3 00:07:28.707 [2024-07-24 23:51:24.463690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cdd61c47b0f0aab4, Actual=cdd61c4fb0f0aab4 00:07:28.707 [2024-07-24 23:51:24.463909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.707 [2024-07-24 23:51:24.464098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:07:28.707 [2024-07-24 23:51:24.464306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:07:28.707 [2024-07-24 23:51:24.464494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:07:28.707 [2024-07-24 23:51:24.464676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=f8fbc92ab5c1fb20 00:07:28.707 passed 00:07:28.707 Test: set_md_interleave_iovs_test ...[2024-07-24 23:51:24.464876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=ee1ea964ac855064 00:07:28.707 passed 00:07:28.707 Test: set_md_interleave_iovs_split_test ...passed 00:07:28.707 Test: dif_generate_stream_pi_16_test ...passed 00:07:28.707 Test: dif_generate_stream_test ...passed 00:07:28.707 Test: 
set_md_interleave_iovs_alignment_test ...passed 00:07:28.707 Test: dif_generate_split_test ...passed 00:07:28.707 Test: set_md_interleave_iovs_multi_segments_test ...[2024-07-24 23:51:24.470635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1857:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:07:28.707 passed 00:07:28.707 Test: dif_verify_split_test ...passed 00:07:28.707 Test: dif_verify_stream_multi_segments_test ...passed 00:07:28.707 Test: update_crc32c_pi_16_test ...passed 00:07:28.707 Test: update_crc32c_test ...passed 00:07:28.707 Test: dif_update_crc32c_split_test ...passed 00:07:28.707 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:07:28.707 Test: get_range_with_md_test ...passed 00:07:28.707 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:07:28.707 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:07:28.707 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:28.707 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:07:28.707 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:07:28.707 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:28.707 Test: dif_generate_and_verify_unmap_test ...passed 00:07:28.707 Test: dif_pi_format_check_test ...passed 00:07:28.707 Test: dif_type_check_test ...passed 00:07:28.707 00:07:28.707 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.707 suites 1 1 n/a 0 0 00:07:28.707 tests 86 86 86 0 0 00:07:28.707 asserts 3605 3605 3605 0 n/a 00:07:28.707 00:07:28.707 Elapsed time = 0.334 seconds 00:07:28.707 23:51:24 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:07:28.707 00:07:28.707 00:07:28.707 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.707 http://cunit.sourceforge.net/ 00:07:28.707 00:07:28.707 00:07:28.707 Suite: iov 00:07:28.707 Test: test_single_iov ...passed 00:07:28.707 Test: test_simple_iov ...passed 00:07:28.707 Test: test_complex_iov ...passed 00:07:28.707 Test: test_iovs_to_buf ...passed 00:07:28.707 Test: test_buf_to_iovs ...passed 00:07:28.707 Test: test_memset ...passed 00:07:28.707 Test: test_iov_one ...passed 00:07:28.707 Test: test_iov_xfer ...passed 00:07:28.707 00:07:28.707 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.707 suites 1 1 n/a 0 0 00:07:28.707 tests 8 8 8 0 0 00:07:28.707 asserts 156 156 156 0 n/a 00:07:28.707 00:07:28.707 Elapsed time = 0.000 seconds 00:07:28.707 23:51:24 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:07:28.707 00:07:28.707 00:07:28.707 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.707 http://cunit.sourceforge.net/ 00:07:28.707 00:07:28.707 00:07:28.707 Suite: math 00:07:28.707 Test: test_serial_number_arithmetic ...passed 00:07:28.707 Suite: erase 00:07:28.707 Test: test_memset_s ...passed 00:07:28.707 00:07:28.707 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.707 suites 2 2 n/a 0 0 00:07:28.707 tests 2 2 2 0 0 00:07:28.707 asserts 18 18 18 0 n/a 00:07:28.707 00:07:28.707 Elapsed time = 0.000 seconds 00:07:28.707 23:51:24 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:07:28.966 00:07:28.967 00:07:28.967 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.967 http://cunit.sourceforge.net/ 00:07:28.967 00:07:28.967 
00:07:28.967 Suite: pipe 00:07:28.967 Test: test_create_destroy ...passed 00:07:28.967 Test: test_write_get_buffer ...passed 00:07:28.967 Test: test_write_advance ...passed 00:07:28.967 Test: test_read_get_buffer ...passed 00:07:28.967 Test: test_read_advance ...passed 00:07:28.967 Test: test_data ...passed 00:07:28.967 00:07:28.967 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.967 suites 1 1 n/a 0 0 00:07:28.967 tests 6 6 6 0 0 00:07:28.967 asserts 251 251 251 0 n/a 00:07:28.967 00:07:28.967 Elapsed time = 0.000 seconds 00:07:28.967 23:51:24 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:07:28.967 00:07:28.967 00:07:28.967 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.967 http://cunit.sourceforge.net/ 00:07:28.967 00:07:28.967 00:07:28.967 Suite: xor 00:07:28.967 Test: test_xor_gen ...passed 00:07:28.967 00:07:28.967 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.967 suites 1 1 n/a 0 0 00:07:28.967 tests 1 1 1 0 0 00:07:28.967 asserts 17 17 17 0 n/a 00:07:28.967 00:07:28.967 Elapsed time = 0.007 seconds 00:07:28.967 00:07:28.967 real 0m0.672s 00:07:28.967 user 0m0.468s 00:07:28.967 sys 0m0.201s 00:07:28.967 23:51:24 unittest.unittest_util -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.967 23:51:24 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:07:28.967 ************************************ 00:07:28.967 END TEST unittest_util 00:07:28.967 ************************************ 00:07:28.967 23:51:24 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:28.967 23:51:24 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:28.967 23:51:24 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.967 23:51:24 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.967 23:51:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:28.967 ************************************ 00:07:28.967 START TEST unittest_vhost 00:07:28.967 ************************************ 00:07:28.967 23:51:24 unittest.unittest_vhost -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:28.967 00:07:28.967 00:07:28.967 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.967 http://cunit.sourceforge.net/ 00:07:28.967 00:07:28.967 00:07:28.967 Suite: vhost_suite 00:07:28.967 Test: desc_to_iov_test ...passed 00:07:28.967 Test: create_controller_test ...[2024-07-24 23:51:24.690600] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:07:28.967 [2024-07-24 23:51:24.696298] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:28.967 [2024-07-24 23:51:24.696433] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:07:28.967 [2024-07-24 23:51:24.696587] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:28.967 [2024-07-24 23:51:24.696676] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:07:28.967 [2024-07-24 
23:51:24.696720] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:07:28.967 [2024-07-24 23:51:24.697459] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:07:28.967 passed 00:07:28.967 Test: session_find_by_vid_test ...[2024-07-24 23:51:24.698991] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:07:28.967 passed 00:07:28.967 Test: remove_controller_test ...passed 00:07:28.967 Test: vq_avail_ring_get_test ...passed 00:07:28.967 Test: vq_packed_ring_test ...passed 00:07:28.967 Test: vhost_blk_construct_test ...[2024-07-24 23:51:24.702440] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:07:28.967 passed 00:07:28.967 00:07:28.967 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.967 suites 1 1 n/a 0 0 00:07:28.967 tests 7 7 7 0 0 00:07:28.967 asserts 147 147 147 0 n/a 00:07:28.967 00:07:28.967 Elapsed time = 0.018 seconds 00:07:28.967 00:07:28.967 real 0m0.057s 00:07:28.967 user 0m0.035s 00:07:28.967 sys 0m0.022s 00:07:28.967 ************************************ 00:07:28.967 END TEST unittest_vhost 00:07:28.967 ************************************ 00:07:28.967 23:51:24 unittest.unittest_vhost -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.967 23:51:24 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:07:28.967 23:51:24 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:28.967 23:51:24 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.967 23:51:24 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.967 23:51:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:28.967 ************************************ 00:07:28.967 START TEST unittest_dma 00:07:28.967 ************************************ 00:07:28.967 23:51:24 unittest.unittest_dma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:28.967 00:07:28.967 00:07:28.967 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.967 http://cunit.sourceforge.net/ 00:07:28.967 00:07:28.967 00:07:28.967 Suite: dma_suite 00:07:28.967 Test: test_dma ...passed 00:07:28.967 00:07:28.967 [2024-07-24 23:51:24.796080] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:07:28.967 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.967 suites 1 1 n/a 0 0 00:07:28.967 tests 1 1 1 0 0 00:07:28.967 asserts 54 54 54 0 n/a 00:07:28.967 00:07:28.967 Elapsed time = 0.001 seconds 00:07:28.967 00:07:28.967 real 0m0.026s 00:07:28.967 user 0m0.011s 00:07:28.967 sys 0m0.016s 00:07:28.967 23:51:24 
unittest.unittest_dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.967 ************************************ 00:07:28.967 END TEST unittest_dma 00:07:28.967 ************************************ 00:07:28.967 23:51:24 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:07:29.227 23:51:24 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:07:29.227 23:51:24 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.227 23:51:24 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.227 23:51:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:29.227 ************************************ 00:07:29.227 START TEST unittest_init 00:07:29.227 ************************************ 00:07:29.227 23:51:24 unittest.unittest_init -- common/autotest_common.sh@1125 -- # unittest_init 00:07:29.227 23:51:24 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:07:29.227 00:07:29.227 00:07:29.227 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.227 http://cunit.sourceforge.net/ 00:07:29.227 00:07:29.227 00:07:29.227 Suite: subsystem_suite 00:07:29.227 Test: subsystem_sort_test_depends_on_single ...passed 00:07:29.227 Test: subsystem_sort_test_depends_on_multiple ...passed 00:07:29.227 Test: subsystem_sort_test_missing_dependency ...passed 00:07:29.227 00:07:29.227 [2024-07-24 23:51:24.874902] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:07:29.227 [2024-07-24 23:51:24.875135] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:07:29.227 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.227 suites 1 1 n/a 0 0 00:07:29.227 tests 3 3 3 0 0 00:07:29.227 asserts 20 20 20 0 n/a 00:07:29.227 00:07:29.227 Elapsed time = 0.000 seconds 00:07:29.227 00:07:29.227 real 0m0.040s 00:07:29.227 user 0m0.020s 00:07:29.227 sys 0m0.021s 00:07:29.227 23:51:24 unittest.unittest_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.227 ************************************ 00:07:29.227 END TEST unittest_init 00:07:29.227 ************************************ 00:07:29.227 23:51:24 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:07:29.227 23:51:24 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:07:29.227 23:51:24 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.227 23:51:24 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.227 23:51:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:29.227 ************************************ 00:07:29.227 START TEST unittest_keyring 00:07:29.227 ************************************ 00:07:29.227 23:51:24 unittest.unittest_keyring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:07:29.227 00:07:29.227 00:07:29.227 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.227 http://cunit.sourceforge.net/ 00:07:29.227 00:07:29.227 00:07:29.227 Suite: keyring 00:07:29.227 Test: test_keyring_add_remove ...[2024-07-24 23:51:24.963781] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' apassed 00:07:29.227 Test: test_keyring_get_put ...passed 00:07:29.227 00:07:29.227 Run Summary: 
Type Total Ran Passed Failed Inactive 00:07:29.227 suites 1 1 n/a 0 0 00:07:29.227 tests 2 2 2 0 0 00:07:29.227 asserts 44 44 44 0 n/a 00:07:29.227 00:07:29.227 Elapsed time = 0.001 seconds 00:07:29.227 lready exists 00:07:29.227 [2024-07-24 23:51:24.964325] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:07:29.227 [2024-07-24 23:51:24.964398] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:07:29.227 00:07:29.227 real 0m0.031s 00:07:29.227 user 0m0.016s 00:07:29.227 sys 0m0.015s 00:07:29.227 23:51:24 unittest.unittest_keyring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.227 23:51:24 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:07:29.227 ************************************ 00:07:29.227 END TEST unittest_keyring 00:07:29.227 ************************************ 00:07:29.227 23:51:25 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:07:29.227 23:51:25 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:29.227 23:51:25 unittest -- unit/unittest.sh@293 -- # hostname 00:07:29.227 23:51:25 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:29.486 geninfo: WARNING: invalid characters removed from testname! 00:08:08.274 23:51:58 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:08.274 23:52:03 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:10.806 23:52:06 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:14.093 23:52:09 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:16.626 23:52:12 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:19.163 23:52:15 unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:22.452 23:52:17 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:24.356 23:52:19 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:24.356 23:52:19 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:24.925 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:24.925 Found 326 entries. 00:08:24.925 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:24.925 Writing .css and .png files. 00:08:24.925 Generating output. 
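For reference, the coverage steps driven by unit/unittest.sh@293-302 above reduce to a capture/merge/filter/render flow. The following is a condensed sketch, not the script itself: LCOV_OPTS is an assumed shorthand for the repeated --rc and --no-external flags, and OUT for /home/vagrant/spdk_repo/spdk/../output/ut_coverage.

    # Condensed sketch of the coverage flow logged above (LCOV_OPTS and OUT are shorthands).
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
    OUT=/home/vagrant/spdk_repo/spdk/../output/ut_coverage

    lcov $LCOV_OPTS -d . -c -t "$(hostname)" -o $OUT/ut_cov_test.info                            # capture post-test counters
    lcov $LCOV_OPTS -a $OUT/ut_cov_base.info -a $OUT/ut_cov_test.info -o $OUT/ut_cov_total.info  # merge with the baseline capture
    lcov $LCOV_OPTS -a $OUT/ut_cov_total.info -o $OUT/ut_cov_unit.info
    for excl in app dpdk examples lib/vhost/rte_vhost test; do                                   # strip trees that should not count
        lcov $LCOV_OPTS -r $OUT/ut_cov_unit.info "/home/vagrant/spdk_repo/spdk/$excl/*" -o $OUT/ut_cov_unit.info
    done
    genhtml $OUT/ut_cov_unit.info --output-directory $OUT                                        # renders the per-file report below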
00:08:24.925 Processing file include/linux/virtio_ring.h 00:08:25.183 Processing file include/spdk/base64.h 00:08:25.183 Processing file include/spdk/util.h 00:08:25.183 Processing file include/spdk/bdev_module.h 00:08:25.183 Processing file include/spdk/mmio.h 00:08:25.183 Processing file include/spdk/histogram_data.h 00:08:25.183 Processing file include/spdk/nvme.h 00:08:25.183 Processing file include/spdk/trace.h 00:08:25.183 Processing file include/spdk/nvmf_transport.h 00:08:25.183 Processing file include/spdk/nvme_spec.h 00:08:25.183 Processing file include/spdk/endian.h 00:08:25.183 Processing file include/spdk/thread.h 00:08:25.443 Processing file include/spdk_internal/rdma_utils.h 00:08:25.443 Processing file include/spdk_internal/sock.h 00:08:25.443 Processing file include/spdk_internal/virtio.h 00:08:25.443 Processing file include/spdk_internal/utf.h 00:08:25.443 Processing file include/spdk_internal/nvme_tcp.h 00:08:25.443 Processing file include/spdk_internal/sgl.h 00:08:25.443 Processing file lib/accel/accel_sw.c 00:08:25.443 Processing file lib/accel/accel.c 00:08:25.443 Processing file lib/accel/accel_rpc.c 00:08:25.702 Processing file lib/bdev/scsi_nvme.c 00:08:25.702 Processing file lib/bdev/bdev_rpc.c 00:08:25.702 Processing file lib/bdev/bdev_zone.c 00:08:25.702 Processing file lib/bdev/bdev.c 00:08:25.702 Processing file lib/bdev/part.c 00:08:25.961 Processing file lib/blob/zeroes.c 00:08:25.961 Processing file lib/blob/request.c 00:08:25.961 Processing file lib/blob/blob_bs_dev.c 00:08:25.961 Processing file lib/blob/blobstore.h 00:08:25.961 Processing file lib/blob/blobstore.c 00:08:26.221 Processing file lib/blobfs/tree.c 00:08:26.221 Processing file lib/blobfs/blobfs.c 00:08:26.221 Processing file lib/conf/conf.c 00:08:26.221 Processing file lib/dma/dma.c 00:08:26.480 Processing file lib/env_dpdk/pci_vmd.c 00:08:26.480 Processing file lib/env_dpdk/pci_virtio.c 00:08:26.480 Processing file lib/env_dpdk/pci_ioat.c 00:08:26.480 Processing file lib/env_dpdk/sigbus_handler.c 00:08:26.480 Processing file lib/env_dpdk/env.c 00:08:26.480 Processing file lib/env_dpdk/init.c 00:08:26.480 Processing file lib/env_dpdk/pci_event.c 00:08:26.480 Processing file lib/env_dpdk/threads.c 00:08:26.480 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:26.480 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:26.480 Processing file lib/env_dpdk/pci.c 00:08:26.480 Processing file lib/env_dpdk/pci_dpdk.c 00:08:26.480 Processing file lib/env_dpdk/pci_idxd.c 00:08:26.480 Processing file lib/env_dpdk/memory.c 00:08:26.739 Processing file lib/event/scheduler_static.c 00:08:26.739 Processing file lib/event/app_rpc.c 00:08:26.739 Processing file lib/event/log_rpc.c 00:08:26.739 Processing file lib/event/reactor.c 00:08:26.739 Processing file lib/event/app.c 00:08:27.312 Processing file lib/ftl/ftl_p2l.c 00:08:27.313 Processing file lib/ftl/ftl_band.h 00:08:27.313 Processing file lib/ftl/ftl_writer.h 00:08:27.313 Processing file lib/ftl/ftl_band_ops.c 00:08:27.313 Processing file lib/ftl/ftl_l2p_flat.c 00:08:27.313 Processing file lib/ftl/ftl_core.c 00:08:27.313 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:27.313 Processing file lib/ftl/ftl_debug.h 00:08:27.313 Processing file lib/ftl/ftl_layout.c 00:08:27.313 Processing file lib/ftl/ftl_sb.c 00:08:27.313 Processing file lib/ftl/ftl_debug.c 00:08:27.313 Processing file lib/ftl/ftl_core.h 00:08:27.313 Processing file lib/ftl/ftl_writer.c 00:08:27.313 Processing file lib/ftl/ftl_reloc.c 00:08:27.313 Processing file lib/ftl/ftl_nv_cache.h 
00:08:27.313 Processing file lib/ftl/ftl_trace.c 00:08:27.313 Processing file lib/ftl/ftl_io.h 00:08:27.313 Processing file lib/ftl/ftl_l2p_cache.c 00:08:27.313 Processing file lib/ftl/ftl_l2p.c 00:08:27.313 Processing file lib/ftl/ftl_band.c 00:08:27.313 Processing file lib/ftl/ftl_rq.c 00:08:27.313 Processing file lib/ftl/ftl_nv_cache.c 00:08:27.313 Processing file lib/ftl/ftl_io.c 00:08:27.313 Processing file lib/ftl/ftl_init.c 00:08:27.313 Processing file lib/ftl/base/ftl_base_dev.c 00:08:27.313 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:27.572 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:27.572 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:27.572 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:27.831 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:27.831 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:27.831 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:08:27.831 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:08:27.831 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:08:27.831 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:08:27.831 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:27.831 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:27.831 Processing file lib/ftl/utils/ftl_property.c 00:08:27.831 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:27.831 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:27.831 Processing file lib/ftl/utils/ftl_conf.c 00:08:27.831 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:27.831 Processing file lib/ftl/utils/ftl_property.h 00:08:27.831 Processing file lib/ftl/utils/ftl_md.c 00:08:27.831 Processing file lib/ftl/utils/ftl_mempool.c 00:08:27.831 Processing file lib/ftl/utils/ftl_df.h 00:08:28.090 Processing file lib/idxd/idxd_user.c 00:08:28.090 Processing file lib/idxd/idxd.c 00:08:28.090 Processing file lib/idxd/idxd_internal.h 00:08:28.090 Processing file lib/idxd/idxd_kernel.c 00:08:28.090 Processing file lib/init/json_config.c 00:08:28.090 Processing file lib/init/rpc.c 00:08:28.090 Processing file lib/init/subsystem.c 00:08:28.090 Processing file lib/init/subsystem_rpc.c 00:08:28.349 Processing file lib/ioat/ioat_internal.h 00:08:28.349 Processing file lib/ioat/ioat.c 00:08:28.608 Processing file lib/iscsi/iscsi.h 00:08:28.608 Processing file lib/iscsi/iscsi_subsystem.c 00:08:28.608 Processing file lib/iscsi/iscsi.c 00:08:28.608 Processing file lib/iscsi/init_grp.c 00:08:28.608 Processing file lib/iscsi/conn.c 00:08:28.608 Processing file lib/iscsi/task.h 00:08:28.608 Processing file lib/iscsi/task.c 00:08:28.608 Processing file lib/iscsi/param.c 00:08:28.608 Processing file lib/iscsi/md5.c 00:08:28.608 Processing file lib/iscsi/portal_grp.c 00:08:28.608 Processing file lib/iscsi/tgt_node.c 00:08:28.608 Processing file lib/iscsi/iscsi_rpc.c 00:08:28.608 Processing 
file lib/json/json_write.c 00:08:28.608 Processing file lib/json/json_parse.c 00:08:28.608 Processing file lib/json/json_util.c 00:08:28.867 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:08:28.867 Processing file lib/jsonrpc/jsonrpc_client.c 00:08:28.867 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:08:28.867 Processing file lib/jsonrpc/jsonrpc_server.c 00:08:28.867 Processing file lib/keyring/keyring.c 00:08:28.867 Processing file lib/keyring/keyring_rpc.c 00:08:28.867 Processing file lib/log/log.c 00:08:28.867 Processing file lib/log/log_deprecated.c 00:08:28.867 Processing file lib/log/log_flags.c 00:08:29.127 Processing file lib/lvol/lvol.c 00:08:29.127 Processing file lib/nbd/nbd.c 00:08:29.127 Processing file lib/nbd/nbd_rpc.c 00:08:29.127 Processing file lib/notify/notify_rpc.c 00:08:29.127 Processing file lib/notify/notify.c 00:08:30.064 Processing file lib/nvme/nvme_pcie_internal.h 00:08:30.064 Processing file lib/nvme/nvme_pcie.c 00:08:30.064 Processing file lib/nvme/nvme_poll_group.c 00:08:30.064 Processing file lib/nvme/nvme_ns.c 00:08:30.064 Processing file lib/nvme/nvme_fabric.c 00:08:30.064 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:30.064 Processing file lib/nvme/nvme_io_msg.c 00:08:30.064 Processing file lib/nvme/nvme_cuse.c 00:08:30.064 Processing file lib/nvme/nvme_zns.c 00:08:30.064 Processing file lib/nvme/nvme_opal.c 00:08:30.064 Processing file lib/nvme/nvme_transport.c 00:08:30.064 Processing file lib/nvme/nvme_ns_cmd.c 00:08:30.064 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:30.064 Processing file lib/nvme/nvme.c 00:08:30.064 Processing file lib/nvme/nvme_discovery.c 00:08:30.064 Processing file lib/nvme/nvme_ctrlr.c 00:08:30.064 Processing file lib/nvme/nvme_qpair.c 00:08:30.064 Processing file lib/nvme/nvme_internal.h 00:08:30.064 Processing file lib/nvme/nvme_tcp.c 00:08:30.064 Processing file lib/nvme/nvme_quirks.c 00:08:30.064 Processing file lib/nvme/nvme_auth.c 00:08:30.064 Processing file lib/nvme/nvme_pcie_common.c 00:08:30.064 Processing file lib/nvme/nvme_rdma.c 00:08:30.064 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:30.633 Processing file lib/nvmf/ctrlr_discovery.c 00:08:30.633 Processing file lib/nvmf/transport.c 00:08:30.633 Processing file lib/nvmf/auth.c 00:08:30.633 Processing file lib/nvmf/ctrlr_bdev.c 00:08:30.633 Processing file lib/nvmf/ctrlr.c 00:08:30.633 Processing file lib/nvmf/nvmf_internal.h 00:08:30.633 Processing file lib/nvmf/nvmf.c 00:08:30.633 Processing file lib/nvmf/rdma.c 00:08:30.633 Processing file lib/nvmf/subsystem.c 00:08:30.633 Processing file lib/nvmf/tcp.c 00:08:30.633 Processing file lib/nvmf/nvmf_rpc.c 00:08:30.633 Processing file lib/rdma_provider/rdma_provider_verbs.c 00:08:30.633 Processing file lib/rdma_provider/common.c 00:08:30.633 Processing file lib/rdma_utils/rdma_utils.c 00:08:30.891 Processing file lib/rpc/rpc.c 00:08:30.891 Processing file lib/scsi/task.c 00:08:30.891 Processing file lib/scsi/lun.c 00:08:30.891 Processing file lib/scsi/port.c 00:08:30.891 Processing file lib/scsi/scsi_bdev.c 00:08:30.891 Processing file lib/scsi/scsi_pr.c 00:08:30.891 Processing file lib/scsi/scsi_rpc.c 00:08:30.891 Processing file lib/scsi/dev.c 00:08:30.891 Processing file lib/scsi/scsi.c 00:08:31.150 Processing file lib/sock/sock.c 00:08:31.150 Processing file lib/sock/sock_rpc.c 00:08:31.150 Processing file lib/thread/iobuf.c 00:08:31.150 Processing file lib/thread/thread.c 00:08:31.409 Processing file lib/trace/trace.c 00:08:31.409 Processing file lib/trace/trace_flags.c 00:08:31.409 
Processing file lib/trace/trace_rpc.c 00:08:31.409 Processing file lib/trace_parser/trace.cpp 00:08:31.409 Processing file lib/ublk/ublk.c 00:08:31.409 Processing file lib/ublk/ublk_rpc.c 00:08:31.409 Processing file lib/ut/ut.c 00:08:31.667 Processing file lib/ut_mock/mock.c 00:08:31.927 Processing file lib/util/uuid.c 00:08:31.927 Processing file lib/util/base64.c 00:08:31.927 Processing file lib/util/net.c 00:08:31.927 Processing file lib/util/crc32_ieee.c 00:08:31.927 Processing file lib/util/strerror_tls.c 00:08:31.927 Processing file lib/util/crc64.c 00:08:31.927 Processing file lib/util/cpuset.c 00:08:31.927 Processing file lib/util/xor.c 00:08:31.927 Processing file lib/util/file.c 00:08:31.927 Processing file lib/util/hexlify.c 00:08:31.927 Processing file lib/util/crc32.c 00:08:31.927 Processing file lib/util/crc32c.c 00:08:31.927 Processing file lib/util/fd.c 00:08:31.927 Processing file lib/util/string.c 00:08:31.927 Processing file lib/util/dif.c 00:08:31.927 Processing file lib/util/fd_group.c 00:08:31.927 Processing file lib/util/bit_array.c 00:08:31.927 Processing file lib/util/zipf.c 00:08:31.927 Processing file lib/util/math.c 00:08:31.927 Processing file lib/util/pipe.c 00:08:31.927 Processing file lib/util/crc16.c 00:08:31.927 Processing file lib/util/iov.c 00:08:32.185 Processing file lib/vfio_user/host/vfio_user_pci.c 00:08:32.185 Processing file lib/vfio_user/host/vfio_user.c 00:08:32.443 Processing file lib/vhost/rte_vhost_user.c 00:08:32.444 Processing file lib/vhost/vhost_scsi.c 00:08:32.444 Processing file lib/vhost/vhost_blk.c 00:08:32.444 Processing file lib/vhost/vhost_internal.h 00:08:32.444 Processing file lib/vhost/vhost_rpc.c 00:08:32.444 Processing file lib/vhost/vhost.c 00:08:32.444 Processing file lib/virtio/virtio_vfio_user.c 00:08:32.444 Processing file lib/virtio/virtio.c 00:08:32.444 Processing file lib/virtio/virtio_pci.c 00:08:32.444 Processing file lib/virtio/virtio_vhost_user.c 00:08:32.444 Processing file lib/vmd/vmd.c 00:08:32.444 Processing file lib/vmd/led.c 00:08:32.702 Processing file module/accel/dsa/accel_dsa.c 00:08:32.702 Processing file module/accel/dsa/accel_dsa_rpc.c 00:08:32.702 Processing file module/accel/error/accel_error_rpc.c 00:08:32.702 Processing file module/accel/error/accel_error.c 00:08:32.702 Processing file module/accel/iaa/accel_iaa_rpc.c 00:08:32.702 Processing file module/accel/iaa/accel_iaa.c 00:08:32.961 Processing file module/accel/ioat/accel_ioat.c 00:08:32.961 Processing file module/accel/ioat/accel_ioat_rpc.c 00:08:32.961 Processing file module/bdev/aio/bdev_aio_rpc.c 00:08:32.961 Processing file module/bdev/aio/bdev_aio.c 00:08:32.961 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:08:32.961 Processing file module/bdev/delay/vbdev_delay.c 00:08:32.961 Processing file module/bdev/error/vbdev_error_rpc.c 00:08:32.961 Processing file module/bdev/error/vbdev_error.c 00:08:33.220 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:08:33.220 Processing file module/bdev/ftl/bdev_ftl.c 00:08:33.220 Processing file module/bdev/gpt/gpt.c 00:08:33.220 Processing file module/bdev/gpt/vbdev_gpt.c 00:08:33.220 Processing file module/bdev/gpt/gpt.h 00:08:33.220 Processing file module/bdev/iscsi/bdev_iscsi.c 00:08:33.220 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:08:33.479 Processing file module/bdev/lvol/vbdev_lvol.c 00:08:33.479 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:08:33.479 Processing file module/bdev/malloc/bdev_malloc.c 00:08:33.479 Processing file module/bdev/malloc/bdev_malloc_rpc.c 
00:08:33.479 Processing file module/bdev/null/bdev_null.c 00:08:33.479 Processing file module/bdev/null/bdev_null_rpc.c 00:08:34.045 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:08:34.045 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:08:34.045 Processing file module/bdev/nvme/nvme_rpc.c 00:08:34.045 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:08:34.045 Processing file module/bdev/nvme/bdev_mdns_client.c 00:08:34.045 Processing file module/bdev/nvme/vbdev_opal.c 00:08:34.045 Processing file module/bdev/nvme/bdev_nvme.c 00:08:34.045 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:08:34.046 Processing file module/bdev/passthru/vbdev_passthru.c 00:08:34.305 Processing file module/bdev/raid/raid1.c 00:08:34.305 Processing file module/bdev/raid/bdev_raid.c 00:08:34.305 Processing file module/bdev/raid/raid5f.c 00:08:34.305 Processing file module/bdev/raid/raid0.c 00:08:34.305 Processing file module/bdev/raid/bdev_raid_rpc.c 00:08:34.305 Processing file module/bdev/raid/bdev_raid_sb.c 00:08:34.305 Processing file module/bdev/raid/concat.c 00:08:34.305 Processing file module/bdev/raid/bdev_raid.h 00:08:34.305 Processing file module/bdev/split/vbdev_split.c 00:08:34.305 Processing file module/bdev/split/vbdev_split_rpc.c 00:08:34.564 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:08:34.564 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:08:34.564 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:08:34.564 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:08:34.564 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:08:34.564 Processing file module/blob/bdev/blob_bdev.c 00:08:34.823 Processing file module/blobfs/bdev/blobfs_bdev.c 00:08:34.823 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:08:34.823 Processing file module/env_dpdk/env_dpdk_rpc.c 00:08:34.823 Processing file module/event/subsystems/accel/accel.c 00:08:34.823 Processing file module/event/subsystems/bdev/bdev.c 00:08:34.823 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:08:34.823 Processing file module/event/subsystems/iobuf/iobuf.c 00:08:35.086 Processing file module/event/subsystems/iscsi/iscsi.c 00:08:35.086 Processing file module/event/subsystems/keyring/keyring.c 00:08:35.086 Processing file module/event/subsystems/nbd/nbd.c 00:08:35.086 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:08:35.086 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:08:35.086 Processing file module/event/subsystems/scheduler/scheduler.c 00:08:35.343 Processing file module/event/subsystems/scsi/scsi.c 00:08:35.343 Processing file module/event/subsystems/sock/sock.c 00:08:35.343 Processing file module/event/subsystems/ublk/ublk.c 00:08:35.343 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:08:35.343 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:08:35.601 Processing file module/event/subsystems/vmd/vmd.c 00:08:35.601 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:08:35.601 Processing file module/keyring/file/keyring.c 00:08:35.601 Processing file module/keyring/file/keyring_rpc.c 00:08:35.601 Processing file module/keyring/linux/keyring_rpc.c 00:08:35.601 Processing file module/keyring/linux/keyring.c 00:08:35.601 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:08:35.859 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:08:35.859 Processing file module/scheduler/gscheduler/gscheduler.c 00:08:35.859 Processing file module/sock/posix/posix.c 
00:08:35.859 Writing directory view page. 00:08:35.859 Overall coverage rate: 00:08:35.859 lines......: 38.2% (41086 of 107454 lines) 00:08:35.859 functions..: 41.9% (3741 of 8939 functions) 00:08:35.859 00:08:35.859 00:08:35.859 ===================== 00:08:35.859 All unit tests passed 00:08:35.859 ===================== 00:08:35.859 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:35.859 23:52:31 unittest -- unit/unittest.sh@305 -- # set +x 00:08:35.859 00:08:35.859 00:08:35.859 ************************************ 00:08:35.859 END TEST unittest 00:08:35.859 ************************************ 00:08:35.859 00:08:35.859 real 3m43.507s 00:08:35.859 user 3m15.687s 00:08:35.859 sys 0m17.601s 00:08:35.859 23:52:31 unittest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.859 23:52:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:36.118 23:52:31 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:08:36.118 23:52:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:36.118 23:52:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:36.118 23:52:31 -- spdk/autotest.sh@162 -- # timing_enter lib 00:08:36.118 23:52:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:36.118 23:52:31 -- common/autotest_common.sh@10 -- # set +x 00:08:36.118 23:52:31 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:08:36.118 23:52:31 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:36.118 23:52:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.118 23:52:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.118 23:52:31 -- common/autotest_common.sh@10 -- # set +x 00:08:36.118 ************************************ 00:08:36.118 START TEST env 00:08:36.118 ************************************ 00:08:36.118 23:52:31 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:36.118 * Looking for test storage... 
00:08:36.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:36.118 23:52:31 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:36.118 23:52:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.118 23:52:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.118 23:52:31 env -- common/autotest_common.sh@10 -- # set +x 00:08:36.118 ************************************ 00:08:36.118 START TEST env_memory 00:08:36.118 ************************************ 00:08:36.118 23:52:31 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:36.118 00:08:36.118 00:08:36.118 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.118 http://cunit.sourceforge.net/ 00:08:36.118 00:08:36.118 00:08:36.118 Suite: memory 00:08:36.118 Test: alloc and free memory map ...[2024-07-24 23:52:31.911877] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:36.118 passed 00:08:36.118 Test: mem map translation ...[2024-07-24 23:52:31.974852] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:36.118 [2024-07-24 23:52:31.974945] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:36.118 [2024-07-24 23:52:31.975057] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:36.118 [2024-07-24 23:52:31.975094] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:36.390 passed 00:08:36.390 Test: mem map registration ...[2024-07-24 23:52:32.076725] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:36.390 [2024-07-24 23:52:32.076834] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:36.390 passed 00:08:36.390 Test: mem map adjacent registrations ...passed 00:08:36.390 00:08:36.390 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.390 suites 1 1 n/a 0 0 00:08:36.390 tests 4 4 4 0 0 00:08:36.390 asserts 152 152 152 0 n/a 00:08:36.390 00:08:36.390 Elapsed time = 0.342 seconds 00:08:36.390 00:08:36.390 real 0m0.370s 00:08:36.390 user 0m0.352s 00:08:36.390 sys 0m0.018s 00:08:36.390 23:52:32 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.390 23:52:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:36.390 ************************************ 00:08:36.390 END TEST env_memory 00:08:36.390 ************************************ 00:08:36.665 23:52:32 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:36.665 23:52:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.665 23:52:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.665 23:52:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:36.665 ************************************ 00:08:36.665 START TEST env_vtophys 00:08:36.665 ************************************ 00:08:36.665 23:52:32 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:36.665 EAL: lib.eal log level changed from notice to debug 00:08:36.665 EAL: Detected lcore 0 as core 0 on socket 0 00:08:36.665 EAL: Detected lcore 1 as core 0 on socket 0 00:08:36.665 EAL: Detected lcore 2 as core 0 on socket 0 00:08:36.665 EAL: Detected lcore 3 as core 0 on socket 0 00:08:36.665 EAL: Detected lcore 4 as core 0 on socket 0 00:08:36.665 EAL: Detected lcore 5 as core 0 on socket 0 00:08:36.665 EAL: Detected lcore 6 as core 0 on socket 0 00:08:36.665 EAL: Detected lcore 7 as core 0 on socket 0 00:08:36.665 EAL: Detected lcore 8 as core 0 on socket 0 00:08:36.665 EAL: Detected lcore 9 as core 0 on socket 0 00:08:36.665 EAL: Maximum logical cores by configuration: 128 00:08:36.665 EAL: Detected CPU lcores: 10 00:08:36.665 EAL: Detected NUMA nodes: 1 00:08:36.665 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:36.665 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:36.665 EAL: Checking presence of .so 'librte_eal.so' 00:08:36.665 EAL: Detected static linkage of DPDK 00:08:36.665 EAL: No shared files mode enabled, IPC will be disabled 00:08:36.665 EAL: Selected IOVA mode 'PA' 00:08:36.665 EAL: Probing VFIO support... 00:08:36.665 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:36.665 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:36.665 EAL: Ask a virtual area of 0x2e000 bytes 00:08:36.666 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:36.666 EAL: Setting up physically contiguous memory... 00:08:36.666 EAL: Setting maximum number of open files to 1048576 00:08:36.666 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:36.666 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:36.666 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.666 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:36.666 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:36.666 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.666 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:36.666 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:36.666 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.666 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:36.666 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:36.666 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.666 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:36.666 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:36.666 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.666 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:36.666 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:36.666 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.666 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:36.666 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:36.666 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.666 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:36.666 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:36.666 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.666 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:36.666 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:36.666 EAL: Hugepages will be freed exactly as allocated. 
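The four large reservations above follow directly from the memseg-list geometry EAL reports (n_segs:8192, hugepage_sz:2097152). A quick arithmetic check in shell, purely illustrative:

    # 8192 segments x 2 MiB hugepages = 16 GiB per memseg list, which is exactly
    # the 0x400000000-byte virtual area EAL asks for above.
    printf '0x%x\n' $(( 8192 * 2097152 ))    # prints 0x400000000
    # Four such lists are created, so roughly 64 GiB of virtual address space is
    # reserved up front; the small 0x61000-byte areas presumably hold per-list
    # bookkeeping rather than payload memory.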
00:08:36.666 EAL: No shared files mode enabled, IPC is disabled 00:08:36.666 EAL: No shared files mode enabled, IPC is disabled 00:08:36.666 EAL: TSC frequency is ~2200000 KHz 00:08:36.666 EAL: Main lcore 0 is ready (tid=73db2d714a80;cpuset=[0]) 00:08:36.666 EAL: Trying to obtain current memory policy. 00:08:36.666 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.666 EAL: Restoring previous memory policy: 0 00:08:36.666 EAL: request: mp_malloc_sync 00:08:36.666 EAL: No shared files mode enabled, IPC is disabled 00:08:36.666 EAL: Heap on socket 0 was expanded by 2MB 00:08:36.666 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:36.666 EAL: Mem event callback 'spdk:(nil)' registered 00:08:36.666 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:36.666 00:08:36.666 00:08:36.666 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.666 http://cunit.sourceforge.net/ 00:08:36.666 00:08:36.666 00:08:36.666 Suite: components_suite 00:08:36.666 Test: vtophys_malloc_test ...passed 00:08:36.666 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:36.666 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.666 EAL: Restoring previous memory policy: 4 00:08:36.666 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.666 EAL: request: mp_malloc_sync 00:08:36.666 EAL: No shared files mode enabled, IPC is disabled 00:08:36.666 EAL: Heap on socket 0 was expanded by 4MB 00:08:36.666 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.666 EAL: request: mp_malloc_sync 00:08:36.666 EAL: No shared files mode enabled, IPC is disabled 00:08:36.666 EAL: Heap on socket 0 was shrunk by 4MB 00:08:36.923 EAL: Trying to obtain current memory policy. 00:08:36.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.923 EAL: Restoring previous memory policy: 4 00:08:36.923 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.923 EAL: request: mp_malloc_sync 00:08:36.923 EAL: No shared files mode enabled, IPC is disabled 00:08:36.923 EAL: Heap on socket 0 was expanded by 6MB 00:08:36.923 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.923 EAL: request: mp_malloc_sync 00:08:36.923 EAL: No shared files mode enabled, IPC is disabled 00:08:36.923 EAL: Heap on socket 0 was shrunk by 6MB 00:08:36.923 EAL: Trying to obtain current memory policy. 00:08:36.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.923 EAL: Restoring previous memory policy: 4 00:08:36.923 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.923 EAL: request: mp_malloc_sync 00:08:36.923 EAL: No shared files mode enabled, IPC is disabled 00:08:36.923 EAL: Heap on socket 0 was expanded by 10MB 00:08:36.923 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.923 EAL: request: mp_malloc_sync 00:08:36.923 EAL: No shared files mode enabled, IPC is disabled 00:08:36.923 EAL: Heap on socket 0 was shrunk by 10MB 00:08:36.923 EAL: Trying to obtain current memory policy. 
00:08:36.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.923 EAL: Restoring previous memory policy: 4 00:08:36.923 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.923 EAL: request: mp_malloc_sync 00:08:36.923 EAL: No shared files mode enabled, IPC is disabled 00:08:36.923 EAL: Heap on socket 0 was expanded by 18MB 00:08:36.923 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.923 EAL: request: mp_malloc_sync 00:08:36.923 EAL: No shared files mode enabled, IPC is disabled 00:08:36.923 EAL: Heap on socket 0 was shrunk by 18MB 00:08:36.923 EAL: Trying to obtain current memory policy. 00:08:36.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.923 EAL: Restoring previous memory policy: 4 00:08:36.923 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.924 EAL: request: mp_malloc_sync 00:08:36.924 EAL: No shared files mode enabled, IPC is disabled 00:08:36.924 EAL: Heap on socket 0 was expanded by 34MB 00:08:36.924 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.924 EAL: request: mp_malloc_sync 00:08:36.924 EAL: No shared files mode enabled, IPC is disabled 00:08:36.924 EAL: Heap on socket 0 was shrunk by 34MB 00:08:36.924 EAL: Trying to obtain current memory policy. 00:08:36.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.924 EAL: Restoring previous memory policy: 4 00:08:36.924 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.924 EAL: request: mp_malloc_sync 00:08:36.924 EAL: No shared files mode enabled, IPC is disabled 00:08:36.924 EAL: Heap on socket 0 was expanded by 66MB 00:08:37.181 EAL: Calling mem event callback 'spdk:(nil)' 00:08:37.181 EAL: request: mp_malloc_sync 00:08:37.181 EAL: No shared files mode enabled, IPC is disabled 00:08:37.181 EAL: Heap on socket 0 was shrunk by 66MB 00:08:37.181 EAL: Trying to obtain current memory policy. 00:08:37.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:37.181 EAL: Restoring previous memory policy: 4 00:08:37.181 EAL: Calling mem event callback 'spdk:(nil)' 00:08:37.181 EAL: request: mp_malloc_sync 00:08:37.181 EAL: No shared files mode enabled, IPC is disabled 00:08:37.181 EAL: Heap on socket 0 was expanded by 130MB 00:08:37.440 EAL: Calling mem event callback 'spdk:(nil)' 00:08:37.440 EAL: request: mp_malloc_sync 00:08:37.440 EAL: No shared files mode enabled, IPC is disabled 00:08:37.440 EAL: Heap on socket 0 was shrunk by 130MB 00:08:37.440 EAL: Trying to obtain current memory policy. 00:08:37.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:37.440 EAL: Restoring previous memory policy: 4 00:08:37.440 EAL: Calling mem event callback 'spdk:(nil)' 00:08:37.440 EAL: request: mp_malloc_sync 00:08:37.440 EAL: No shared files mode enabled, IPC is disabled 00:08:37.440 EAL: Heap on socket 0 was expanded by 258MB 00:08:38.006 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.006 EAL: request: mp_malloc_sync 00:08:38.006 EAL: No shared files mode enabled, IPC is disabled 00:08:38.006 EAL: Heap on socket 0 was shrunk by 258MB 00:08:38.264 EAL: Trying to obtain current memory policy. 
00:08:38.264 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:38.264 EAL: Restoring previous memory policy: 4 00:08:38.264 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.264 EAL: request: mp_malloc_sync 00:08:38.264 EAL: No shared files mode enabled, IPC is disabled 00:08:38.264 EAL: Heap on socket 0 was expanded by 514MB 00:08:38.848 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.106 EAL: request: mp_malloc_sync 00:08:39.107 EAL: No shared files mode enabled, IPC is disabled 00:08:39.107 EAL: Heap on socket 0 was shrunk by 514MB 00:08:39.672 EAL: Trying to obtain current memory policy. 00:08:39.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:39.672 EAL: Restoring previous memory policy: 4 00:08:39.672 EAL: Calling mem event callback 'spdk:(nil)' 00:08:39.672 EAL: request: mp_malloc_sync 00:08:39.672 EAL: No shared files mode enabled, IPC is disabled 00:08:39.672 EAL: Heap on socket 0 was expanded by 1026MB 00:08:41.048 EAL: Calling mem event callback 'spdk:(nil)' 00:08:41.306 EAL: request: mp_malloc_sync 00:08:41.306 EAL: No shared files mode enabled, IPC is disabled 00:08:41.306 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:42.682 passed 00:08:42.682 00:08:42.682 Run Summary: Type Total Ran Passed Failed Inactive 00:08:42.682 suites 1 1 n/a 0 0 00:08:42.682 tests 2 2 2 0 0 00:08:42.682 asserts 5537 5537 5537 0 n/a 00:08:42.682 00:08:42.682 Elapsed time = 5.607 seconds 00:08:42.682 EAL: Calling mem event callback 'spdk:(nil)' 00:08:42.682 EAL: request: mp_malloc_sync 00:08:42.682 EAL: No shared files mode enabled, IPC is disabled 00:08:42.682 EAL: Heap on socket 0 was shrunk by 2MB 00:08:42.682 EAL: No shared files mode enabled, IPC is disabled 00:08:42.682 EAL: No shared files mode enabled, IPC is disabled 00:08:42.682 EAL: No shared files mode enabled, IPC is disabled 00:08:42.682 00:08:42.682 real 0m5.896s 00:08:42.682 user 0m5.109s 00:08:42.682 sys 0m0.654s 00:08:42.682 23:52:38 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.682 23:52:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:42.682 ************************************ 00:08:42.682 END TEST env_vtophys 00:08:42.682 ************************************ 00:08:42.682 23:52:38 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:42.682 23:52:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.682 23:52:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.682 23:52:38 env -- common/autotest_common.sh@10 -- # set +x 00:08:42.682 ************************************ 00:08:42.682 START TEST env_pci 00:08:42.682 ************************************ 00:08:42.682 23:52:38 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:42.682 00:08:42.682 00:08:42.682 CUnit - A unit testing framework for C - Version 2.1-3 00:08:42.682 http://cunit.sourceforge.net/ 00:08:42.682 00:08:42.682 00:08:42.682 Suite: pci 00:08:42.682 Test: pci_hook ...[2024-07-24 23:52:38.260146] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68811 has claimed it 00:08:42.682 passed 00:08:42.682 00:08:42.682 EAL: Cannot find device (10000:00:01.0) 00:08:42.682 EAL: Failed to attach device on primary process 00:08:42.682 Run Summary: Type Total Ran Passed Failed Inactive 00:08:42.682 suites 1 1 n/a 0 0 00:08:42.682 tests 1 1 1 0 0 
00:08:42.682 asserts 25 25 25 0 n/a 00:08:42.682 00:08:42.682 Elapsed time = 0.008 seconds 00:08:42.682 00:08:42.682 real 0m0.083s 00:08:42.682 user 0m0.041s 00:08:42.682 sys 0m0.042s 00:08:42.682 23:52:38 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.682 23:52:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:42.682 ************************************ 00:08:42.682 END TEST env_pci 00:08:42.682 ************************************ 00:08:42.682 23:52:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:42.683 23:52:38 env -- env/env.sh@15 -- # uname 00:08:42.683 23:52:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:42.683 23:52:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:42.683 23:52:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:42.683 23:52:38 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:42.683 23:52:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.683 23:52:38 env -- common/autotest_common.sh@10 -- # set +x 00:08:42.683 ************************************ 00:08:42.683 START TEST env_dpdk_post_init 00:08:42.683 ************************************ 00:08:42.683 23:52:38 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:42.683 EAL: Detected CPU lcores: 10 00:08:42.683 EAL: Detected NUMA nodes: 1 00:08:42.683 EAL: Detected static linkage of DPDK 00:08:42.683 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:42.683 EAL: Selected IOVA mode 'PA' 00:08:42.683 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:42.941 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:42.941 Starting DPDK initialization... 00:08:42.941 Starting SPDK post initialization... 00:08:42.941 SPDK NVMe probe 00:08:42.941 Attaching to 0000:00:10.0 00:08:42.941 Attached to 0000:00:10.0 00:08:42.941 Cleaning up... 
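The arguments env.sh assembled above are standard EAL options; run standalone, the same probe sequence would look roughly like the sketch below (binary path abbreviated, flags exactly as seen in the log):

    # -c 0x1                          core mask: pin EAL to lcore 0 only
    # --base-virtaddr=0x200000000000  map EAL regions at a fixed base address; env.sh
    #                                 adds this on Linux so mappings stay predictable
    ./env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000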
00:08:42.941 00:08:42.941 real 0m0.239s 00:08:42.941 user 0m0.074s 00:08:42.941 sys 0m0.066s 00:08:42.941 23:52:38 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.941 23:52:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:42.941 ************************************ 00:08:42.941 END TEST env_dpdk_post_init 00:08:42.941 ************************************ 00:08:42.942 23:52:38 env -- env/env.sh@26 -- # uname 00:08:42.942 23:52:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:42.942 23:52:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:42.942 23:52:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.942 23:52:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.942 23:52:38 env -- common/autotest_common.sh@10 -- # set +x 00:08:42.942 ************************************ 00:08:42.942 START TEST env_mem_callbacks 00:08:42.942 ************************************ 00:08:42.942 23:52:38 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:42.942 EAL: Detected CPU lcores: 10 00:08:42.942 EAL: Detected NUMA nodes: 1 00:08:42.942 EAL: Detected static linkage of DPDK 00:08:42.942 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:42.942 EAL: Selected IOVA mode 'PA' 00:08:43.201 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:43.201 00:08:43.201 00:08:43.201 CUnit - A unit testing framework for C - Version 2.1-3 00:08:43.201 http://cunit.sourceforge.net/ 00:08:43.201 00:08:43.201 00:08:43.201 Suite: memory 00:08:43.201 Test: test ... 00:08:43.201 register 0x200000200000 2097152 00:08:43.201 malloc 3145728 00:08:43.201 register 0x200000400000 4194304 00:08:43.201 buf 0x2000004fffc0 len 3145728 PASSED 00:08:43.201 malloc 64 00:08:43.201 buf 0x2000004ffec0 len 64 PASSED 00:08:43.201 malloc 4194304 00:08:43.201 register 0x200000800000 6291456 00:08:43.201 buf 0x2000009fffc0 len 4194304 PASSED 00:08:43.201 free 0x2000004fffc0 3145728 00:08:43.201 free 0x2000004ffec0 64 00:08:43.201 unregister 0x200000400000 4194304 PASSED 00:08:43.201 free 0x2000009fffc0 4194304 00:08:43.201 unregister 0x200000800000 6291456 PASSED 00:08:43.201 malloc 8388608 00:08:43.201 register 0x200000400000 10485760 00:08:43.201 buf 0x2000005fffc0 len 8388608 PASSED 00:08:43.201 free 0x2000005fffc0 8388608 00:08:43.201 unregister 0x200000400000 10485760 PASSED 00:08:43.201 passed 00:08:43.201 00:08:43.201 Run Summary: Type Total Ran Passed Failed Inactive 00:08:43.201 suites 1 1 n/a 0 0 00:08:43.201 tests 1 1 1 0 0 00:08:43.201 asserts 15 15 15 0 n/a 00:08:43.201 00:08:43.201 Elapsed time = 0.055 seconds 00:08:43.201 00:08:43.201 real 0m0.261s 00:08:43.201 user 0m0.097s 00:08:43.201 sys 0m0.065s 00:08:43.201 23:52:38 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.201 23:52:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:43.201 ************************************ 00:08:43.201 END TEST env_mem_callbacks 00:08:43.201 ************************************ 00:08:43.201 ************************************ 00:08:43.201 END TEST env 00:08:43.201 ************************************ 00:08:43.201 00:08:43.201 real 0m7.211s 00:08:43.201 user 0m5.778s 00:08:43.201 sys 0m1.092s 00:08:43.201 23:52:38 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.201 23:52:38 env -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.201 23:52:39 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:43.201 23:52:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.201 23:52:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.201 23:52:39 -- common/autotest_common.sh@10 -- # set +x 00:08:43.201 ************************************ 00:08:43.201 START TEST rpc 00:08:43.201 ************************************ 00:08:43.201 23:52:39 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:43.460 * Looking for test storage... 00:08:43.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:43.460 23:52:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68930 00:08:43.460 23:52:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:43.460 23:52:39 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:43.460 23:52:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68930 00:08:43.460 23:52:39 rpc -- common/autotest_common.sh@831 -- # '[' -z 68930 ']' 00:08:43.460 23:52:39 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.460 23:52:39 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.460 23:52:39 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.460 23:52:39 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.460 23:52:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.460 [2024-07-24 23:52:39.194469] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:08:43.460 [2024-07-24 23:52:39.194681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68930 ] 00:08:43.719 [2024-07-24 23:52:39.369512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.719 [2024-07-24 23:52:39.536276] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:43.719 [2024-07-24 23:52:39.536348] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68930' to capture a snapshot of events at runtime. 00:08:43.719 [2024-07-24 23:52:39.536383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.719 [2024-07-24 23:52:39.536394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.719 [2024-07-24 23:52:39.536429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68930 for offline analysis/debug. 
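The NOTICE lines above double as usage hints: a snapshot of the 'bdev' tracepoints could be pulled from this spdk_tgt instance in either of two ways (sketch following the log's own suggestion, pid taken from the messages):

    # while the target is still running, attach by app name and pid:
    spdk_trace -s spdk_tgt -p 68930
    # or, after it exits, parse the shared-memory file it leaves behind:
    spdk_trace -f /dev/shm/spdk_tgt_trace.pid68930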
00:08:43.719 [2024-07-24 23:52:39.536471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.286 23:52:40 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.286 23:52:40 rpc -- common/autotest_common.sh@864 -- # return 0 00:08:44.286 23:52:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:44.286 23:52:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:44.286 23:52:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:44.286 23:52:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:44.286 23:52:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.286 23:52:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.286 23:52:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.545 ************************************ 00:08:44.545 START TEST rpc_integrity 00:08:44.545 ************************************ 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:44.545 { 00:08:44.545 "name": "Malloc0", 00:08:44.545 "aliases": [ 00:08:44.545 "6cfcd19e-3ea5-4277-8d24-be766c9c5182" 00:08:44.545 ], 00:08:44.545 "product_name": "Malloc disk", 00:08:44.545 "block_size": 512, 00:08:44.545 "num_blocks": 16384, 00:08:44.545 "uuid": "6cfcd19e-3ea5-4277-8d24-be766c9c5182", 00:08:44.545 "assigned_rate_limits": { 00:08:44.545 "rw_ios_per_sec": 0, 00:08:44.545 "rw_mbytes_per_sec": 0, 00:08:44.545 "r_mbytes_per_sec": 0, 00:08:44.545 "w_mbytes_per_sec": 0 00:08:44.545 }, 00:08:44.545 "claimed": false, 00:08:44.545 "zoned": false, 00:08:44.545 "supported_io_types": { 00:08:44.545 "read": true, 00:08:44.545 "write": true, 00:08:44.545 "unmap": true, 00:08:44.545 "flush": true, 
00:08:44.545 "reset": true, 00:08:44.545 "nvme_admin": false, 00:08:44.545 "nvme_io": false, 00:08:44.545 "nvme_io_md": false, 00:08:44.545 "write_zeroes": true, 00:08:44.545 "zcopy": true, 00:08:44.545 "get_zone_info": false, 00:08:44.545 "zone_management": false, 00:08:44.545 "zone_append": false, 00:08:44.545 "compare": false, 00:08:44.545 "compare_and_write": false, 00:08:44.545 "abort": true, 00:08:44.545 "seek_hole": false, 00:08:44.545 "seek_data": false, 00:08:44.545 "copy": true, 00:08:44.545 "nvme_iov_md": false 00:08:44.545 }, 00:08:44.545 "memory_domains": [ 00:08:44.545 { 00:08:44.545 "dma_device_id": "system", 00:08:44.545 "dma_device_type": 1 00:08:44.545 }, 00:08:44.545 { 00:08:44.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.545 "dma_device_type": 2 00:08:44.545 } 00:08:44.545 ], 00:08:44.545 "driver_specific": {} 00:08:44.545 } 00:08:44.545 ]' 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.545 [2024-07-24 23:52:40.237088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:44.545 [2024-07-24 23:52:40.237205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.545 [2024-07-24 23:52:40.237256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:08:44.545 [2024-07-24 23:52:40.237270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.545 [2024-07-24 23:52:40.239988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.545 [2024-07-24 23:52:40.240050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:44.545 Passthru0 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.545 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.545 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:44.545 { 00:08:44.545 "name": "Malloc0", 00:08:44.545 "aliases": [ 00:08:44.545 "6cfcd19e-3ea5-4277-8d24-be766c9c5182" 00:08:44.545 ], 00:08:44.545 "product_name": "Malloc disk", 00:08:44.545 "block_size": 512, 00:08:44.545 "num_blocks": 16384, 00:08:44.545 "uuid": "6cfcd19e-3ea5-4277-8d24-be766c9c5182", 00:08:44.545 "assigned_rate_limits": { 00:08:44.545 "rw_ios_per_sec": 0, 00:08:44.545 "rw_mbytes_per_sec": 0, 00:08:44.545 "r_mbytes_per_sec": 0, 00:08:44.545 "w_mbytes_per_sec": 0 00:08:44.545 }, 00:08:44.545 "claimed": true, 00:08:44.545 "claim_type": "exclusive_write", 00:08:44.545 "zoned": false, 00:08:44.545 "supported_io_types": { 00:08:44.545 "read": true, 00:08:44.545 "write": true, 00:08:44.545 "unmap": true, 00:08:44.545 "flush": true, 00:08:44.545 "reset": true, 00:08:44.545 "nvme_admin": false, 00:08:44.545 "nvme_io": false, 00:08:44.545 "nvme_io_md": false, 00:08:44.545 "write_zeroes": true, 00:08:44.545 "zcopy": true, 
00:08:44.545 "get_zone_info": false, 00:08:44.546 "zone_management": false, 00:08:44.546 "zone_append": false, 00:08:44.546 "compare": false, 00:08:44.546 "compare_and_write": false, 00:08:44.546 "abort": true, 00:08:44.546 "seek_hole": false, 00:08:44.546 "seek_data": false, 00:08:44.546 "copy": true, 00:08:44.546 "nvme_iov_md": false 00:08:44.546 }, 00:08:44.546 "memory_domains": [ 00:08:44.546 { 00:08:44.546 "dma_device_id": "system", 00:08:44.546 "dma_device_type": 1 00:08:44.546 }, 00:08:44.546 { 00:08:44.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.546 "dma_device_type": 2 00:08:44.546 } 00:08:44.546 ], 00:08:44.546 "driver_specific": {} 00:08:44.546 }, 00:08:44.546 { 00:08:44.546 "name": "Passthru0", 00:08:44.546 "aliases": [ 00:08:44.546 "1cc5af38-f32f-5a55-86e7-741bb711bea2" 00:08:44.546 ], 00:08:44.546 "product_name": "passthru", 00:08:44.546 "block_size": 512, 00:08:44.546 "num_blocks": 16384, 00:08:44.546 "uuid": "1cc5af38-f32f-5a55-86e7-741bb711bea2", 00:08:44.546 "assigned_rate_limits": { 00:08:44.546 "rw_ios_per_sec": 0, 00:08:44.546 "rw_mbytes_per_sec": 0, 00:08:44.546 "r_mbytes_per_sec": 0, 00:08:44.546 "w_mbytes_per_sec": 0 00:08:44.546 }, 00:08:44.546 "claimed": false, 00:08:44.546 "zoned": false, 00:08:44.546 "supported_io_types": { 00:08:44.546 "read": true, 00:08:44.546 "write": true, 00:08:44.546 "unmap": true, 00:08:44.546 "flush": true, 00:08:44.546 "reset": true, 00:08:44.546 "nvme_admin": false, 00:08:44.546 "nvme_io": false, 00:08:44.546 "nvme_io_md": false, 00:08:44.546 "write_zeroes": true, 00:08:44.546 "zcopy": true, 00:08:44.546 "get_zone_info": false, 00:08:44.546 "zone_management": false, 00:08:44.546 "zone_append": false, 00:08:44.546 "compare": false, 00:08:44.546 "compare_and_write": false, 00:08:44.546 "abort": true, 00:08:44.546 "seek_hole": false, 00:08:44.546 "seek_data": false, 00:08:44.546 "copy": true, 00:08:44.546 "nvme_iov_md": false 00:08:44.546 }, 00:08:44.546 "memory_domains": [ 00:08:44.546 { 00:08:44.546 "dma_device_id": "system", 00:08:44.546 "dma_device_type": 1 00:08:44.546 }, 00:08:44.546 { 00:08:44.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.546 "dma_device_type": 2 00:08:44.546 } 00:08:44.546 ], 00:08:44.546 "driver_specific": { 00:08:44.546 "passthru": { 00:08:44.546 "name": "Passthru0", 00:08:44.546 "base_bdev_name": "Malloc0" 00:08:44.546 } 00:08:44.546 } 00:08:44.546 } 00:08:44.546 ]' 00:08:44.546 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:44.546 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:44.546 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:44.546 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.546 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.546 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.546 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:44.546 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.546 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.546 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.546 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:44.546 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.546 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:08:44.546 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.546 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:44.546 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:44.546 23:52:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:44.546 00:08:44.546 real 0m0.185s 00:08:44.546 user 0m0.053s 00:08:44.546 sys 0m0.039s 00:08:44.546 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.546 23:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.546 ************************************ 00:08:44.546 END TEST rpc_integrity 00:08:44.546 ************************************ 00:08:44.546 23:52:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:44.546 23:52:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.546 23:52:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.546 23:52:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.546 ************************************ 00:08:44.546 START TEST rpc_plugins 00:08:44.546 ************************************ 00:08:44.546 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:08:44.546 23:52:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:44.546 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.546 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:44.546 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.546 23:52:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:44.546 23:52:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:44.546 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.546 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:44.805 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.805 23:52:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:44.805 { 00:08:44.805 "name": "Malloc1", 00:08:44.805 "aliases": [ 00:08:44.805 "aa431902-c4fd-4a94-8485-a0f41134c662" 00:08:44.805 ], 00:08:44.805 "product_name": "Malloc disk", 00:08:44.805 "block_size": 4096, 00:08:44.805 "num_blocks": 256, 00:08:44.805 "uuid": "aa431902-c4fd-4a94-8485-a0f41134c662", 00:08:44.805 "assigned_rate_limits": { 00:08:44.805 "rw_ios_per_sec": 0, 00:08:44.805 "rw_mbytes_per_sec": 0, 00:08:44.805 "r_mbytes_per_sec": 0, 00:08:44.805 "w_mbytes_per_sec": 0 00:08:44.805 }, 00:08:44.805 "claimed": false, 00:08:44.805 "zoned": false, 00:08:44.805 "supported_io_types": { 00:08:44.805 "read": true, 00:08:44.805 "write": true, 00:08:44.805 "unmap": true, 00:08:44.805 "flush": true, 00:08:44.805 "reset": true, 00:08:44.805 "nvme_admin": false, 00:08:44.805 "nvme_io": false, 00:08:44.805 "nvme_io_md": false, 00:08:44.805 "write_zeroes": true, 00:08:44.805 "zcopy": true, 00:08:44.805 "get_zone_info": false, 00:08:44.805 "zone_management": false, 00:08:44.805 "zone_append": false, 00:08:44.805 "compare": false, 00:08:44.805 "compare_and_write": false, 00:08:44.805 "abort": true, 00:08:44.805 "seek_hole": false, 00:08:44.805 "seek_data": false, 00:08:44.805 "copy": true, 00:08:44.805 "nvme_iov_md": false 00:08:44.805 }, 00:08:44.805 "memory_domains": [ 00:08:44.805 { 00:08:44.805 "dma_device_id": "system", 00:08:44.805 "dma_device_type": 1 00:08:44.805 }, 00:08:44.805 { 00:08:44.805 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:44.805 "dma_device_type": 2 00:08:44.805 } 00:08:44.805 ], 00:08:44.805 "driver_specific": {} 00:08:44.805 } 00:08:44.805 ]' 00:08:44.805 23:52:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:44.805 23:52:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:44.805 23:52:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:44.805 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.805 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:44.805 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.805 23:52:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:44.805 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.805 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:44.805 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.805 23:52:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:44.805 23:52:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:44.805 23:52:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:44.805 00:08:44.805 real 0m0.078s 00:08:44.805 user 0m0.026s 00:08:44.805 sys 0m0.018s 00:08:44.805 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.805 23:52:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:44.805 ************************************ 00:08:44.805 END TEST rpc_plugins 00:08:44.805 ************************************ 00:08:44.805 23:52:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:44.805 23:52:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.805 23:52:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.805 23:52:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.805 ************************************ 00:08:44.805 START TEST rpc_trace_cmd_test 00:08:44.805 ************************************ 00:08:44.805 23:52:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:08:44.805 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:44.805 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:44.805 23:52:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.805 23:52:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.805 23:52:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.805 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:44.805 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68930", 00:08:44.805 "tpoint_group_mask": "0x8", 00:08:44.805 "iscsi_conn": { 00:08:44.805 "mask": "0x2", 00:08:44.805 "tpoint_mask": "0x0" 00:08:44.805 }, 00:08:44.805 "scsi": { 00:08:44.805 "mask": "0x4", 00:08:44.805 "tpoint_mask": "0x0" 00:08:44.805 }, 00:08:44.805 "bdev": { 00:08:44.805 "mask": "0x8", 00:08:44.805 "tpoint_mask": "0xffffffffffffffff" 00:08:44.805 }, 00:08:44.805 "nvmf_rdma": { 00:08:44.805 "mask": "0x10", 00:08:44.805 "tpoint_mask": "0x0" 00:08:44.805 }, 00:08:44.805 "nvmf_tcp": { 00:08:44.805 "mask": "0x20", 00:08:44.805 "tpoint_mask": "0x0" 00:08:44.805 }, 00:08:44.805 "ftl": { 00:08:44.805 "mask": "0x40", 00:08:44.805 "tpoint_mask": "0x0" 00:08:44.806 }, 00:08:44.806 "blobfs": { 00:08:44.806 "mask": "0x80", 00:08:44.806 
"tpoint_mask": "0x0" 00:08:44.806 }, 00:08:44.806 "dsa": { 00:08:44.806 "mask": "0x200", 00:08:44.806 "tpoint_mask": "0x0" 00:08:44.806 }, 00:08:44.806 "thread": { 00:08:44.806 "mask": "0x400", 00:08:44.806 "tpoint_mask": "0x0" 00:08:44.806 }, 00:08:44.806 "nvme_pcie": { 00:08:44.806 "mask": "0x800", 00:08:44.806 "tpoint_mask": "0x0" 00:08:44.806 }, 00:08:44.806 "iaa": { 00:08:44.806 "mask": "0x1000", 00:08:44.806 "tpoint_mask": "0x0" 00:08:44.806 }, 00:08:44.806 "nvme_tcp": { 00:08:44.806 "mask": "0x2000", 00:08:44.806 "tpoint_mask": "0x0" 00:08:44.806 }, 00:08:44.806 "bdev_nvme": { 00:08:44.806 "mask": "0x4000", 00:08:44.806 "tpoint_mask": "0x0" 00:08:44.806 }, 00:08:44.806 "sock": { 00:08:44.806 "mask": "0x8000", 00:08:44.806 "tpoint_mask": "0x0" 00:08:44.806 } 00:08:44.806 }' 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:44.806 00:08:44.806 real 0m0.070s 00:08:44.806 user 0m0.033s 00:08:44.806 sys 0m0.030s 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.806 ************************************ 00:08:44.806 END TEST rpc_trace_cmd_test 00:08:44.806 ************************************ 00:08:44.806 23:52:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.806 23:52:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:44.806 23:52:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:44.806 23:52:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:44.806 23:52:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.806 23:52:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.806 23:52:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.806 ************************************ 00:08:44.806 START TEST rpc_daemon_integrity 00:08:44.806 ************************************ 00:08:44.806 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:44.806 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:44.806 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.806 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.806 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.806 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:44.806 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:44.806 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:44.806 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:44.806 23:52:40 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.806 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:45.098 { 00:08:45.098 "name": "Malloc2", 00:08:45.098 "aliases": [ 00:08:45.098 "ae19d143-6dbc-445c-a975-fbe710b242f0" 00:08:45.098 ], 00:08:45.098 "product_name": "Malloc disk", 00:08:45.098 "block_size": 512, 00:08:45.098 "num_blocks": 16384, 00:08:45.098 "uuid": "ae19d143-6dbc-445c-a975-fbe710b242f0", 00:08:45.098 "assigned_rate_limits": { 00:08:45.098 "rw_ios_per_sec": 0, 00:08:45.098 "rw_mbytes_per_sec": 0, 00:08:45.098 "r_mbytes_per_sec": 0, 00:08:45.098 "w_mbytes_per_sec": 0 00:08:45.098 }, 00:08:45.098 "claimed": false, 00:08:45.098 "zoned": false, 00:08:45.098 "supported_io_types": { 00:08:45.098 "read": true, 00:08:45.098 "write": true, 00:08:45.098 "unmap": true, 00:08:45.098 "flush": true, 00:08:45.098 "reset": true, 00:08:45.098 "nvme_admin": false, 00:08:45.098 "nvme_io": false, 00:08:45.098 "nvme_io_md": false, 00:08:45.098 "write_zeroes": true, 00:08:45.098 "zcopy": true, 00:08:45.098 "get_zone_info": false, 00:08:45.098 "zone_management": false, 00:08:45.098 "zone_append": false, 00:08:45.098 "compare": false, 00:08:45.098 "compare_and_write": false, 00:08:45.098 "abort": true, 00:08:45.098 "seek_hole": false, 00:08:45.098 "seek_data": false, 00:08:45.098 "copy": true, 00:08:45.098 "nvme_iov_md": false 00:08:45.098 }, 00:08:45.098 "memory_domains": [ 00:08:45.098 { 00:08:45.098 "dma_device_id": "system", 00:08:45.098 "dma_device_type": 1 00:08:45.098 }, 00:08:45.098 { 00:08:45.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.098 "dma_device_type": 2 00:08:45.098 } 00:08:45.098 ], 00:08:45.098 "driver_specific": {} 00:08:45.098 } 00:08:45.098 ]' 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.098 [2024-07-24 23:52:40.729627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:45.098 [2024-07-24 23:52:40.729738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.098 [2024-07-24 23:52:40.729788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:08:45.098 [2024-07-24 23:52:40.729804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.098 [2024-07-24 23:52:40.732372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.098 [2024-07-24 23:52:40.732427] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:45.098 Passthru0 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:45.098 { 00:08:45.098 "name": "Malloc2", 00:08:45.098 "aliases": [ 00:08:45.098 "ae19d143-6dbc-445c-a975-fbe710b242f0" 00:08:45.098 ], 00:08:45.098 "product_name": "Malloc disk", 00:08:45.098 "block_size": 512, 00:08:45.098 "num_blocks": 16384, 00:08:45.098 "uuid": "ae19d143-6dbc-445c-a975-fbe710b242f0", 00:08:45.098 "assigned_rate_limits": { 00:08:45.098 "rw_ios_per_sec": 0, 00:08:45.098 "rw_mbytes_per_sec": 0, 00:08:45.098 "r_mbytes_per_sec": 0, 00:08:45.098 "w_mbytes_per_sec": 0 00:08:45.098 }, 00:08:45.098 "claimed": true, 00:08:45.098 "claim_type": "exclusive_write", 00:08:45.098 "zoned": false, 00:08:45.098 "supported_io_types": { 00:08:45.098 "read": true, 00:08:45.098 "write": true, 00:08:45.098 "unmap": true, 00:08:45.098 "flush": true, 00:08:45.098 "reset": true, 00:08:45.098 "nvme_admin": false, 00:08:45.098 "nvme_io": false, 00:08:45.098 "nvme_io_md": false, 00:08:45.098 "write_zeroes": true, 00:08:45.098 "zcopy": true, 00:08:45.098 "get_zone_info": false, 00:08:45.098 "zone_management": false, 00:08:45.098 "zone_append": false, 00:08:45.098 "compare": false, 00:08:45.098 "compare_and_write": false, 00:08:45.098 "abort": true, 00:08:45.098 "seek_hole": false, 00:08:45.098 "seek_data": false, 00:08:45.098 "copy": true, 00:08:45.098 "nvme_iov_md": false 00:08:45.098 }, 00:08:45.098 "memory_domains": [ 00:08:45.098 { 00:08:45.098 "dma_device_id": "system", 00:08:45.098 "dma_device_type": 1 00:08:45.098 }, 00:08:45.098 { 00:08:45.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.098 "dma_device_type": 2 00:08:45.098 } 00:08:45.098 ], 00:08:45.098 "driver_specific": {} 00:08:45.098 }, 00:08:45.098 { 00:08:45.098 "name": "Passthru0", 00:08:45.098 "aliases": [ 00:08:45.098 "b766c187-f3a6-5cfe-aaf7-7d325b52c39a" 00:08:45.098 ], 00:08:45.098 "product_name": "passthru", 00:08:45.098 "block_size": 512, 00:08:45.098 "num_blocks": 16384, 00:08:45.098 "uuid": "b766c187-f3a6-5cfe-aaf7-7d325b52c39a", 00:08:45.098 "assigned_rate_limits": { 00:08:45.098 "rw_ios_per_sec": 0, 00:08:45.098 "rw_mbytes_per_sec": 0, 00:08:45.098 "r_mbytes_per_sec": 0, 00:08:45.098 "w_mbytes_per_sec": 0 00:08:45.098 }, 00:08:45.098 "claimed": false, 00:08:45.098 "zoned": false, 00:08:45.098 "supported_io_types": { 00:08:45.098 "read": true, 00:08:45.098 "write": true, 00:08:45.098 "unmap": true, 00:08:45.098 "flush": true, 00:08:45.098 "reset": true, 00:08:45.098 "nvme_admin": false, 00:08:45.098 "nvme_io": false, 00:08:45.098 "nvme_io_md": false, 00:08:45.098 "write_zeroes": true, 00:08:45.098 "zcopy": true, 00:08:45.098 "get_zone_info": false, 00:08:45.098 "zone_management": false, 00:08:45.098 "zone_append": false, 00:08:45.098 "compare": false, 00:08:45.098 "compare_and_write": false, 00:08:45.098 "abort": true, 00:08:45.098 "seek_hole": false, 00:08:45.098 "seek_data": false, 00:08:45.098 "copy": true, 00:08:45.098 "nvme_iov_md": false 00:08:45.098 }, 00:08:45.098 
"memory_domains": [ 00:08:45.098 { 00:08:45.098 "dma_device_id": "system", 00:08:45.098 "dma_device_type": 1 00:08:45.098 }, 00:08:45.098 { 00:08:45.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.098 "dma_device_type": 2 00:08:45.098 } 00:08:45.098 ], 00:08:45.098 "driver_specific": { 00:08:45.098 "passthru": { 00:08:45.098 "name": "Passthru0", 00:08:45.098 "base_bdev_name": "Malloc2" 00:08:45.098 } 00:08:45.098 } 00:08:45.098 } 00:08:45.098 ]' 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.098 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:45.099 00:08:45.099 real 0m0.188s 00:08:45.099 user 0m0.048s 00:08:45.099 sys 0m0.053s 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.099 23:52:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.099 ************************************ 00:08:45.099 END TEST rpc_daemon_integrity 00:08:45.099 ************************************ 00:08:45.099 23:52:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:45.099 23:52:40 rpc -- rpc/rpc.sh@84 -- # killprocess 68930 00:08:45.099 23:52:40 rpc -- common/autotest_common.sh@950 -- # '[' -z 68930 ']' 00:08:45.099 23:52:40 rpc -- common/autotest_common.sh@954 -- # kill -0 68930 00:08:45.099 23:52:40 rpc -- common/autotest_common.sh@955 -- # uname 00:08:45.099 23:52:40 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.099 23:52:40 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68930 00:08:45.099 23:52:40 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.099 23:52:40 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.099 killing process with pid 68930 00:08:45.099 23:52:40 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68930' 00:08:45.099 23:52:40 rpc -- common/autotest_common.sh@969 -- # kill 68930 00:08:45.099 23:52:40 rpc -- common/autotest_common.sh@974 -- # wait 68930 00:08:47.002 00:08:47.002 real 0m3.708s 00:08:47.002 user 0m3.888s 
00:08:47.002 sys 0m0.779s 00:08:47.002 23:52:42 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.002 23:52:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.002 ************************************ 00:08:47.002 END TEST rpc 00:08:47.002 ************************************ 00:08:47.002 23:52:42 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:47.002 23:52:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:47.002 23:52:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.002 23:52:42 -- common/autotest_common.sh@10 -- # set +x 00:08:47.002 ************************************ 00:08:47.002 START TEST skip_rpc 00:08:47.002 ************************************ 00:08:47.002 23:52:42 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:47.002 * Looking for test storage... 00:08:47.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:47.002 23:52:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:47.002 23:52:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:47.002 23:52:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:47.002 23:52:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:47.002 23:52:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.002 23:52:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.002 ************************************ 00:08:47.002 START TEST skip_rpc 00:08:47.002 ************************************ 00:08:47.002 23:52:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:08:47.002 23:52:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69140 00:08:47.002 23:52:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:47.002 23:52:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:47.002 23:52:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:47.261 [2024-07-24 23:52:42.948132] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
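[Annotation, not part of the captured log: skip_rpc boots the target with --no-rpc-server and then asserts that a JSON-RPC call cannot succeed. A hedged sketch of that assertion, using the binary path printed in the trace; the 5-second sleep mirrors the test's own delay before probing:]

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt=$!
    sleep 5                                             # give the app time to start
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
      echo 'FAIL: RPC answered although --no-rpc-server was set' >&2
    fi
    kill "$tgt"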
00:08:47.261 [2024-07-24 23:52:42.948313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69140 ] 00:08:47.261 [2024-07-24 23:52:43.120557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.519 [2024-07-24 23:52:43.274044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69140 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69140 ']' 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69140 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69140 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69140' 00:08:52.785 killing process with pid 69140 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69140 00:08:52.785 23:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69140 00:08:54.161 00:08:54.161 real 0m6.821s 00:08:54.161 user 0m6.401s 00:08:54.161 sys 0m0.346s 00:08:54.161 23:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.161 23:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.161 ************************************ 00:08:54.161 END TEST skip_rpc 00:08:54.161 
************************************ 00:08:54.161 23:52:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:54.161 23:52:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.161 23:52:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.161 23:52:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.161 ************************************ 00:08:54.161 START TEST skip_rpc_with_json 00:08:54.161 ************************************ 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69239 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69239 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69239 ']' 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.161 23:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:54.161 [2024-07-24 23:52:49.816921] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
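[Annotation, not part of the captured log: skip_rpc_with_json drives a live target over RPC, snapshots the resulting state with save_config, then restarts the target from that snapshot and greps its output for proof that the nvmf transport was re-created. A rough sketch of the round-trip, assuming the CONFIG_PATH and LOG_PATH values printed earlier in the trace and that the app logs its notices to stderr:]

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
    log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
    $rpc nvmf_create_transport -t tcp                   # make the saved state non-trivial
    $rpc save_config > "$cfg"                           # dump current config as JSON
    # ...stop the first target, then boot a fresh one straight from the snapshot:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$cfg" 2> "$log" &
    sleep 5
    grep -q 'TCP Transport Init' "$log"                 # transport came back from JSON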
00:08:54.161 [2024-07-24 23:52:49.817117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69239 ] 00:08:54.161 [2024-07-24 23:52:49.988009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.420 [2024-07-24 23:52:50.148561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.987 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.987 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:08:54.987 23:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:54.987 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.987 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:54.987 [2024-07-24 23:52:50.768268] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:54.987 request: 00:08:54.987 { 00:08:54.987 "trtype": "tcp", 00:08:54.987 "method": "nvmf_get_transports", 00:08:54.987 "req_id": 1 00:08:54.987 } 00:08:54.987 Got JSON-RPC error response 00:08:54.987 response: 00:08:54.987 { 00:08:54.987 "code": -19, 00:08:54.987 "message": "No such device" 00:08:54.987 } 00:08:54.987 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:54.987 23:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:54.987 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.987 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:54.988 [2024-07-24 23:52:50.780439] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.988 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.988 23:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:54.988 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.988 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:55.246 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.246 23:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:55.246 { 00:08:55.246 "subsystems": [ 00:08:55.246 { 00:08:55.246 "subsystem": "scheduler", 00:08:55.246 "config": [ 00:08:55.246 { 00:08:55.246 "method": "framework_set_scheduler", 00:08:55.246 "params": { 00:08:55.246 "name": "static" 00:08:55.246 } 00:08:55.246 } 00:08:55.246 ] 00:08:55.246 }, 00:08:55.246 { 00:08:55.246 "subsystem": "vmd", 00:08:55.246 "config": [] 00:08:55.246 }, 00:08:55.246 { 00:08:55.246 "subsystem": "sock", 00:08:55.246 "config": [ 00:08:55.246 { 00:08:55.246 "method": "sock_set_default_impl", 00:08:55.246 "params": { 00:08:55.246 "impl_name": "posix" 00:08:55.246 } 00:08:55.246 }, 00:08:55.246 { 00:08:55.246 "method": "sock_impl_set_options", 00:08:55.246 "params": { 00:08:55.246 "impl_name": "ssl", 00:08:55.246 "recv_buf_size": 4096, 00:08:55.246 "send_buf_size": 4096, 00:08:55.246 "enable_recv_pipe": true, 00:08:55.246 "enable_quickack": false, 00:08:55.246 "enable_placement_id": 0, 
00:08:55.246 "enable_zerocopy_send_server": true, 00:08:55.246 "enable_zerocopy_send_client": false, 00:08:55.246 "zerocopy_threshold": 0, 00:08:55.246 "tls_version": 0, 00:08:55.246 "enable_ktls": false 00:08:55.246 } 00:08:55.246 }, 00:08:55.246 { 00:08:55.246 "method": "sock_impl_set_options", 00:08:55.246 "params": { 00:08:55.246 "impl_name": "posix", 00:08:55.246 "recv_buf_size": 2097152, 00:08:55.246 "send_buf_size": 2097152, 00:08:55.246 "enable_recv_pipe": true, 00:08:55.246 "enable_quickack": false, 00:08:55.246 "enable_placement_id": 0, 00:08:55.246 "enable_zerocopy_send_server": true, 00:08:55.246 "enable_zerocopy_send_client": false, 00:08:55.246 "zerocopy_threshold": 0, 00:08:55.246 "tls_version": 0, 00:08:55.246 "enable_ktls": false 00:08:55.246 } 00:08:55.246 } 00:08:55.246 ] 00:08:55.246 }, 00:08:55.246 { 00:08:55.246 "subsystem": "iobuf", 00:08:55.246 "config": [ 00:08:55.246 { 00:08:55.246 "method": "iobuf_set_options", 00:08:55.246 "params": { 00:08:55.246 "small_pool_count": 8192, 00:08:55.246 "large_pool_count": 1024, 00:08:55.246 "small_bufsize": 8192, 00:08:55.246 "large_bufsize": 135168 00:08:55.246 } 00:08:55.246 } 00:08:55.246 ] 00:08:55.246 }, 00:08:55.246 { 00:08:55.246 "subsystem": "keyring", 00:08:55.246 "config": [] 00:08:55.246 }, 00:08:55.246 { 00:08:55.246 "subsystem": "accel", 00:08:55.246 "config": [ 00:08:55.246 { 00:08:55.246 "method": "accel_set_options", 00:08:55.246 "params": { 00:08:55.246 "small_cache_size": 128, 00:08:55.246 "large_cache_size": 16, 00:08:55.246 "task_count": 2048, 00:08:55.246 "sequence_count": 2048, 00:08:55.246 "buf_count": 2048 00:08:55.246 } 00:08:55.247 } 00:08:55.247 ] 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "subsystem": "bdev", 00:08:55.247 "config": [ 00:08:55.247 { 00:08:55.247 "method": "bdev_set_options", 00:08:55.247 "params": { 00:08:55.247 "bdev_io_pool_size": 65535, 00:08:55.247 "bdev_io_cache_size": 256, 00:08:55.247 "bdev_auto_examine": true, 00:08:55.247 "iobuf_small_cache_size": 128, 00:08:55.247 "iobuf_large_cache_size": 16 00:08:55.247 } 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "method": "bdev_raid_set_options", 00:08:55.247 "params": { 00:08:55.247 "process_window_size_kb": 1024, 00:08:55.247 "process_max_bandwidth_mb_sec": 0 00:08:55.247 } 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "method": "bdev_nvme_set_options", 00:08:55.247 "params": { 00:08:55.247 "action_on_timeout": "none", 00:08:55.247 "timeout_us": 0, 00:08:55.247 "timeout_admin_us": 0, 00:08:55.247 "keep_alive_timeout_ms": 10000, 00:08:55.247 "arbitration_burst": 0, 00:08:55.247 "low_priority_weight": 0, 00:08:55.247 "medium_priority_weight": 0, 00:08:55.247 "high_priority_weight": 0, 00:08:55.247 "nvme_adminq_poll_period_us": 10000, 00:08:55.247 "nvme_ioq_poll_period_us": 0, 00:08:55.247 "io_queue_requests": 0, 00:08:55.247 "delay_cmd_submit": true, 00:08:55.247 "transport_retry_count": 4, 00:08:55.247 "bdev_retry_count": 3, 00:08:55.247 "transport_ack_timeout": 0, 00:08:55.247 "ctrlr_loss_timeout_sec": 0, 00:08:55.247 "reconnect_delay_sec": 0, 00:08:55.247 "fast_io_fail_timeout_sec": 0, 00:08:55.247 "disable_auto_failback": false, 00:08:55.247 "generate_uuids": false, 00:08:55.247 "transport_tos": 0, 00:08:55.247 "nvme_error_stat": false, 00:08:55.247 "rdma_srq_size": 0, 00:08:55.247 "io_path_stat": false, 00:08:55.247 "allow_accel_sequence": false, 00:08:55.247 "rdma_max_cq_size": 0, 00:08:55.247 "rdma_cm_event_timeout_ms": 0, 00:08:55.247 "dhchap_digests": [ 00:08:55.247 "sha256", 00:08:55.247 "sha384", 00:08:55.247 "sha512" 
00:08:55.247 ], 00:08:55.247 "dhchap_dhgroups": [ 00:08:55.247 "null", 00:08:55.247 "ffdhe2048", 00:08:55.247 "ffdhe3072", 00:08:55.247 "ffdhe4096", 00:08:55.247 "ffdhe6144", 00:08:55.247 "ffdhe8192" 00:08:55.247 ] 00:08:55.247 } 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "method": "bdev_nvme_set_hotplug", 00:08:55.247 "params": { 00:08:55.247 "period_us": 100000, 00:08:55.247 "enable": false 00:08:55.247 } 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "method": "bdev_iscsi_set_options", 00:08:55.247 "params": { 00:08:55.247 "timeout_sec": 30 00:08:55.247 } 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "method": "bdev_wait_for_examine" 00:08:55.247 } 00:08:55.247 ] 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "subsystem": "nvmf", 00:08:55.247 "config": [ 00:08:55.247 { 00:08:55.247 "method": "nvmf_set_config", 00:08:55.247 "params": { 00:08:55.247 "discovery_filter": "match_any", 00:08:55.247 "admin_cmd_passthru": { 00:08:55.247 "identify_ctrlr": false 00:08:55.247 } 00:08:55.247 } 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "method": "nvmf_set_max_subsystems", 00:08:55.247 "params": { 00:08:55.247 "max_subsystems": 1024 00:08:55.247 } 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "method": "nvmf_set_crdt", 00:08:55.247 "params": { 00:08:55.247 "crdt1": 0, 00:08:55.247 "crdt2": 0, 00:08:55.247 "crdt3": 0 00:08:55.247 } 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "method": "nvmf_create_transport", 00:08:55.247 "params": { 00:08:55.247 "trtype": "TCP", 00:08:55.247 "max_queue_depth": 128, 00:08:55.247 "max_io_qpairs_per_ctrlr": 127, 00:08:55.247 "in_capsule_data_size": 4096, 00:08:55.247 "max_io_size": 131072, 00:08:55.247 "io_unit_size": 131072, 00:08:55.247 "max_aq_depth": 128, 00:08:55.247 "num_shared_buffers": 511, 00:08:55.247 "buf_cache_size": 4294967295, 00:08:55.247 "dif_insert_or_strip": false, 00:08:55.247 "zcopy": false, 00:08:55.247 "c2h_success": true, 00:08:55.247 "sock_priority": 0, 00:08:55.247 "abort_timeout_sec": 1, 00:08:55.247 "ack_timeout": 0, 00:08:55.247 "data_wr_pool_size": 0 00:08:55.247 } 00:08:55.247 } 00:08:55.247 ] 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "subsystem": "nbd", 00:08:55.247 "config": [] 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "subsystem": "ublk", 00:08:55.247 "config": [] 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "subsystem": "vhost_blk", 00:08:55.247 "config": [] 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "subsystem": "scsi", 00:08:55.247 "config": null 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "subsystem": "iscsi", 00:08:55.247 "config": [ 00:08:55.247 { 00:08:55.247 "method": "iscsi_set_options", 00:08:55.247 "params": { 00:08:55.247 "node_base": "iqn.2016-06.io.spdk", 00:08:55.247 "max_sessions": 128, 00:08:55.247 "max_connections_per_session": 2, 00:08:55.247 "max_queue_depth": 64, 00:08:55.247 "default_time2wait": 2, 00:08:55.247 "default_time2retain": 20, 00:08:55.247 "first_burst_length": 8192, 00:08:55.247 "immediate_data": true, 00:08:55.247 "allow_duplicated_isid": false, 00:08:55.247 "error_recovery_level": 0, 00:08:55.247 "nop_timeout": 60, 00:08:55.247 "nop_in_interval": 30, 00:08:55.247 "disable_chap": false, 00:08:55.247 "require_chap": false, 00:08:55.247 "mutual_chap": false, 00:08:55.247 "chap_group": 0, 00:08:55.247 "max_large_datain_per_connection": 64, 00:08:55.247 "max_r2t_per_connection": 4, 00:08:55.247 "pdu_pool_size": 36864, 00:08:55.247 "immediate_data_pool_size": 16384, 00:08:55.247 "data_out_pool_size": 2048 00:08:55.247 } 00:08:55.247 } 00:08:55.247 ] 00:08:55.247 }, 00:08:55.247 { 00:08:55.247 "subsystem": 
"vhost_scsi", 00:08:55.247 "config": [] 00:08:55.247 } 00:08:55.247 ] 00:08:55.247 } 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69239 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69239 ']' 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69239 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69239 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.247 killing process with pid 69239 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69239' 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69239 00:08:55.247 23:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69239 00:08:57.150 23:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69284 00:08:57.150 23:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:57.150 23:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:02.416 23:52:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69284 00:09:02.416 23:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69284 ']' 00:09:02.416 23:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69284 00:09:02.416 23:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:09:02.416 23:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.416 23:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69284 00:09:02.416 23:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.416 23:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.416 killing process with pid 69284 00:09:02.416 23:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69284' 00:09:02.416 23:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69284 00:09:02.416 23:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69284 00:09:03.789 23:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:03.789 23:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:03.789 00:09:03.789 real 0m9.907s 00:09:03.789 user 0m9.529s 00:09:03.789 sys 0m0.790s 00:09:03.789 23:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.789 23:52:59 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.789 ************************************ 00:09:03.789 END TEST skip_rpc_with_json 00:09:03.789 ************************************ 00:09:04.047 23:52:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:04.047 23:52:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:04.047 23:52:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.047 23:52:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.047 ************************************ 00:09:04.047 START TEST skip_rpc_with_delay 00:09:04.047 ************************************ 00:09:04.047 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:09:04.047 23:52:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:04.047 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:09:04.047 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:04.047 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:04.047 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:04.048 [2024-07-24 23:52:59.786470] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
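[Annotation, not part of the captured log: the *ERROR* above is the expected outcome. --wait-for-rpc tells the app to pause startup until an RPC arrives, which is contradictory with --no-rpc-server, so spdk_tgt must refuse to start and exit non-zero. A minimal sketch of the assertion, with the binary path taken from the trace:]

    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'FAIL: contradictory flags were accepted' >&2
      exit 1
    fi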
00:09:04.048 [2024-07-24 23:52:59.786661] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:04.048 00:09:04.048 real 0m0.145s 00:09:04.048 user 0m0.078s 00:09:04.048 sys 0m0.067s 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.048 23:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:04.048 ************************************ 00:09:04.048 END TEST skip_rpc_with_delay 00:09:04.048 ************************************ 00:09:04.048 23:52:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:04.048 23:52:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:04.048 23:52:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:04.048 23:52:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:04.048 23:52:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.048 23:52:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.048 ************************************ 00:09:04.048 START TEST exit_on_failed_rpc_init 00:09:04.048 ************************************ 00:09:04.048 23:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:09:04.048 23:52:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69407 00:09:04.048 23:52:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69407 00:09:04.048 23:52:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:04.048 23:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69407 ']' 00:09:04.048 23:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.048 23:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.048 23:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.048 23:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.048 23:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:04.306 [2024-07-24 23:52:59.987251] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
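[Annotation, not part of the captured log: exit_on_failed_rpc_init starts this first target on the default /var/tmp/spdk.sock, then launches a second one that must fail because the socket is already taken. A hedged sketch of the collision; the real test uses a waitforlisten helper rather than a fixed sleep:]

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                                     # first target owns the socket
    first=$!
    sleep 1                                             # assumption: enough time to listen
    if "$tgt" -m 0x2; then                              # second target must exit non-zero
      echo 'FAIL: second target ignored the socket collision' >&2
    fi
    kill "$first"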
00:09:04.306 [2024-07-24 23:52:59.987450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69407 ] 00:09:04.306 [2024-07-24 23:53:00.161680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.564 [2024-07-24 23:53:00.322040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:05.130 23:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:05.388 [2024-07-24 23:53:01.048605] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:09:05.388 [2024-07-24 23:53:01.048821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69425 ] 00:09:05.388 [2024-07-24 23:53:01.227943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.645 [2024-07-24 23:53:01.441615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.646 [2024-07-24 23:53:01.441743] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:05.646 [2024-07-24 23:53:01.441777] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:05.646 [2024-07-24 23:53:01.441809] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69407 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69407 ']' 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69407 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69407 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:06.211 killing process with pid 69407 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69407' 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69407 00:09:06.211 23:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69407 00:09:08.750 00:09:08.750 real 0m4.096s 00:09:08.750 user 0m4.779s 00:09:08.750 sys 0m0.602s 00:09:08.750 23:53:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.750 ************************************ 00:09:08.750 END TEST exit_on_failed_rpc_init 00:09:08.750 ************************************ 00:09:08.750 23:53:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:08.750 23:53:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:08.750 00:09:08.750 real 0m21.269s 00:09:08.750 user 0m20.881s 00:09:08.750 sys 0m1.998s 00:09:08.750 23:53:04 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.750 23:53:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.750 ************************************ 00:09:08.750 END TEST skip_rpc 00:09:08.750 ************************************ 00:09:08.750 23:53:04 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:08.750 23:53:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.750 23:53:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.750 23:53:04 -- common/autotest_common.sh@10 -- # set +x 00:09:08.750 
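[Annotation, not part of the captured log: the rpc_client case that starts below is the simplest in the suite; it executes a prebuilt test binary and trusts its exit status, printing OK on success. A sketch of the wrapper, with run_test-style timing approximated by the shell builtin:]

    bin=/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
    time "$bin"                                         # non-zero exit fails the suite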
************************************ 00:09:08.750 START TEST rpc_client 00:09:08.750 ************************************ 00:09:08.750 23:53:04 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:08.750 * Looking for test storage... 00:09:08.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:08.750 23:53:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:08.750 OK 00:09:08.750 23:53:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:08.750 00:09:08.750 real 0m0.138s 00:09:08.750 user 0m0.054s 00:09:08.750 sys 0m0.091s 00:09:08.750 23:53:04 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.750 23:53:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:08.750 ************************************ 00:09:08.750 END TEST rpc_client 00:09:08.750 ************************************ 00:09:08.750 23:53:04 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:08.750 23:53:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.750 23:53:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.750 23:53:04 -- common/autotest_common.sh@10 -- # set +x 00:09:08.750 ************************************ 00:09:08.750 START TEST json_config 00:09:08.750 ************************************ 00:09:08.750 23:53:04 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:08.750 23:53:04 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bc9a5b99-1c11-456e-aaab-d1f1a68fbb44 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=bc9a5b99-1c11-456e-aaab-d1f1a68fbb44 00:09:08.750 23:53:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.751 23:53:04 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.751 23:53:04 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.751 23:53:04 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.751 23:53:04 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:08.751 23:53:04 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:08.751 23:53:04 json_config -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:08.751 23:53:04 json_config -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:08.751 23:53:04 json_config -- paths/export.sh@6 -- # export PATH 00:09:08.751 23:53:04 json_config -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@47 -- # : 0 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.751 23:53:04 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.751 23:53:04 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:08.751 INFO: JSON configuration test init 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:09:08.751 23:53:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:08.751 23:53:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:09:08.751 23:53:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:08.751 23:53:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:08.751 23:53:04 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:09:08.751 23:53:04 json_config -- json_config/common.sh@9 -- # local app=target 00:09:08.751 23:53:04 json_config -- json_config/common.sh@10 -- # shift 00:09:08.751 23:53:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:08.751 23:53:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:08.751 23:53:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:08.751 23:53:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:08.751 23:53:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:08.751 23:53:04 json_config -- 
json_config/common.sh@22 -- # app_pid["$app"]=69572 00:09:08.751 23:53:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:08.751 Waiting for target to run... 00:09:08.751 23:53:04 json_config -- json_config/common.sh@25 -- # waitforlisten 69572 /var/tmp/spdk_tgt.sock 00:09:08.751 23:53:04 json_config -- common/autotest_common.sh@831 -- # '[' -z 69572 ']' 00:09:08.751 23:53:04 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:08.751 23:53:04 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:08.751 23:53:04 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:08.751 23:53:04 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:08.751 23:53:04 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.751 23:53:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:08.751 [2024-07-24 23:53:04.457580] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:09:08.751 [2024-07-24 23:53:04.457758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69572 ] 00:09:09.017 [2024-07-24 23:53:04.813109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.276 [2024-07-24 23:53:04.986239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.535 23:53:05 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.535 23:53:05 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:09.535 00:09:09.535 23:53:05 json_config -- json_config/common.sh@26 -- # echo '' 00:09:09.535 23:53:05 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:09:09.535 23:53:05 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:09:09.535 23:53:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.535 23:53:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:09.535 23:53:05 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:09:09.535 23:53:05 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:09:09.535 23:53:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.535 23:53:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:09.794 23:53:05 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:09.794 23:53:05 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:09:09.794 23:53:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:10.729 23:53:06 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:09:10.729 23:53:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:10.729 23:53:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.729 23:53:06 
json_config -- common/autotest_common.sh@10 -- # set +x 00:09:10.729 23:53:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:10.730 23:53:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@48 -- # local get_types 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@51 -- # sort 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:09:10.730 23:53:06 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:09:10.730 23:53:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:10.730 23:53:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@59 -- # return 0 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:09:10.988 23:53:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.988 23:53:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:09:10.988 23:53:06 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:10.988 23:53:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:11.246 23:53:06 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:09:11.246 
23:53:06 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:11.246 23:53:06 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:11.246 23:53:06 json_config -- json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:09:11.246 23:53:06 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:09:11.246 23:53:06 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:11.247 23:53:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:11.505 Nvme0n1p0 Nvme0n1p1 00:09:11.505 23:53:07 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:11.505 23:53:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:11.764 [2024-07-24 23:53:07.412698] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:11.764 [2024-07-24 23:53:07.412846] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:11.764 00:09:11.764 23:53:07 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:11.764 23:53:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:12.023 Malloc3 00:09:12.023 23:53:07 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:12.023 23:53:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:12.023 [2024-07-24 23:53:07.873615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:12.023 [2024-07-24 23:53:07.873728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.023 [2024-07-24 23:53:07.873763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:09:12.023 [2024-07-24 23:53:07.873777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.023 [2024-07-24 23:53:07.876589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.023 [2024-07-24 23:53:07.876649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:12.023 PTBdevFromMalloc3 00:09:12.023 23:53:07 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:12.023 23:53:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:12.282 Null0 00:09:12.282 23:53:08 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:12.282 23:53:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:12.541 Malloc0 00:09:12.541 23:53:08 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:12.541 23:53:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 
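Note: this whole configuration phase is driven through scripts/rpc.py against the target's UNIX domain socket; the trace splits Nvme0n1 and builds malloc, passthru, and null bdevs one RPC at a time. A representative subset of the same calls, runnable by hand with the socket and paths used in this run:

    sock=/var/tmp/spdk_tgt.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc -s $sock bdev_split_create Nvme0n1 2                  # yields Nvme0n1p0 and Nvme0n1p1
    $rpc -s $sock bdev_malloc_create 8 4096 --name Malloc3     # 8 MiB bdev, 4096-byte blocks
    $rpc -s $sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
    $rpc -s $sock bdev_null_create Null0 32 512
    $rpc -s $sock bdev_malloc_create 32 512 --name Malloc0
    $rpc -s $sock bdev_malloc_create 16 4096 --name Malloc1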
00:09:12.800 Malloc1 00:09:12.800 23:53:08 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:12.800 23:53:08 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:13.059 102400+0 records in 00:09:13.059 102400+0 records out 00:09:13.059 104857600 bytes (105 MB, 100 MiB) copied, 0.28266 s, 371 MB/s 00:09:13.059 23:53:08 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:13.059 23:53:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:13.318 aio_disk 00:09:13.319 23:53:09 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:13.319 23:53:09 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:13.319 23:53:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:13.578 64bd1eef-b444-484f-b967-8841baade3f0 00:09:13.578 23:53:09 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:13.578 23:53:09 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:13.578 23:53:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:13.837 23:53:09 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:13.837 23:53:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:14.097 23:53:09 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:14.097 23:53:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:14.356 23:53:10 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:14.356 23:53:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:14.356 23:53:10 json_config -- json_config/json_config.sh@161 -- # [[ 0 -eq 1 ]] 00:09:14.356 23:53:10 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:09:14.356 23:53:10 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 
bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:423a1a36-da86-423d-bdd3-ef8e122afac7 bdev_register:75588d6d-3cdf-4e28-8274-0cc8aa53f920 bdev_register:b15b6ce5-fbcf-4594-bcab-c17b37d46b0c bdev_register:d756913a-201e-47b5-a891-a8e3bbdd4f21 00:09:14.356 23:53:10 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:09:14.356 23:53:10 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:09:14.356 23:53:10 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:14.356 23:53:10 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:423a1a36-da86-423d-bdd3-ef8e122afac7 bdev_register:75588d6d-3cdf-4e28-8274-0cc8aa53f920 bdev_register:b15b6ce5-fbcf-4594-bcab-c17b37d46b0c bdev_register:d756913a-201e-47b5-a891-a8e3bbdd4f21 00:09:14.356 23:53:10 json_config -- json_config/json_config.sh@75 -- # sort 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@76 -- # sort 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:09:14.616 23:53:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- 
json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:423a1a36-da86-423d-bdd3-ef8e122afac7 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:75588d6d-3cdf-4e28-8274-0cc8aa53f920 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:b15b6ce5-fbcf-4594-bcab-c17b37d46b0c 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:d756913a-201e-47b5-a891-a8e3bbdd4f21 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:423a1a36-da86-423d-bdd3-ef8e122afac7 bdev_register:75588d6d-3cdf-4e28-8274-0cc8aa53f920 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 
bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b15b6ce5-fbcf-4594-bcab-c17b37d46b0c bdev_register:d756913a-201e-47b5-a891-a8e3bbdd4f21 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\2\3\a\1\a\3\6\-\d\a\8\6\-\4\2\3\d\-\b\d\d\3\-\e\f\8\e\1\2\2\a\f\a\c\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\5\5\8\8\d\6\d\-\3\c\d\f\-\4\e\2\8\-\8\2\7\4\-\0\c\c\8\a\a\5\3\f\9\2\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\1\5\b\6\c\e\5\-\f\b\c\f\-\4\5\9\4\-\b\c\a\b\-\c\1\7\b\3\7\d\4\6\b\0\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\7\5\6\9\1\3\a\-\2\0\1\e\-\4\7\b\5\-\a\8\9\1\-\a\8\e\3\b\b\d\d\4\f\2\1 ]] 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@90 -- # cat 00:09:14.616 23:53:10 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:423a1a36-da86-423d-bdd3-ef8e122afac7 bdev_register:75588d6d-3cdf-4e28-8274-0cc8aa53f920 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b15b6ce5-fbcf-4594-bcab-c17b37d46b0c bdev_register:d756913a-201e-47b5-a891-a8e3bbdd4f21 00:09:14.616 Expected events matched: 00:09:14.616 bdev_register:423a1a36-da86-423d-bdd3-ef8e122afac7 00:09:14.616 bdev_register:75588d6d-3cdf-4e28-8274-0cc8aa53f920 00:09:14.616 bdev_register:Malloc0 00:09:14.616 bdev_register:Malloc0p0 00:09:14.616 bdev_register:Malloc0p1 00:09:14.616 bdev_register:Malloc0p2 00:09:14.617 bdev_register:Malloc1 00:09:14.617 bdev_register:Malloc3 00:09:14.617 bdev_register:Null0 00:09:14.617 bdev_register:Nvme0n1 00:09:14.617 bdev_register:Nvme0n1p0 00:09:14.617 bdev_register:Nvme0n1p1 00:09:14.617 bdev_register:PTBdevFromMalloc3 00:09:14.617 bdev_register:aio_disk 00:09:14.617 bdev_register:b15b6ce5-fbcf-4594-bcab-c17b37d46b0c 00:09:14.617 bdev_register:d756913a-201e-47b5-a891-a8e3bbdd4f21 00:09:14.617 23:53:10 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:09:14.617 23:53:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.617 23:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:14.875 23:53:10 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:09:14.875 23:53:10 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:09:14.875 23:53:10 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:09:14.876 23:53:10 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:09:14.876 23:53:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.876 23:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:14.876 23:53:10 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 
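Note: the "Expected events matched" list above comes from comparing two sorted streams — the bdev_register events the test expects and the notifications the target actually recorded. The recorded side is fetched with the notify_get_notifications RPC and a jq filter, both visible in the trace. A condensed version of the check (simplified: the real code reads type:ctx:id triples with IFS=: and drops the event id):

    sock=/var/tmp/spdk_tgt.sock
    recorded=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock notify_get_notifications -i 0 \
        | jq -r '.[] | "\(.type):\(.ctx)"' \
        | sort)
    expected=$(printf '%s\n' "$@" | sort)          # "$@" holds the expected bdev_register:* entries
    [[ "$recorded" == "$expected" ]] || return 1   # any difference fails the test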
00:09:14.876 23:53:10 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:14.876 23:53:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:15.135 MallocBdevForConfigChangeCheck 00:09:15.135 23:53:10 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:09:15.135 23:53:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:15.135 23:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:15.135 23:53:10 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:09:15.135 23:53:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:15.394 23:53:11 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:09:15.394 INFO: shutting down applications... 00:09:15.394 23:53:11 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:09:15.394 23:53:11 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:09:15.394 23:53:11 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:09:15.394 23:53:11 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:15.653 [2024-07-24 23:53:11.371335] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:09:15.911 Calling clear_vhost_scsi_subsystem 00:09:15.911 Calling clear_iscsi_subsystem 00:09:15.911 Calling clear_vhost_blk_subsystem 00:09:15.911 Calling clear_ublk_subsystem 00:09:15.911 Calling clear_nbd_subsystem 00:09:15.911 Calling clear_nvmf_subsystem 00:09:15.911 Calling clear_bdev_subsystem 00:09:15.912 23:53:11 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:15.912 23:53:11 json_config -- json_config/json_config.sh@347 -- # count=100 00:09:15.912 23:53:11 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:09:15.912 23:53:11 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:15.912 23:53:11 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:15.912 23:53:11 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:16.171 23:53:11 json_config -- json_config/json_config.sh@349 -- # break 00:09:16.171 23:53:11 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:09:16.171 23:53:11 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:09:16.171 23:53:11 json_config -- json_config/common.sh@31 -- # local app=target 00:09:16.171 23:53:11 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:16.171 23:53:11 json_config -- json_config/common.sh@35 -- # [[ -n 69572 ]] 00:09:16.171 23:53:11 json_config -- json_config/common.sh@38 -- # kill -SIGINT 69572 00:09:16.171 23:53:11 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:16.171 23:53:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 
00:09:16.171 23:53:11 json_config -- json_config/common.sh@41 -- # kill -0 69572 00:09:16.171 23:53:11 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:16.738 23:53:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:16.738 23:53:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:16.738 23:53:12 json_config -- json_config/common.sh@41 -- # kill -0 69572 00:09:16.738 23:53:12 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:17.308 23:53:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:17.308 23:53:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:17.308 23:53:12 json_config -- json_config/common.sh@41 -- # kill -0 69572 00:09:17.308 23:53:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:17.308 23:53:12 json_config -- json_config/common.sh@43 -- # break 00:09:17.308 23:53:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:17.308 SPDK target shutdown done 00:09:17.308 INFO: relaunching applications... 00:09:17.308 23:53:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:17.308 23:53:12 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:09:17.308 23:53:12 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:17.308 23:53:12 json_config -- json_config/common.sh@9 -- # local app=target 00:09:17.308 23:53:12 json_config -- json_config/common.sh@10 -- # shift 00:09:17.308 23:53:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:17.308 23:53:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:17.308 23:53:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:17.308 23:53:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:17.308 23:53:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:17.308 Waiting for target to run... 00:09:17.308 23:53:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69818 00:09:17.308 23:53:12 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:17.308 23:53:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:17.308 23:53:12 json_config -- json_config/common.sh@25 -- # waitforlisten 69818 /var/tmp/spdk_tgt.sock 00:09:17.308 23:53:12 json_config -- common/autotest_common.sh@831 -- # '[' -z 69818 ']' 00:09:17.308 23:53:12 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:17.308 23:53:12 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:17.308 23:53:12 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:17.308 23:53:12 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.308 23:53:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.308 [2024-07-24 23:53:13.042762] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:09:17.308 [2024-07-24 23:53:13.042970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69818 ] 00:09:17.567 [2024-07-24 23:53:13.372521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.825 [2024-07-24 23:53:13.513158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.392 [2024-07-24 23:53:14.115638] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:18.392 [2024-07-24 23:53:14.115716] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:18.392 [2024-07-24 23:53:14.123635] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:18.392 [2024-07-24 23:53:14.123695] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:18.392 [2024-07-24 23:53:14.131658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:18.392 [2024-07-24 23:53:14.131710] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:18.392 [2024-07-24 23:53:14.131734] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:18.392 [2024-07-24 23:53:14.227900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:18.392 [2024-07-24 23:53:14.227970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.392 [2024-07-24 23:53:14.227996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:09:18.392 [2024-07-24 23:53:14.228008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.392 [2024-07-24 23:53:14.228488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.392 [2024-07-24 23:53:14.228518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:18.651 00:09:18.651 INFO: Checking if target configuration is the same... 00:09:18.651 23:53:14 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.651 23:53:14 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:18.651 23:53:14 json_config -- json_config/common.sh@26 -- # echo '' 00:09:18.651 23:53:14 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:09:18.651 23:53:14 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:18.651 23:53:14 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:18.651 23:53:14 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:09:18.651 23:53:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:18.651 + '[' 2 -ne 2 ']' 00:09:18.651 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:18.651 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:18.651 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:18.651 +++ basename /dev/fd/62 00:09:18.651 ++ mktemp /tmp/62.XXX 00:09:18.651 + tmp_file_1=/tmp/62.Xwo 00:09:18.651 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:18.651 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:18.651 + tmp_file_2=/tmp/spdk_tgt_config.json.O5q 00:09:18.651 + ret=0 00:09:18.651 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:18.910 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:19.169 + diff -u /tmp/62.Xwo /tmp/spdk_tgt_config.json.O5q 00:09:19.169 INFO: JSON config files are the same 00:09:19.169 + echo 'INFO: JSON config files are the same' 00:09:19.169 + rm /tmp/62.Xwo /tmp/spdk_tgt_config.json.O5q 00:09:19.169 + exit 0 00:09:19.169 INFO: changing configuration and checking if this can be detected... 00:09:19.169 23:53:14 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:09:19.169 23:53:14 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:19.169 23:53:14 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:19.169 23:53:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:19.428 23:53:15 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:09:19.428 23:53:15 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:19.428 23:53:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:19.428 + '[' 2 -ne 2 ']' 00:09:19.428 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:19.428 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:19.428 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:19.428 +++ basename /dev/fd/62 00:09:19.428 ++ mktemp /tmp/62.XXX 00:09:19.428 + tmp_file_1=/tmp/62.Xoa 00:09:19.428 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:19.428 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:19.428 + tmp_file_2=/tmp/spdk_tgt_config.json.0MZ 00:09:19.428 + ret=0 00:09:19.428 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:19.687 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:19.687 + diff -u /tmp/62.Xoa /tmp/spdk_tgt_config.json.0MZ 00:09:19.687 + ret=1 00:09:19.687 + echo '=== Start of file: /tmp/62.Xoa ===' 00:09:19.687 + cat /tmp/62.Xoa 00:09:19.687 + echo '=== End of file: /tmp/62.Xoa ===' 00:09:19.687 + echo '' 00:09:19.687 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0MZ ===' 00:09:19.687 + cat /tmp/spdk_tgt_config.json.0MZ 00:09:19.687 + echo '=== End of file: /tmp/spdk_tgt_config.json.0MZ ===' 00:09:19.687 + echo '' 00:09:19.687 + rm /tmp/62.Xoa /tmp/spdk_tgt_config.json.0MZ 00:09:19.687 + exit 1 00:09:19.687 INFO: configuration change detected. 00:09:19.687 23:53:15 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 
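Note: the change detection that just fired is a plain textual diff. json_diff.sh dumps the live configuration with save_config, normalizes both it and the on-disk spdk_tgt_config.json through config_filter.py -method sort, and runs diff -u on the two temp files: exit 0 means identical, exit 1 (as here, after MallocBdevForConfigChangeCheck was deleted) means a change was detected. The flow, condensed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py

    tmp_live=$(mktemp /tmp/62.XXX)
    tmp_file=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$tmp_live"
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$tmp_file"

    if diff -u "$tmp_live" "$tmp_file"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm "$tmp_live" "$tmp_file"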
00:09:19.687 23:53:15 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:09:19.687 23:53:15 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:09:19.687 23:53:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:19.687 23:53:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:19.687 23:53:15 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:09:19.687 23:53:15 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:09:19.687 23:53:15 json_config -- json_config/json_config.sh@321 -- # [[ -n 69818 ]] 00:09:19.687 23:53:15 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:09:19.687 23:53:15 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:09:19.687 23:53:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:19.687 23:53:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:19.687 23:53:15 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:09:19.687 23:53:15 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:09:19.687 23:53:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:09:19.946 23:53:15 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:09:19.946 23:53:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:09:20.205 23:53:15 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:09:20.205 23:53:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:09:20.463 23:53:16 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:09:20.463 23:53:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:09:20.721 23:53:16 json_config -- json_config/json_config.sh@197 -- # uname -s 00:09:20.721 23:53:16 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:09:20.721 23:53:16 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:09:20.721 23:53:16 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:09:20.721 23:53:16 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:20.721 23:53:16 json_config -- json_config/json_config.sh@327 -- # killprocess 69818 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@950 -- # '[' -z 69818 ']' 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@954 -- # kill -0 69818 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@955 -- # uname 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69818 00:09:20.721 killing process with pid 69818 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69818' 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@969 -- # kill 69818 00:09:20.721 23:53:16 json_config -- common/autotest_common.sh@974 -- # wait 69818 00:09:21.658 23:53:17 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:21.659 23:53:17 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:09:21.659 23:53:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:21.659 23:53:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.659 23:53:17 json_config -- json_config/json_config.sh@332 -- # return 0 00:09:21.659 INFO: Success 00:09:21.659 23:53:17 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:09:21.659 ************************************ 00:09:21.659 END TEST json_config 00:09:21.659 ************************************ 00:09:21.659 00:09:21.659 real 0m13.022s 00:09:21.659 user 0m18.748s 00:09:21.659 sys 0m2.287s 00:09:21.659 23:53:17 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.659 23:53:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.659 23:53:17 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:21.659 23:53:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:21.659 23:53:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.659 23:53:17 -- common/autotest_common.sh@10 -- # set +x 00:09:21.659 ************************************ 00:09:21.659 START TEST json_config_extra_key 00:09:21.659 ************************************ 00:09:21.659 23:53:17 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bc9a5b99-1c11-456e-aaab-d1f1a68fbb44 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bc9a5b99-1c11-456e-aaab-d1f1a68fbb44 
00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.659 23:53:17 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.659 23:53:17 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.659 23:53:17 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.659 23:53:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:21.659 23:53:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:21.659 23:53:17 json_config_extra_key -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:21.659 23:53:17 json_config_extra_key -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:21.659 23:53:17 json_config_extra_key -- paths/export.sh@6 -- # export PATH 00:09:21.659 23:53:17 json_config_extra_key -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.659 23:53:17 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:21.659 INFO: launching applications... 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
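Note: unlike json_config, which started the target with --wait-for-rpc and configured it over the socket, this suite boots spdk_tgt directly from a canned JSON file, so the whole configuration is applied during startup. The launch below mirrors the invocation traced next; the polling loop is an assumed stand-in for waitforlisten, which blocks until the RPC socket answers (max_retries=100 as in the trace):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    sock=/var/tmp/spdk_tgt.sock
    cfg=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

    $spdk_tgt -m 0x1 -s 1024 -r "$sock" --json "$cfg" &
    pid=$!

    # assumed stand-in for waitforlisten: retry until an RPC goes through
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done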
00:09:21.659 23:53:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:21.659 23:53:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:21.659 23:53:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:21.659 23:53:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:21.659 23:53:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:21.659 23:53:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:21.659 23:53:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:21.659 23:53:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:21.659 Waiting for target to run... 00:09:21.659 23:53:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69990 00:09:21.659 23:53:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:21.659 23:53:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69990 /var/tmp/spdk_tgt.sock 00:09:21.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:21.659 23:53:17 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69990 ']' 00:09:21.659 23:53:17 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:21.659 23:53:17 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:21.659 23:53:17 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.659 23:53:17 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:21.659 23:53:17 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.659 23:53:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:21.659 [2024-07-24 23:53:17.515281] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:09:21.659 [2024-07-24 23:53:17.515473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69990 ] 00:09:22.227 [2024-07-24 23:53:17.862122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.227 [2024-07-24 23:53:18.050166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.794 00:09:22.794 INFO: shutting down applications... 00:09:22.794 23:53:18 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.794 23:53:18 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:09:22.794 23:53:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:22.794 23:53:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
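[Annotation] The start above launches spdk_tgt with the app's params plus -r <socket> --json <config> and blocks in waitforlisten until the target answers; the records that follow show the matching shutdown: a SIGINT, then a kill -0 poll in half-second steps, up to 30 tries. A simplified sketch of both halves, assuming a socket-existence check where the real waitforlisten additionally retries an RPC against the target:

    start_app() {
      local sock=/var/tmp/spdk_tgt.sock
      ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --json extra_key.json &
      app_pid=$!
      # Simplified wait: poll for the UNIX-domain socket to appear.
      local i
      for ((i = 0; i < 100; i++)); do
        [[ -S $sock ]] && return 0
        sleep 0.1
      done
      return 1
    }

    shutdown_app() {
      kill -SIGINT "$app_pid"
      # Same loop shape as the log: up to 30 probes, 0.5 s apart.
      local i
      for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || return 0   # process exited
        sleep 0.5
      done
      return 1   # still alive after ~15 s; caller escalates
    }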
00:09:22.794 23:53:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:22.794 23:53:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:22.794 23:53:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:22.794 23:53:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69990 ]] 00:09:22.794 23:53:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69990 00:09:22.794 23:53:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:22.794 23:53:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:22.794 23:53:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69990 00:09:22.794 23:53:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:23.362 23:53:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:23.362 23:53:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:23.362 23:53:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69990 00:09:23.362 23:53:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:23.929 23:53:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:23.929 23:53:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:23.929 23:53:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69990 00:09:23.929 23:53:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:24.496 23:53:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:24.496 23:53:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:24.496 23:53:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69990 00:09:24.496 23:53:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:25.062 23:53:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:25.062 23:53:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:25.062 23:53:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69990 00:09:25.062 23:53:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:25.320 23:53:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:25.320 23:53:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:25.320 23:53:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69990 00:09:25.320 23:53:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:25.320 23:53:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:25.320 23:53:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:25.320 SPDK target shutdown done 00:09:25.320 Success 00:09:25.320 23:53:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:25.320 23:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:25.320 00:09:25.320 real 0m3.827s 00:09:25.320 user 0m3.413s 00:09:25.320 sys 0m0.488s 00:09:25.320 ************************************ 00:09:25.320 END TEST json_config_extra_key 00:09:25.320 ************************************ 00:09:25.320 23:53:21 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.320 23:53:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:25.579 23:53:21 -- spdk/autotest.sh@174 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:25.579 23:53:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:25.579 23:53:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.579 23:53:21 -- common/autotest_common.sh@10 -- # set +x 00:09:25.579 ************************************ 00:09:25.579 START TEST alias_rpc 00:09:25.579 ************************************ 00:09:25.579 23:53:21 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:25.579 * Looking for test storage... 00:09:25.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:25.579 23:53:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:25.579 23:53:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70077 00:09:25.579 23:53:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70077 00:09:25.579 23:53:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:25.579 23:53:21 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70077 ']' 00:09:25.579 23:53:21 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.579 23:53:21 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.579 23:53:21 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.579 23:53:21 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.579 23:53:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.579 [2024-07-24 23:53:21.394308] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:09:25.579 [2024-07-24 23:53:21.394477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70077 ] 00:09:25.837 [2024-07-24 23:53:21.564000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.096 [2024-07-24 23:53:21.727371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.668 23:53:22 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.668 23:53:22 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:26.668 23:53:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:26.927 23:53:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70077 00:09:26.927 23:53:22 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70077 ']' 00:09:26.927 23:53:22 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70077 00:09:26.927 23:53:22 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:09:26.927 23:53:22 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.927 23:53:22 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70077 00:09:26.927 killing process with pid 70077 00:09:26.927 23:53:22 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.927 23:53:22 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.927 23:53:22 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70077' 00:09:26.927 23:53:22 alias_rpc -- common/autotest_common.sh@969 -- # kill 70077 00:09:26.927 23:53:22 alias_rpc -- common/autotest_common.sh@974 -- # wait 70077 00:09:28.830 ************************************ 00:09:28.830 END TEST alias_rpc 00:09:28.830 ************************************ 00:09:28.830 00:09:28.830 real 0m3.300s 00:09:28.830 user 0m3.426s 00:09:28.830 sys 0m0.466s 00:09:28.830 23:53:24 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.830 23:53:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.830 23:53:24 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:09:28.830 23:53:24 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:28.830 23:53:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:28.830 23:53:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.830 23:53:24 -- common/autotest_common.sh@10 -- # set +x 00:09:28.830 ************************************ 00:09:28.830 START TEST spdkcli_tcp 00:09:28.830 ************************************ 00:09:28.830 23:53:24 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:28.830 * Looking for test storage... 
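[Annotation] The alias_rpc run above installs an ERR trap so any failed step kills the target before exiting, then drives rpc.py load_config with -i; the test exists to exercise deprecated RPC method aliases, and -i is what lets the old names resolve (my reading of the flag). No filename is passed, so the JSON config arrives on stdin. A condensed sketch of the pattern; killprocess and waitforlisten are the suite's own helpers from autotest_common.sh, and the config filename below is illustrative:

    spdk_tgt_pid=
    trap 'killprocess "$spdk_tgt_pid"; exit 1' ERR

    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

    # Feed a config that still uses old (aliased) RPC names; -i lets
    # rpc.py accept them instead of rejecting the deprecated methods.
    scripts/rpc.py load_config -i < conf_with_aliases.json

    killprocess "$spdk_tgt_pid"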
00:09:28.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:28.830 23:53:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:28.830 23:53:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:28.830 23:53:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:28.830 23:53:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:28.830 23:53:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:28.830 23:53:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:28.830 23:53:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:28.830 23:53:24 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.830 23:53:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.830 23:53:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70171 00:09:28.830 23:53:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70171 00:09:28.830 23:53:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:28.830 23:53:24 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70171 ']' 00:09:28.830 23:53:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.830 23:53:24 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.830 23:53:24 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.830 23:53:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.830 23:53:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.088 [2024-07-24 23:53:24.752191] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:09:29.088 [2024-07-24 23:53:24.752373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70171 ] 00:09:29.088 [2024-07-24 23:53:24.923819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.346 [2024-07-24 23:53:25.108402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.347 [2024-07-24 23:53:25.108402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.913 23:53:25 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.913 23:53:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:09:29.913 23:53:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70188 00:09:29.913 23:53:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:29.913 23:53:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:30.172 [ 00:09:30.172 "spdk_get_version", 00:09:30.172 "rpc_get_methods", 00:09:30.172 "keyring_get_keys", 00:09:30.172 "trace_get_info", 00:09:30.172 "trace_get_tpoint_group_mask", 00:09:30.172 "trace_disable_tpoint_group", 00:09:30.172 "trace_enable_tpoint_group", 00:09:30.172 "trace_clear_tpoint_mask", 00:09:30.172 "trace_set_tpoint_mask", 00:09:30.172 "framework_get_pci_devices", 00:09:30.172 "framework_get_config", 00:09:30.172 "framework_get_subsystems", 00:09:30.172 "iobuf_get_stats", 00:09:30.172 "iobuf_set_options", 00:09:30.172 "sock_get_default_impl", 00:09:30.172 "sock_set_default_impl", 00:09:30.172 "sock_impl_set_options", 00:09:30.172 "sock_impl_get_options", 00:09:30.172 "vmd_rescan", 00:09:30.172 "vmd_remove_device", 00:09:30.172 "vmd_enable", 00:09:30.172 "accel_get_stats", 00:09:30.172 "accel_set_options", 00:09:30.172 "accel_set_driver", 00:09:30.172 "accel_crypto_key_destroy", 00:09:30.172 "accel_crypto_keys_get", 00:09:30.172 "accel_crypto_key_create", 00:09:30.172 "accel_assign_opc", 00:09:30.172 "accel_get_module_info", 00:09:30.172 "accel_get_opc_assignments", 00:09:30.172 "notify_get_notifications", 00:09:30.172 "notify_get_types", 00:09:30.172 "bdev_get_histogram", 00:09:30.172 "bdev_enable_histogram", 00:09:30.172 "bdev_set_qos_limit", 00:09:30.172 "bdev_set_qd_sampling_period", 00:09:30.172 "bdev_get_bdevs", 00:09:30.172 "bdev_reset_iostat", 00:09:30.172 "bdev_get_iostat", 00:09:30.172 "bdev_examine", 00:09:30.172 "bdev_wait_for_examine", 00:09:30.172 "bdev_set_options", 00:09:30.172 "scsi_get_devices", 00:09:30.172 "thread_set_cpumask", 00:09:30.172 "framework_get_governor", 00:09:30.172 "framework_get_scheduler", 00:09:30.172 "framework_set_scheduler", 00:09:30.172 "framework_get_reactors", 00:09:30.172 "thread_get_io_channels", 00:09:30.172 "thread_get_pollers", 00:09:30.172 "thread_get_stats", 00:09:30.172 "framework_monitor_context_switch", 00:09:30.172 "spdk_kill_instance", 00:09:30.172 "log_enable_timestamps", 00:09:30.172 "log_get_flags", 00:09:30.172 "log_clear_flag", 00:09:30.172 "log_set_flag", 00:09:30.172 "log_get_level", 00:09:30.172 "log_set_level", 00:09:30.172 "log_get_print_level", 00:09:30.172 "log_set_print_level", 00:09:30.172 "framework_enable_cpumask_locks", 00:09:30.172 "framework_disable_cpumask_locks", 00:09:30.172 "framework_wait_init", 00:09:30.172 "framework_start_init", 00:09:30.172 
"virtio_blk_create_transport", 00:09:30.172 "virtio_blk_get_transports", 00:09:30.172 "vhost_controller_set_coalescing", 00:09:30.172 "vhost_get_controllers", 00:09:30.172 "vhost_delete_controller", 00:09:30.172 "vhost_create_blk_controller", 00:09:30.172 "vhost_scsi_controller_remove_target", 00:09:30.172 "vhost_scsi_controller_add_target", 00:09:30.172 "vhost_start_scsi_controller", 00:09:30.172 "vhost_create_scsi_controller", 00:09:30.172 "ublk_recover_disk", 00:09:30.172 "ublk_get_disks", 00:09:30.172 "ublk_stop_disk", 00:09:30.172 "ublk_start_disk", 00:09:30.172 "ublk_destroy_target", 00:09:30.172 "ublk_create_target", 00:09:30.172 "nbd_get_disks", 00:09:30.172 "nbd_stop_disk", 00:09:30.172 "nbd_start_disk", 00:09:30.172 "env_dpdk_get_mem_stats", 00:09:30.172 "nvmf_stop_mdns_prr", 00:09:30.172 "nvmf_publish_mdns_prr", 00:09:30.172 "nvmf_subsystem_get_listeners", 00:09:30.172 "nvmf_subsystem_get_qpairs", 00:09:30.172 "nvmf_subsystem_get_controllers", 00:09:30.172 "nvmf_get_stats", 00:09:30.172 "nvmf_get_transports", 00:09:30.172 "nvmf_create_transport", 00:09:30.172 "nvmf_get_targets", 00:09:30.172 "nvmf_delete_target", 00:09:30.172 "nvmf_create_target", 00:09:30.173 "nvmf_subsystem_allow_any_host", 00:09:30.173 "nvmf_subsystem_remove_host", 00:09:30.173 "nvmf_subsystem_add_host", 00:09:30.173 "nvmf_ns_remove_host", 00:09:30.173 "nvmf_ns_add_host", 00:09:30.173 "nvmf_subsystem_remove_ns", 00:09:30.173 "nvmf_subsystem_add_ns", 00:09:30.173 "nvmf_subsystem_listener_set_ana_state", 00:09:30.173 "nvmf_discovery_get_referrals", 00:09:30.173 "nvmf_discovery_remove_referral", 00:09:30.173 "nvmf_discovery_add_referral", 00:09:30.173 "nvmf_subsystem_remove_listener", 00:09:30.173 "nvmf_subsystem_add_listener", 00:09:30.173 "nvmf_delete_subsystem", 00:09:30.173 "nvmf_create_subsystem", 00:09:30.173 "nvmf_get_subsystems", 00:09:30.173 "nvmf_set_crdt", 00:09:30.173 "nvmf_set_config", 00:09:30.173 "nvmf_set_max_subsystems", 00:09:30.173 "iscsi_get_histogram", 00:09:30.173 "iscsi_enable_histogram", 00:09:30.173 "iscsi_set_options", 00:09:30.173 "iscsi_get_auth_groups", 00:09:30.173 "iscsi_auth_group_remove_secret", 00:09:30.173 "iscsi_auth_group_add_secret", 00:09:30.173 "iscsi_delete_auth_group", 00:09:30.173 "iscsi_create_auth_group", 00:09:30.173 "iscsi_set_discovery_auth", 00:09:30.173 "iscsi_get_options", 00:09:30.173 "iscsi_target_node_request_logout", 00:09:30.173 "iscsi_target_node_set_redirect", 00:09:30.173 "iscsi_target_node_set_auth", 00:09:30.173 "iscsi_target_node_add_lun", 00:09:30.173 "iscsi_get_stats", 00:09:30.173 "iscsi_get_connections", 00:09:30.173 "iscsi_portal_group_set_auth", 00:09:30.173 "iscsi_start_portal_group", 00:09:30.173 "iscsi_delete_portal_group", 00:09:30.173 "iscsi_create_portal_group", 00:09:30.173 "iscsi_get_portal_groups", 00:09:30.173 "iscsi_delete_target_node", 00:09:30.173 "iscsi_target_node_remove_pg_ig_maps", 00:09:30.173 "iscsi_target_node_add_pg_ig_maps", 00:09:30.173 "iscsi_create_target_node", 00:09:30.173 "iscsi_get_target_nodes", 00:09:30.173 "iscsi_delete_initiator_group", 00:09:30.173 "iscsi_initiator_group_remove_initiators", 00:09:30.173 "iscsi_initiator_group_add_initiators", 00:09:30.173 "iscsi_create_initiator_group", 00:09:30.173 "iscsi_get_initiator_groups", 00:09:30.173 "keyring_linux_set_options", 00:09:30.173 "keyring_file_remove_key", 00:09:30.173 "keyring_file_add_key", 00:09:30.173 "iaa_scan_accel_module", 00:09:30.173 "dsa_scan_accel_module", 00:09:30.173 "ioat_scan_accel_module", 00:09:30.173 "accel_error_inject_error", 00:09:30.173 
"bdev_iscsi_delete", 00:09:30.173 "bdev_iscsi_create", 00:09:30.173 "bdev_iscsi_set_options", 00:09:30.173 "bdev_virtio_attach_controller", 00:09:30.173 "bdev_virtio_scsi_get_devices", 00:09:30.173 "bdev_virtio_detach_controller", 00:09:30.173 "bdev_virtio_blk_set_hotplug", 00:09:30.173 "bdev_ftl_set_property", 00:09:30.173 "bdev_ftl_get_properties", 00:09:30.173 "bdev_ftl_get_stats", 00:09:30.173 "bdev_ftl_unmap", 00:09:30.173 "bdev_ftl_unload", 00:09:30.173 "bdev_ftl_delete", 00:09:30.173 "bdev_ftl_load", 00:09:30.173 "bdev_ftl_create", 00:09:30.173 "bdev_aio_delete", 00:09:30.173 "bdev_aio_rescan", 00:09:30.173 "bdev_aio_create", 00:09:30.173 "blobfs_create", 00:09:30.173 "blobfs_detect", 00:09:30.173 "blobfs_set_cache_size", 00:09:30.173 "bdev_zone_block_delete", 00:09:30.173 "bdev_zone_block_create", 00:09:30.173 "bdev_delay_delete", 00:09:30.173 "bdev_delay_create", 00:09:30.173 "bdev_delay_update_latency", 00:09:30.173 "bdev_split_delete", 00:09:30.173 "bdev_split_create", 00:09:30.173 "bdev_error_inject_error", 00:09:30.173 "bdev_error_delete", 00:09:30.173 "bdev_error_create", 00:09:30.173 "bdev_raid_set_options", 00:09:30.173 "bdev_raid_remove_base_bdev", 00:09:30.173 "bdev_raid_add_base_bdev", 00:09:30.173 "bdev_raid_delete", 00:09:30.173 "bdev_raid_create", 00:09:30.173 "bdev_raid_get_bdevs", 00:09:30.173 "bdev_lvol_set_parent_bdev", 00:09:30.173 "bdev_lvol_set_parent", 00:09:30.173 "bdev_lvol_check_shallow_copy", 00:09:30.173 "bdev_lvol_start_shallow_copy", 00:09:30.173 "bdev_lvol_grow_lvstore", 00:09:30.173 "bdev_lvol_get_lvols", 00:09:30.173 "bdev_lvol_get_lvstores", 00:09:30.173 "bdev_lvol_delete", 00:09:30.173 "bdev_lvol_set_read_only", 00:09:30.173 "bdev_lvol_resize", 00:09:30.173 "bdev_lvol_decouple_parent", 00:09:30.173 "bdev_lvol_inflate", 00:09:30.173 "bdev_lvol_rename", 00:09:30.173 "bdev_lvol_clone_bdev", 00:09:30.173 "bdev_lvol_clone", 00:09:30.173 "bdev_lvol_snapshot", 00:09:30.173 "bdev_lvol_create", 00:09:30.173 "bdev_lvol_delete_lvstore", 00:09:30.173 "bdev_lvol_rename_lvstore", 00:09:30.173 "bdev_lvol_create_lvstore", 00:09:30.173 "bdev_passthru_delete", 00:09:30.173 "bdev_passthru_create", 00:09:30.173 "bdev_nvme_cuse_unregister", 00:09:30.173 "bdev_nvme_cuse_register", 00:09:30.173 "bdev_opal_new_user", 00:09:30.173 "bdev_opal_set_lock_state", 00:09:30.173 "bdev_opal_delete", 00:09:30.173 "bdev_opal_get_info", 00:09:30.173 "bdev_opal_create", 00:09:30.173 "bdev_nvme_opal_revert", 00:09:30.173 "bdev_nvme_opal_init", 00:09:30.173 "bdev_nvme_send_cmd", 00:09:30.173 "bdev_nvme_get_path_iostat", 00:09:30.173 "bdev_nvme_get_mdns_discovery_info", 00:09:30.173 "bdev_nvme_stop_mdns_discovery", 00:09:30.173 "bdev_nvme_start_mdns_discovery", 00:09:30.173 "bdev_nvme_set_multipath_policy", 00:09:30.173 "bdev_nvme_set_preferred_path", 00:09:30.173 "bdev_nvme_get_io_paths", 00:09:30.173 "bdev_nvme_remove_error_injection", 00:09:30.173 "bdev_nvme_add_error_injection", 00:09:30.173 "bdev_nvme_get_discovery_info", 00:09:30.173 "bdev_nvme_stop_discovery", 00:09:30.173 "bdev_nvme_start_discovery", 00:09:30.173 "bdev_nvme_get_controller_health_info", 00:09:30.173 "bdev_nvme_disable_controller", 00:09:30.173 "bdev_nvme_enable_controller", 00:09:30.173 "bdev_nvme_reset_controller", 00:09:30.173 "bdev_nvme_get_transport_statistics", 00:09:30.173 "bdev_nvme_apply_firmware", 00:09:30.173 "bdev_nvme_detach_controller", 00:09:30.173 "bdev_nvme_get_controllers", 00:09:30.173 "bdev_nvme_attach_controller", 00:09:30.173 "bdev_nvme_set_hotplug", 00:09:30.173 "bdev_nvme_set_options", 
00:09:30.173 "bdev_null_resize", 00:09:30.173 "bdev_null_delete", 00:09:30.173 "bdev_null_create", 00:09:30.173 "bdev_malloc_delete", 00:09:30.173 "bdev_malloc_create" 00:09:30.173 ] 00:09:30.173 23:53:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:30.173 23:53:26 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.173 23:53:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.432 23:53:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:30.432 23:53:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70171 00:09:30.432 23:53:26 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70171 ']' 00:09:30.432 23:53:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70171 00:09:30.432 23:53:26 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:09:30.432 23:53:26 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.432 23:53:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70171 00:09:30.432 killing process with pid 70171 00:09:30.432 23:53:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.432 23:53:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.432 23:53:26 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70171' 00:09:30.432 23:53:26 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70171 00:09:30.432 23:53:26 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70171 00:09:32.335 ************************************ 00:09:32.335 END TEST spdkcli_tcp 00:09:32.335 ************************************ 00:09:32.335 00:09:32.335 real 0m3.496s 00:09:32.335 user 0m6.305s 00:09:32.335 sys 0m0.560s 00:09:32.335 23:53:28 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.335 23:53:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:32.335 23:53:28 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:32.335 23:53:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:32.335 23:53:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.335 23:53:28 -- common/autotest_common.sh@10 -- # set +x 00:09:32.335 ************************************ 00:09:32.335 START TEST dpdk_mem_utility 00:09:32.335 ************************************ 00:09:32.335 23:53:28 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:32.593 * Looking for test storage... 00:09:32.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:32.593 23:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:32.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:32.593 23:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70274 00:09:32.593 23:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:32.593 23:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70274 00:09:32.593 23:53:28 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70274 ']' 00:09:32.593 23:53:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.593 23:53:28 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.593 23:53:28 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.593 23:53:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.593 23:53:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:32.593 [2024-07-24 23:53:28.294869] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:09:32.593 [2024-07-24 23:53:28.295206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70274 ] 00:09:32.852 [2024-07-24 23:53:28.469055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.852 [2024-07-24 23:53:28.685062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.790 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.790 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:09:33.791 23:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:33.791 23:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:33.791 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.791 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:33.791 { 00:09:33.791 "filename": "/tmp/spdk_mem_dump.txt" 00:09:33.791 } 00:09:33.791 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.791 23:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:33.791 DPDK memory size 820.000000 MiB in 1 heap(s) 00:09:33.791 1 heaps totaling size 820.000000 MiB 00:09:33.791 size: 820.000000 MiB heap id: 0 00:09:33.791 end heaps---------- 00:09:33.791 8 mempools totaling size 598.116089 MiB 00:09:33.791 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:33.791 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:33.791 size: 84.521057 MiB name: bdev_io_70274 00:09:33.791 size: 51.011292 MiB name: evtpool_70274 00:09:33.791 size: 50.003479 MiB name: msgpool_70274 00:09:33.791 size: 21.763794 MiB name: PDU_Pool 00:09:33.791 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:33.791 size: 0.026123 MiB name: Session_Pool 00:09:33.791 end mempools------- 00:09:33.791 6 memzones totaling size 4.142822 MiB 00:09:33.791 size: 1.000366 MiB name: RG_ring_0_70274 00:09:33.791 size: 1.000366 MiB name: RG_ring_1_70274 00:09:33.791 size: 1.000366 MiB name: RG_ring_4_70274 00:09:33.791 size: 1.000366 MiB name: RG_ring_5_70274 
00:09:33.791 size: 0.125366 MiB name: RG_ring_2_70274 00:09:33.791 size: 0.015991 MiB name: RG_ring_3_70274 00:09:33.791 end memzones------- 00:09:33.791 23:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:33.791 heap id: 0 total size: 820.000000 MiB number of busy elements: 304 number of free elements: 18 00:09:33.791 list of free elements. size: 18.450562 MiB 00:09:33.791 element at address: 0x200000400000 with size: 1.999451 MiB 00:09:33.791 element at address: 0x200000800000 with size: 1.996887 MiB 00:09:33.791 element at address: 0x200007000000 with size: 1.995972 MiB 00:09:33.791 element at address: 0x20000b200000 with size: 1.995972 MiB 00:09:33.791 element at address: 0x200019100040 with size: 0.999939 MiB 00:09:33.791 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:33.791 element at address: 0x200019600000 with size: 0.999084 MiB 00:09:33.791 element at address: 0x200003e00000 with size: 0.996094 MiB 00:09:33.791 element at address: 0x200032200000 with size: 0.994324 MiB 00:09:33.791 element at address: 0x200018e00000 with size: 0.959656 MiB 00:09:33.791 element at address: 0x200019900040 with size: 0.936401 MiB 00:09:33.791 element at address: 0x200000200000 with size: 0.829956 MiB 00:09:33.791 element at address: 0x20001b000000 with size: 0.563416 MiB 00:09:33.791 element at address: 0x200019200000 with size: 0.487976 MiB 00:09:33.791 element at address: 0x200019a00000 with size: 0.485413 MiB 00:09:33.791 element at address: 0x200013800000 with size: 0.467651 MiB 00:09:33.791 element at address: 0x200028400000 with size: 0.390442 MiB 00:09:33.791 element at address: 0x200003a00000 with size: 0.351990 MiB 00:09:33.791 list of standard malloc elements. 
size: 199.285034 MiB 00:09:33.791 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:09:33.791 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:09:33.791 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:09:33.791 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:33.791 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:33.791 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:33.791 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:09:33.791 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:33.791 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:09:33.791 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:09:33.791 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:09:33.791 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:09:33.791 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003aff980 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003affa80 with size: 0.000244 MiB 00:09:33.791 element at address: 0x200003eff000 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:09:33.791 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:09:33.791 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200013877b80 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200013877c80 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200013877d80 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200013877e80 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200013877f80 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200013878080 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200013878180 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200013878280 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200013878380 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200013878480 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200013878580 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200019abc680 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b090ac0 
with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b093bc0 with size: 0.000244 MiB 
00:09:33.792 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200028463f40 with size: 0.000244 MiB 00:09:33.792 element at address: 0x200028464040 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20002846af80 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20002846b080 with size: 0.000244 MiB 00:09:33.792 element at address: 0x20002846b180 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846b280 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846b380 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846b480 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846b580 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846b680 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846b780 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846b880 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846b980 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846be80 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846c080 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846c180 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846c280 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846c380 with size: 0.000244 MiB 00:09:33.793 element at address: 0x20002846c480 with size: 0.000244 MiB 00:09:33.793 element at 
address: 0x20002846c580 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846c680 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846c780 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846c880 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846c980 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846ca80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846cb80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846cc80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846cd80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846ce80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846cf80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846d080 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846d180 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846d280 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846d380 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846d480 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846d580 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846d680 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846d780 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846d880 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846d980 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846da80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846db80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846dc80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846dd80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846de80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846df80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846e080 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846e180 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846e280 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846e380 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846e480 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846e580 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846e680 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846e780 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846e880 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846e980 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846ea80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846eb80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846ec80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846ed80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846ee80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846ef80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846f080 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846f180 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846f280 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846f380 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846f480 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846f580 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846f680 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846f780 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846f880 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846f980 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846fa80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846fb80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846fc80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846fd80 with size: 0.000244 MiB
00:09:33.793 element at address: 0x20002846fe80 with size: 0.000244 MiB
00:09:33.793 list of memzone associated elements. size: 602.264404 MiB
00:09:33.793 element at address: 0x20001b0954c0 with size: 211.416809 MiB
00:09:33.793 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:09:33.793 element at address: 0x20002846ff80 with size: 157.562622 MiB
00:09:33.793 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:09:33.793 element at address: 0x2000139fab40 with size: 84.020691 MiB
00:09:33.793 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_70274_0
00:09:33.793 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:09:33.793 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70274_0
00:09:33.793 element at address: 0x200003fff340 with size: 48.003113 MiB
00:09:33.793 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70274_0
00:09:33.793 element at address: 0x200019bbe900 with size: 20.255615 MiB
00:09:33.793 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:09:33.793 element at address: 0x2000323feb00 with size: 18.005127 MiB
00:09:33.793 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:09:33.793 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:09:33.793 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70274
00:09:33.793 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:09:33.793 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70274
00:09:33.793 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:09:33.793 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70274
00:09:33.793 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:09:33.793 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:09:33.793 element at address: 0x200019abc780 with size: 1.008179 MiB
00:09:33.793 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:09:33.793 element at address: 0x200018efde00 with size: 1.008179 MiB
00:09:33.793 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:09:33.793 element at address: 0x2000138f89c0 with size: 1.008179 MiB
00:09:33.793 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:09:33.793 element at address: 0x200003eff100 with size: 1.000549 MiB
00:09:33.793 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70274
00:09:33.793 element at address: 0x200003affb80 with size: 1.000549 MiB
00:09:33.793 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70274
00:09:33.793 element at address: 0x2000196ffd40 with size: 1.000549 MiB
00:09:33.793 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70274
00:09:33.793 element at address: 0x2000322fe8c0 with size: 1.000549 MiB
00:09:33.793 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70274
00:09:33.793 element at address: 0x200003a5b2c0 with size: 0.500549 MiB
00:09:33.793 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70274
00:09:33.793 element at address: 0x20001927dac0 with size: 0.500549 MiB
00:09:33.793 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:09:33.793 element at address: 0x200013878680 with size: 0.500549 MiB
00:09:33.793 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:09:33.793 element at address: 0x200019a7c440 with size: 0.250549 MiB
00:09:33.793 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:09:33.793 element at address: 0x200003adf740 with size: 0.125549 MiB
00:09:33.793 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70274
00:09:33.793 element at address: 0x200018ef5ac0 with size: 0.031799 MiB
00:09:33.793 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:09:33.793 element at address: 0x200028464140 with size: 0.023804 MiB
00:09:33.793 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:09:33.793 element at address: 0x200003adb500 with size: 0.016174 MiB
00:09:33.793 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70274
00:09:33.793 element at address: 0x20002846a2c0 with size: 0.002502 MiB
00:09:33.793 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:09:33.793 element at address: 0x2000002d5f80 with size: 0.000366 MiB
00:09:33.793 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70274
00:09:33.793 element at address: 0x2000137ffd80 with size: 0.000366 MiB
00:09:33.793 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70274
00:09:33.793 element at address: 0x20002846ae00 with size: 0.000366 MiB
00:09:33.793 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
23:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:09:33.793 23:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70274
00:09:33.793 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70274 ']'
00:09:33.793 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70274
00:09:33.793 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:09:33.793 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:33.793 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70274
00:09:33.793 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:33.793 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:33.793 23:53:29 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70274'
killing process with pid 70274
23:53:29 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70274
23:53:29 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70274
00:09:35.698
00:09:35.698 real 0m3.146s
00:09:35.698 user 0m3.160s
00:09:35.698 sys 0m0.489s
00:09:35.698 ************************************
00:09:35.698 END TEST dpdk_mem_utility
00:09:35.698 ************************************
00:09:35.698 23:53:31 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:35.698 23:53:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:35.698 23:53:31 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:35.698 23:53:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:35.698 23:53:31 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:35.698 23:53:31 -- common/autotest_common.sh@10 -- # set +x
00:09:35.698 ************************************
00:09:35.698 START TEST event
00:09:35.698 ************************************
23:53:31 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:35.698 * Looking for test storage...
00:09:35.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:09:35.698 23:53:31 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:09:35.698 23:53:31 event -- bdev/nbd_common.sh@6 -- # set -e
00:09:35.698 23:53:31 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:35.698 23:53:31 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:09:35.698 23:53:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:35.698 23:53:31 event -- common/autotest_common.sh@10 -- # set +x
00:09:35.698 ************************************
00:09:35.698 START TEST event_perf
00:09:35.698 ************************************
23:53:31 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:35.957 Running I/O for 1 seconds...[2024-07-24 23:53:31.469639] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
00:09:35.957 [2024-07-24 23:53:31.469851] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70363 ]
00:09:35.957 [2024-07-24 23:53:31.642551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:35.957 [2024-07-24 23:53:31.804437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:35.957 [2024-07-24 23:53:31.804600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:35.957 [2024-07-24 23:53:31.804699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:35.957 [2024-07-24 23:53:31.804715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:37.348 Running I/O for 1 seconds...
00:09:37.348 lcore 0: 193263
00:09:37.348 lcore 1: 193263
00:09:37.348 lcore 2: 193262
00:09:37.348 lcore 3: 193262
00:09:37.348 done.
00:09:37.348
00:09:37.348 real 0m1.718s
00:09:37.348 user 0m4.485s
00:09:37.348 sys 0m0.135s
00:09:37.348 23:53:33 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:37.348 23:53:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:09:37.348 ************************************
00:09:37.348 END TEST event_perf
00:09:37.348 ************************************
00:09:37.348 23:53:33 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:09:37.348 23:53:33 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:37.348 23:53:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:37.348 23:53:33 event -- common/autotest_common.sh@10 -- # set +x
00:09:37.348 ************************************
00:09:37.348 START TEST event_reactor
00:09:37.348 ************************************
23:53:33 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:09:37.607 [2024-07-24 23:53:33.232327] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
00:09:37.607 [2024-07-24 23:53:33.232485] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70408 ]
00:09:37.607 [2024-07-24 23:53:33.379432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:37.865 [2024-07-24 23:53:33.542172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:39.242 test_start
00:09:39.242 oneshot
00:09:39.242 tick 100
00:09:39.242 tick 100
00:09:39.242 tick 250
00:09:39.242 tick 100
00:09:39.242 tick 100
00:09:39.242 tick 250
00:09:39.242 tick 100
00:09:39.242 tick 500
00:09:39.242 tick 100
00:09:39.242 tick 100
00:09:39.242 tick 250
00:09:39.242 tick 100
00:09:39.242 tick 100
00:09:39.242 test_end
00:09:39.242
00:09:39.242 real 0m1.692s
00:09:39.242 user 0m1.498s
00:09:39.242 sys 0m0.094s
00:09:39.242 23:53:34 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:39.242 ************************************
00:09:39.242 END TEST event_reactor
00:09:39.242 ************************************
00:09:39.242 23:53:34 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:09:39.242 23:53:34 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:39.242 23:53:34 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:39.242 23:53:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:39.242 23:53:34 event -- common/autotest_common.sh@10 -- # set +x
00:09:39.242 ************************************
00:09:39.242 START TEST event_reactor_perf
00:09:39.242 ************************************
23:53:34 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:39.242 [2024-07-24 23:53:34.986407] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
00:09:39.242 [2024-07-24 23:53:34.986581] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70450 ]
00:09:39.500 [2024-07-24 23:53:35.158910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.500 [2024-07-24 23:53:35.325393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:40.877 test_start
00:09:40.877 test_end
00:09:40.877 Performance: 350292 events per second
00:09:40.877
00:09:40.877 real 0m1.732s
00:09:40.877 user 0m1.528s
00:09:40.877 sys 0m0.103s
00:09:40.877 23:53:36 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:40.877 23:53:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:09:40.877 ************************************
00:09:40.877 END TEST event_reactor_perf
00:09:40.877 ************************************
00:09:40.877 23:53:36 event -- event/event.sh@49 -- # uname -s
00:09:40.877 23:53:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:09:40.877 23:53:36 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:09:40.877 23:53:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:40.877 23:53:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:40.877 23:53:36 event -- common/autotest_common.sh@10 -- # set +x
00:09:40.877 ************************************
00:09:40.877 START TEST event_scheduler
00:09:40.877 ************************************
23:53:36 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:09:41.136 * Looking for test storage...
00:09:41.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:09:41.136 23:53:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:09:41.136 23:53:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70513
00:09:41.136 23:53:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:09:41.136 23:53:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:09:41.136 23:53:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70513
00:09:41.136 23:53:36 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70513 ']'
00:09:41.136 23:53:36 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:41.136 23:53:36 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:41.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:41.136 23:53:36 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:41.136 23:53:36 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:41.136 23:53:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:41.137 [2024-07-24 23:53:36.893679] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
00:09:41.137 [2024-07-24 23:53:36.893894] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70513 ]
00:09:41.396 [2024-07-24 23:53:37.069979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:41.654 [2024-07-24 23:53:37.287088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:41.654 [2024-07-24 23:53:37.287198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:41.654 [2024-07-24 23:53:37.287310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:41.654 [2024-07-24 23:53:37.287334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:42.221 23:53:37 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:42.221 23:53:37 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:09:42.221 23:53:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:09:42.221 23:53:37 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.221 23:53:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:42.221 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:09:42.221 POWER: Cannot set governor of lcore 0 to userspace
00:09:42.221 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:09:42.221 POWER: Cannot set governor of lcore 0 to performance
00:09:42.221 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:09:42.221 POWER: Cannot set governor of lcore 0 to userspace
00:09:42.221 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:09:42.221 POWER: Cannot set governor of lcore 0 to userspace
00:09:42.221 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:09:42.221 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:09:42.221 POWER: Unable to set Power Management Environment for lcore 0
00:09:42.221 [2024-07-24 23:53:37.821149] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:09:42.221 [2024-07-24 23:53:37.821189] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:09:42.221 [2024-07-24 23:53:37.821207] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:09:42.221 [2024-07-24 23:53:37.821500] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:09:42.221 [2024-07-24 23:53:37.821581] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:09:42.221 [2024-07-24 23:53:37.821599] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:09:42.221 23:53:37 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.221 23:53:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:09:42.221 23:53:37 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.222 23:53:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:42.222 [2024-07-24 23:53:38.051900] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
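The POWER and GUEST_CHANNEL errors above are expected inside a VM: there is no writable cpufreq sysfs and no virtio power channel, so the dynamic scheduler cannot bring up the dpdk governor and falls back to its defaults while the test proceeds. As a rough by-hand sketch of the RPC sequence this section drives (assuming an SPDK app launched with --wait-for-rpc and listening on the default /var/tmp/spdk.sock; the option values simply mirror the set_opts notices in this run):

  # sketch only -- socket path and option values are taken from this run, adjust to your setup
  ./scripts/rpc.py framework_set_scheduler dynamic --load-limit 20 --core-limit 80 --core-busy 95
  ./scripts/rpc.py framework_start_init        # finish subsystem initialization after choosing the scheduler
  ./scripts/rpc.py framework_get_scheduler     # confirm the active scheduler and its options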
00:09:42.222 23:53:38 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.222 23:53:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:09:42.222 23:53:38 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:42.222 23:53:38 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:42.222 23:53:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:42.222 ************************************
00:09:42.222 START TEST scheduler_create_thread
00:09:42.222 ************************************
23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:09:42.222 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:09:42.222 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.222 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:42.222 2
00:09:42.222 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.222 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:09:42.222 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.222 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:42.222 3
00:09:42.222 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.222 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:09:42.222 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.222 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:42.480 4
00:09:42.480 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.480 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:09:42.480 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.480 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:42.480 5
00:09:42.480 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.480 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:09:42.480 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:42.481 6
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:42.481 7
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:42.481 8
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:42.481 9
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:42.481 10
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.481 23:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:43.854 23:53:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.854 23:53:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:09:43.854 23:53:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:09:43.855 23:53:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.855 23:53:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:45.230 23:53:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.230
00:09:45.230 real 0m2.619s
00:09:45.230 user 0m0.019s
00:09:45.230 sys 0m0.010s
00:09:45.230 23:53:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:45.230 23:53:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:45.230 ************************************
00:09:45.230 END TEST scheduler_create_thread
00:09:45.230 ************************************
00:09:45.231 23:53:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:09:45.231 23:53:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70513
00:09:45.231 23:53:40 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70513 ']'
00:09:45.231 23:53:40 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70513
00:09:45.231 23:53:40 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:09:45.231 23:53:40 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:45.231 23:53:40 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70513
00:09:45.231 23:53:40 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:09:45.231 23:53:40 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
killing process with pid 70513
23:53:40 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70513'
23:53:40 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70513
23:53:40 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70513
00:09:45.489 [2024-07-24 23:53:41.162673] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
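The scheduler_create_thread subtest above is pure RPC traffic against the scheduler test binary: a batch of pinned active and idle threads is created through the test's RPC plugin, thread 11 (half_active) has its active level raised to 50, and a throwaway thread 12 (deleted) is removed before teardown. Run by hand it would look roughly like the following sketch (the plugin must be importable, which scheduler.sh arranges; the IDs 11 and 12 mirror this run only, real IDs are whatever the create calls return):

  # sketch only -- run from the test's environment so scheduler_plugin is on PYTHONPATH
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # set thread 11 to 50% active
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # delete the throwaway thread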
00:09:46.423
00:09:46.423 real 0m5.447s
00:09:46.423 user 0m9.340s
00:09:46.423 sys 0m0.446s
00:09:46.423 23:53:42 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:46.423 23:53:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:46.423 ************************************
00:09:46.423 END TEST event_scheduler
00:09:46.423 ************************************
00:09:46.423 23:53:42 event -- event/event.sh@51 -- # modprobe -n nbd
00:09:46.423 23:53:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:09:46.423 23:53:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:46.423 23:53:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:46.423 23:53:42 event -- common/autotest_common.sh@10 -- # set +x
00:09:46.423 ************************************
00:09:46.423 START TEST app_repeat
00:09:46.423 ************************************
23:53:42 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:09:46.423 23:53:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:46.423 23:53:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:46.423 23:53:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:09:46.423 23:53:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:46.423 23:53:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:09:46.423 23:53:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:09:46.423 23:53:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:09:46.424 23:53:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70619
00:09:46.424 23:53:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:09:46.424 23:53:42 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:09:46.424 Process app_repeat pid: 70619
00:09:46.424 23:53:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70619'
00:09:46.424 23:53:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:46.424 spdk_app_start Round 0
00:09:46.424 23:53:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:09:46.424 23:53:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70619 /var/tmp/spdk-nbd.sock
00:09:46.424 23:53:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70619 ']'
00:09:46.424 23:53:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:46.424 23:53:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:46.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:46.424 23:53:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:46.424 23:53:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:46.424 23:53:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:46.682 [2024-07-24 23:53:42.299098] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
00:09:46.682 [2024-07-24 23:53:42.299292] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70619 ]
00:09:46.682 [2024-07-24 23:53:42.471741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:46.944 [2024-07-24 23:53:42.639501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:46.944 [2024-07-24 23:53:42.639517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:47.546 23:53:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:47.546 23:53:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:09:47.546 23:53:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:47.804 Malloc0
00:09:47.804 23:53:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:48.063 Malloc1
00:09:48.063 23:53:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:48.063 23:53:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:48.321 /dev/nbd0
00:09:48.321 23:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:48.321 23:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:48.321 1+0 records in
00:09:48.321 1+0 records out
00:09:48.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00344305 s, 1.2 MB/s
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:48.321 23:53:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:09:48.321 23:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:48.322 23:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:48.322 23:53:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:48.580 /dev/nbd1
00:09:48.580 23:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:48.580 23:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:48.580 1+0 records in
00:09:48.580 1+0 records out
00:09:48.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225301 s, 18.2 MB/s
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:48.580 23:53:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:09:48.580 23:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:48.580 23:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:48.580 23:53:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:48.580 23:53:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
23:53:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:48.839 {
00:09:48.839 "nbd_device": "/dev/nbd0",
00:09:48.839 "bdev_name": "Malloc0"
00:09:48.839 },
00:09:48.839 {
00:09:48.839 "nbd_device": "/dev/nbd1",
00:09:48.839 "bdev_name": "Malloc1"
00:09:48.839 }
00:09:48.839 ]'
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:48.839 {
00:09:48.839 "nbd_device": "/dev/nbd0",
00:09:48.839 "bdev_name": "Malloc0"
00:09:48.839 },
00:09:48.839 {
00:09:48.839 "nbd_device": "/dev/nbd1",
00:09:48.839 "bdev_name": "Malloc1"
00:09:48.839 }
00:09:48.839 ]'
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:48.839 /dev/nbd1'
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:48.839 /dev/nbd1'
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:48.839 256+0 records in
00:09:48.839 256+0 records out
00:09:48.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00783827 s, 134 MB/s
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:48.839 256+0 records in
00:09:48.839 256+0 records out
00:09:48.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027466 s, 38.2 MB/s
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:48.839 23:53:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:49.098 256+0 records in
00:09:49.098 256+0 records out
00:09:49.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354737 s, 29.6 MB/s
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:49.098 23:53:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:49.356 23:53:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:49.356 23:53:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:49.356 23:53:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:49.356 23:53:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:49.356 23:53:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:49.356 23:53:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:49.356 23:53:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:49.356 23:53:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:49.356 23:53:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:49.356 23:53:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:49.615 23:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:49.615 23:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:49.615 23:53:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:49.615 23:53:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:49.615 23:53:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:49.615 23:53:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:49.615 23:53:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:49.615 23:53:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:49.615 23:53:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:49.615 23:53:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:49.615 23:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:49.875 23:53:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:49.875 23:53:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:50.134 23:53:45 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:51.511 [2024-07-24 23:53:46.982967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:51.511 [2024-07-24 23:53:47.162942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:51.511 [2024-07-24 23:53:47.162952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.511 [2024-07-24 23:53:47.326192] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:51.511 [2024-07-24 23:53:47.326312] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:53.413 23:53:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:53.413 spdk_app_start Round 1
00:09:53.413 23:53:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:09:53.414 23:53:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70619 /var/tmp/spdk-nbd.sock
00:09:53.414 23:53:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70619 ']'
00:09:53.414 23:53:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:53.414 23:53:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:53.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
23:53:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:53.414 23:53:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:53.414 23:53:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:53.414 23:53:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:53.414 23:53:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:09:53.414 23:53:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:53.672 Malloc0
00:09:53.672 23:53:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:53.931 Malloc1
00:09:54.190 23:53:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:54.190 23:53:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:54.190 /dev/nbd0
00:09:54.449 23:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:54.449 23:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:54.449 1+0 records in
00:09:54.449 1+0 records out
00:09:54.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032719 s, 12.5 MB/s
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:54.449 23:53:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:09:54.449 23:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:54.449 23:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:54.449 23:53:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:54.708 /dev/nbd1
00:09:54.708 23:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:54.708 23:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:54.708 1+0 records in
00:09:54.708 1+0 records out
00:09:54.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314177 s, 13.0 MB/s
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:54.708 23:53:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:09:54.708 23:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:54.708 23:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:54.708 23:53:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:54.708 23:53:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:54.708 23:53:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:54.967 {
00:09:54.967 "nbd_device": "/dev/nbd0",
00:09:54.967 "bdev_name": "Malloc0"
00:09:54.967 },
00:09:54.967 {
00:09:54.967 "nbd_device": "/dev/nbd1",
00:09:54.967 "bdev_name": "Malloc1"
00:09:54.967 }
00:09:54.967 ]' 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:54.967 { 00:09:54.967 "nbd_device": "/dev/nbd0", 00:09:54.967 "bdev_name": "Malloc0" 00:09:54.967 }, 00:09:54.967 { 00:09:54.967 "nbd_device": "/dev/nbd1", 00:09:54.967 "bdev_name": "Malloc1" 00:09:54.967 } 00:09:54.967 ]' 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:54.967 /dev/nbd1' 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:54.967 /dev/nbd1' 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:54.967 256+0 records in 00:09:54.967 256+0 records out 00:09:54.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448754 s, 234 MB/s 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:54.967 256+0 records in 00:09:54.967 256+0 records out 00:09:54.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282453 s, 37.1 MB/s 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:54.967 256+0 records in 00:09:54.967 256+0 records out 00:09:54.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317553 s, 33.0 MB/s 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:54.967 23:53:50 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.967 23:53:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:55.226 23:53:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:55.226 23:53:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:55.226 23:53:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:55.226 23:53:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.226 23:53:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.226 23:53:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:55.226 23:53:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:55.226 23:53:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.226 23:53:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.226 23:53:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:55.485 23:53:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:55.485 23:53:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:55.485 23:53:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:55.485 23:53:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.485 23:53:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.485 23:53:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:55.485 23:53:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:55.485 23:53:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.485 23:53:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:55.485 23:53:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.485 23:53:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:55.743 23:53:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:55.743 23:53:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:56.336 23:53:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:57.271 [2024-07-24 23:53:52.980287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:57.529 [2024-07-24 23:53:53.159112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.529 [2024-07-24 23:53:53.159112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.529 [2024-07-24 23:53:53.313488] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:57.529 [2024-07-24 23:53:53.313571] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:59.473 spdk_app_start Round 2 00:09:59.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:59.473 23:53:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:59.473 23:53:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:59.473 23:53:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70619 /var/tmp/spdk-nbd.sock 00:09:59.473 23:53:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70619 ']' 00:09:59.473 23:53:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:59.473 23:53:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.473 23:53:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
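[editor's note] The waitfornbd probe traced above for /dev/nbd0 and /dev/nbd1 distills to two checks: the kernel must list the device in /proc/partitions, and a direct-I/O read must land a non-empty file. A minimal sketch of that pattern, reconstructed from the traced commands; the 20-attempt budget and 4096-byte block size mirror the log, while the temp-file path and sleep interval are illustrative:

    waitfornbd() {
        local nbd_name=$1 tmp_file=/tmp/nbdtest i size
        # Wait for the kernel to register the device (up to 20 attempts).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # The device node can appear before it services I/O, so prove it
        # with one direct 4 KiB read and check the copied size is non-zero.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$tmp_file")
                rm -f "$tmp_file"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }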
00:09:59.473 23:53:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.473 23:53:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:59.473 23:53:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.473 23:53:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:59.473 23:53:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:59.731 Malloc0 00:09:59.731 23:53:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:59.989 Malloc1 00:09:59.989 23:53:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:59.989 23:53:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:00.246 /dev/nbd0 00:10:00.246 23:53:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:00.246 23:53:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:00.246 1+0 records in 00:10:00.246 1+0 records out 
00:10:00.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079897 s, 5.1 MB/s 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:00.246 23:53:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:00.247 23:53:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:00.247 23:53:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:00.247 23:53:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:00.504 /dev/nbd1 00:10:00.504 23:53:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:00.504 23:53:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:00.504 1+0 records in 00:10:00.504 1+0 records out 00:10:00.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713566 s, 5.7 MB/s 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:00.504 23:53:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:00.504 23:53:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:00.504 23:53:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:00.504 23:53:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:00.504 23:53:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.504 23:53:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:00.762 23:53:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:00.762 { 00:10:00.762 "nbd_device": "/dev/nbd0", 00:10:00.762 "bdev_name": "Malloc0" 00:10:00.762 }, 00:10:00.762 { 00:10:00.762 "nbd_device": "/dev/nbd1", 00:10:00.762 "bdev_name": "Malloc1" 00:10:00.762 } 00:10:00.762 
]' 00:10:00.762 23:53:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:00.762 { 00:10:00.762 "nbd_device": "/dev/nbd0", 00:10:00.762 "bdev_name": "Malloc0" 00:10:00.762 }, 00:10:00.762 { 00:10:00.762 "nbd_device": "/dev/nbd1", 00:10:00.762 "bdev_name": "Malloc1" 00:10:00.762 } 00:10:00.762 ]' 00:10:00.762 23:53:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:00.762 23:53:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:00.762 /dev/nbd1' 00:10:00.762 23:53:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:00.762 /dev/nbd1' 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:00.763 256+0 records in 00:10:00.763 256+0 records out 00:10:00.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0096874 s, 108 MB/s 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:00.763 256+0 records in 00:10:00.763 256+0 records out 00:10:00.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256674 s, 40.9 MB/s 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:00.763 23:53:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:01.021 256+0 records in 00:10:01.021 256+0 records out 00:10:01.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0420758 s, 24.9 MB/s 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.021 23:53:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:01.280 23:53:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:01.280 23:53:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:01.280 23:53:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:01.280 23:53:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.280 23:53:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.280 23:53:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:01.280 23:53:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:01.280 23:53:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.280 23:53:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.280 23:53:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:01.538 23:53:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:01.538 23:53:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:01.538 23:53:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:01.538 23:53:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.538 23:53:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.538 23:53:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:01.538 23:53:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:01.538 23:53:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.538 23:53:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:01.538 23:53:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:01.538 23:53:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:01.796 23:53:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:01.796 23:53:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:02.362 23:53:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:03.296 [2024-07-24 23:53:59.100117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:03.554 [2024-07-24 23:53:59.290040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.554 [2024-07-24 23:53:59.290054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.812 [2024-07-24 23:53:59.466002] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:03.812 [2024-07-24 23:53:59.466125] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:05.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:05.240 23:54:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70619 /var/tmp/spdk-nbd.sock 00:10:05.241 23:54:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70619 ']' 00:10:05.241 23:54:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:05.241 23:54:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.241 23:54:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
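[editor's note] After the disks are stopped, the teardown traced above asserts the target's NBD table is empty. The counting trick is worth calling out: grep -c exits non-zero when it counts zero matches, which is what the "-- # true" step in the trace absorbs. A sketch using the same RPC and filters as the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs.
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne 0 ]; then
        echo "expected 0 attached NBD devices, found $count" >&2
        exit 1
    fi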
00:10:05.241 23:54:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.241 23:54:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:05.499 23:54:01 event.app_repeat -- event/event.sh@39 -- # killprocess 70619 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70619 ']' 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70619 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70619 00:10:05.499 killing process with pid 70619 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70619' 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70619 00:10:05.499 23:54:01 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70619 00:10:06.876 spdk_app_start is called in Round 0. 00:10:06.876 Shutdown signal received, stop current app iteration 00:10:06.876 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 reinitialization... 00:10:06.876 spdk_app_start is called in Round 1. 00:10:06.876 Shutdown signal received, stop current app iteration 00:10:06.876 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 reinitialization... 00:10:06.876 spdk_app_start is called in Round 2. 00:10:06.876 Shutdown signal received, stop current app iteration 00:10:06.876 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 reinitialization... 00:10:06.876 spdk_app_start is called in Round 3. 00:10:06.876 Shutdown signal received, stop current app iteration 00:10:06.876 23:54:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:06.876 23:54:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:06.876 00:10:06.876 real 0m20.101s 00:10:06.876 user 0m43.102s 00:10:06.876 sys 0m2.847s 00:10:06.876 ************************************ 00:10:06.876 END TEST app_repeat 00:10:06.876 ************************************ 00:10:06.876 23:54:02 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.876 23:54:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:06.876 23:54:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:06.876 23:54:02 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:06.876 23:54:02 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:06.876 23:54:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.876 23:54:02 event -- common/autotest_common.sh@10 -- # set +x 00:10:06.876 ************************************ 00:10:06.876 START TEST cpu_locks 00:10:06.876 ************************************ 00:10:06.876 23:54:02 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:06.876 * Looking for test storage... 
00:10:06.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:06.876 23:54:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:06.876 23:54:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:06.876 23:54:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:06.876 23:54:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:06.876 23:54:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:06.876 23:54:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.876 23:54:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.876 ************************************ 00:10:06.876 START TEST default_locks 00:10:06.876 ************************************ 00:10:06.876 23:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:10:06.876 23:54:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71112 00:10:06.876 23:54:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71112 00:10:06.876 23:54:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:06.876 23:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71112 ']' 00:10:06.876 23:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.876 23:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.876 23:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.876 23:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.876 23:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.876 [2024-07-24 23:54:02.577752] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:10:06.876 [2024-07-24 23:54:02.577924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71112 ] 00:10:06.876 [2024-07-24 23:54:02.743971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.135 [2024-07-24 23:54:02.982726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.070 23:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.070 23:54:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:10:08.070 23:54:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71112 00:10:08.070 23:54:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:08.070 23:54:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71112 00:10:08.328 23:54:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71112 00:10:08.328 23:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71112 ']' 00:10:08.328 23:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71112 00:10:08.328 23:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:10:08.328 23:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.328 23:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71112 00:10:08.586 23:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.586 23:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.586 killing process with pid 71112 00:10:08.586 23:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71112' 00:10:08.586 23:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71112 00:10:08.586 23:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71112 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71112 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71112 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71112 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71112 ']' 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.486 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:10.486 ERROR: process (pid: 71112) is no longer running 00:10:10.486 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71112) - No such process 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:10.486 00:10:10.486 real 0m3.731s 00:10:10.486 user 0m3.832s 00:10:10.486 sys 0m0.680s 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.486 23:54:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:10.486 ************************************ 00:10:10.486 END TEST default_locks 00:10:10.486 ************************************ 00:10:10.486 23:54:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:10.486 23:54:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:10.486 23:54:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.486 23:54:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:10.486 ************************************ 00:10:10.486 START TEST default_locks_via_rpc 00:10:10.486 ************************************ 00:10:10.486 23:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:10:10.486 23:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71176 00:10:10.486 23:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71176 00:10:10.486 23:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71176 ']' 00:10:10.486 23:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:10.486 23:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.486 23:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
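[editor's note] The default_locks run above leans on two helpers whose traced commands are enough to reconstruct them: locks_exist greps lslocks output for the spdk_cpu_lock file locks the target holds while it owns a core, and killprocess refuses to signal anything it cannot positively identify. A sketch from those commands; the comm-name check against 'sudo' mirrors the trace, while the wait-after-kill is an assumption about how the real helper reaps the target:

    locks_exist() {
        # The target holds a per-core file lock; lslocks lists locks by pid.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                  # must still be alive
        # Never blindly signal a privileged wrapper process.
        if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid"
        fi
    }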
00:10:10.486 23:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.486 23:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.486 23:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.486 [2024-07-24 23:54:06.351318] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:10:10.486 [2024-07-24 23:54:06.351503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71176 ] 00:10:10.744 [2024-07-24 23:54:06.520961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.002 [2024-07-24 23:54:06.697537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71176 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71176 00:10:11.568 23:54:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:12.134 23:54:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71176 00:10:12.134 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71176 ']' 00:10:12.134 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71176 00:10:12.134 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:10:12.134 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:12.134 23:54:07 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71176 00:10:12.134 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:12.135 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:12.135 killing process with pid 71176 00:10:12.135 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71176' 00:10:12.135 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71176 00:10:12.135 23:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71176 00:10:14.039 00:10:14.039 real 0m3.557s 00:10:14.039 user 0m3.681s 00:10:14.039 sys 0m0.594s 00:10:14.039 23:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.039 23:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.039 ************************************ 00:10:14.039 END TEST default_locks_via_rpc 00:10:14.039 ************************************ 00:10:14.039 23:54:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:14.039 23:54:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:14.039 23:54:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.039 23:54:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.039 ************************************ 00:10:14.039 START TEST non_locking_app_on_locked_coremask 00:10:14.039 ************************************ 00:10:14.039 23:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:10:14.039 23:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71249 00:10:14.039 23:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71249 /var/tmp/spdk.sock 00:10:14.039 23:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71249 ']' 00:10:14.039 23:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.039 23:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.039 23:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:14.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.039 23:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.039 23:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.039 23:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:14.309 [2024-07-24 23:54:09.953773] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:10:14.309 [2024-07-24 23:54:09.953974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71249 ] 00:10:14.309 [2024-07-24 23:54:10.120407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.587 [2024-07-24 23:54:10.305701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.154 23:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.154 23:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:15.154 23:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71265 00:10:15.154 23:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:15.154 23:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71265 /var/tmp/spdk2.sock 00:10:15.154 23:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71265 ']' 00:10:15.154 23:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:15.154 23:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:15.154 23:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:15.154 23:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.154 23:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:15.413 [2024-07-24 23:54:11.051066] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:10:15.413 [2024-07-24 23:54:11.051215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71265 ] 00:10:15.413 [2024-07-24 23:54:11.223343] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:15.413 [2024-07-24 23:54:11.223424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.981 [2024-07-24 23:54:11.586086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.358 23:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.358 23:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:17.358 23:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71249 00:10:17.358 23:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71249 00:10:17.358 23:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:18.294 23:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71249 00:10:18.294 23:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71249 ']' 00:10:18.294 23:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71249 00:10:18.294 23:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:18.294 23:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.294 23:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71249 00:10:18.294 23:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.294 23:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.294 killing process with pid 71249 00:10:18.294 23:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71249' 00:10:18.294 23:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71249 00:10:18.294 23:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71249 00:10:22.485 23:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71265 00:10:22.485 23:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71265 ']' 00:10:22.485 23:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71265 00:10:22.485 23:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:22.485 23:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.485 23:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71265 00:10:22.485 23:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.485 23:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.485 killing process with pid 71265 00:10:22.485 23:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71265' 00:10:22.485 23:54:17 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71265 00:10:22.485 23:54:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71265 00:10:24.386 00:10:24.386 real 0m10.084s 00:10:24.386 user 0m10.402s 00:10:24.386 sys 0m1.360s 00:10:24.386 23:54:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.386 23:54:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:24.386 ************************************ 00:10:24.386 END TEST non_locking_app_on_locked_coremask 00:10:24.386 ************************************ 00:10:24.386 23:54:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:24.386 23:54:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:24.386 23:54:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.386 23:54:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:24.386 ************************************ 00:10:24.386 START TEST locking_app_on_unlocked_coremask 00:10:24.386 ************************************ 00:10:24.386 23:54:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:10:24.386 23:54:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71396 00:10:24.386 23:54:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71396 /var/tmp/spdk.sock 00:10:24.386 23:54:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:24.386 23:54:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71396 ']' 00:10:24.386 23:54:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.386 23:54:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.386 23:54:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.386 23:54:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.386 23:54:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:24.386 [2024-07-24 23:54:20.091371] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:10:24.386 [2024-07-24 23:54:20.091544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71396 ] 00:10:24.644 [2024-07-24 23:54:20.263528] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:24.644 [2024-07-24 23:54:20.263602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.644 [2024-07-24 23:54:20.435673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.211 23:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.211 23:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:25.211 23:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:25.211 23:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71412 00:10:25.211 23:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71412 /var/tmp/spdk2.sock 00:10:25.211 23:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71412 ']' 00:10:25.211 23:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:25.211 23:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:25.211 23:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:25.469 23:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.469 23:54:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:25.469 [2024-07-24 23:54:21.139137] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:10:25.469 [2024-07-24 23:54:21.139285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71412 ] 00:10:25.469 [2024-07-24 23:54:21.311418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.034 [2024-07-24 23:54:21.672452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.411 23:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.411 23:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:27.411 23:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71412 00:10:27.411 23:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71412 00:10:27.411 23:54:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:27.976 23:54:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71396 00:10:27.976 23:54:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71396 ']' 00:10:27.976 23:54:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71396 00:10:27.976 23:54:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:27.976 23:54:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:27.976 23:54:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71396 00:10:27.976 23:54:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:27.976 23:54:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:27.976 killing process with pid 71396 00:10:27.976 23:54:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71396' 00:10:27.976 23:54:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71396 00:10:27.976 23:54:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71396 00:10:32.166 23:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71412 00:10:32.166 23:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71412 ']' 00:10:32.166 23:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71412 00:10:32.166 23:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:32.166 23:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.166 23:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71412 00:10:32.166 23:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.166 23:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.166 killing process with pid 71412 00:10:32.166 23:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71412' 00:10:32.166 23:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71412 00:10:32.166 23:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71412 00:10:34.701 00:10:34.701 real 0m10.161s 00:10:34.701 user 0m10.556s 00:10:34.701 sys 0m1.350s 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.701 ************************************ 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:34.701 END TEST locking_app_on_unlocked_coremask 00:10:34.701 ************************************ 00:10:34.701 23:54:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:34.701 23:54:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:34.701 23:54:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.701 23:54:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:34.701 ************************************ 00:10:34.701 START TEST locking_app_on_locked_coremask 00:10:34.701 ************************************ 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71542 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71542 /var/tmp/spdk.sock 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71542 ']' 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:34.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:34.701 23:54:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:34.701 [2024-07-24 23:54:30.313367] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:10:34.701 [2024-07-24 23:54:30.314120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71542 ] 00:10:34.701 [2024-07-24 23:54:30.488708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.959 [2024-07-24 23:54:30.692754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71558 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71558 /var/tmp/spdk2.sock 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71558 /var/tmp/spdk2.sock 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71558 /var/tmp/spdk2.sock 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71558 ']' 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.894 23:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:35.894 [2024-07-24 23:54:31.542872] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:10:35.894 [2024-07-24 23:54:31.543086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71558 ] 00:10:35.894 [2024-07-24 23:54:31.728768] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71542 has claimed it. 00:10:35.894 [2024-07-24 23:54:31.731935] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:36.462 ERROR: process (pid: 71558) is no longer running 00:10:36.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71558) - No such process 00:10:36.462 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.462 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:36.462 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:36.462 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:36.462 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:36.462 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:36.462 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71542 00:10:36.462 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:36.462 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71542 00:10:37.029 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71542 00:10:37.029 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71542 ']' 00:10:37.029 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71542 00:10:37.029 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:37.029 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.029 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71542 00:10:37.029 killing process with pid 71542 00:10:37.029 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.029 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.029 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71542' 00:10:37.029 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71542 00:10:37.029 23:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71542 00:10:39.583 00:10:39.583 real 0m4.925s 00:10:39.583 user 0m5.360s 00:10:39.583 sys 0m0.879s 00:10:39.583 ************************************ 00:10:39.583 END TEST locking_app_on_locked_coremask 00:10:39.583 ************************************ 00:10:39.583 23:54:35 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.583 23:54:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:39.583 23:54:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:39.583 23:54:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:39.583 23:54:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.583 23:54:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:39.583 ************************************ 00:10:39.583 START TEST locking_overlapped_coremask 00:10:39.583 ************************************ 00:10:39.583 23:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:10:39.583 23:54:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71633 00:10:39.584 23:54:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71633 /var/tmp/spdk.sock 00:10:39.584 23:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71633 ']' 00:10:39.584 23:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.584 23:54:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:39.584 23:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.584 23:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.584 23:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.584 23:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:39.584 [2024-07-24 23:54:35.282902] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:10:39.584 [2024-07-24 23:54:35.283106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71633 ] 00:10:39.842 [2024-07-24 23:54:35.461017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:39.842 [2024-07-24 23:54:35.669599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.842 [2024-07-24 23:54:35.669709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.842 [2024-07-24 23:54:35.669726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71652 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71652 /var/tmp/spdk2.sock 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71652 /var/tmp/spdk2.sock 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71652 /var/tmp/spdk2.sock 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71652 ']' 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:40.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.777 23:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:40.777 [2024-07-24 23:54:36.502568] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:10:40.777 [2024-07-24 23:54:36.502748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71652 ] 00:10:41.035 [2024-07-24 23:54:36.684262] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71633 has claimed it. 00:10:41.035 [2024-07-24 23:54:36.684360] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:41.602 ERROR: process (pid: 71652) is no longer running 00:10:41.602 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71652) - No such process 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71633 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71633 ']' 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71633 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71633 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71633' 00:10:41.602 killing process with pid 71633 00:10:41.602 23:54:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71633 00:10:41.602 23:54:37 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71633 00:10:43.504 00:10:43.504 real 0m3.991s 00:10:43.504 user 0m10.506s 00:10:43.504 sys 0m0.592s 00:10:43.505 ************************************ 00:10:43.505 END TEST locking_overlapped_coremask 00:10:43.505 ************************************ 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:43.505 23:54:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:43.505 23:54:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:43.505 23:54:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.505 23:54:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:43.505 ************************************ 00:10:43.505 START TEST locking_overlapped_coremask_via_rpc 00:10:43.505 ************************************ 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:10:43.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71711 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71711 /var/tmp/spdk.sock 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71711 ']' 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.505 23:54:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.505 [2024-07-24 23:54:39.325284] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:10:43.505 [2024-07-24 23:54:39.325456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71711 ] 00:10:43.763 [2024-07-24 23:54:39.496616] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
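A note on the core-lock mechanism these tests exercise: each spdk_tgt instance claims the cores in its -m mask by taking an fcntl lock on a per-core file named /var/tmp/spdk_cpu_lock_NNN, and the harness's locks_exist helper verifies the claim with lslocks. A minimal standalone sketch of that check, with the pid value assumed for illustration:

# Sketch of the harness's locks_exist check (pid is hypothetical here).
pid=71711
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds at least one CPU core lock"
fi
# One lock file per claimed core, zero-padded to three digits:
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null

The run starting here passes --disable-cpumask-locks, so the target prints "CPU core locks deactivated." and takes no locks until the framework_enable_cpumask_locks RPC below requests them.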
00:10:43.763 [2024-07-24 23:54:39.496959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:44.022 [2024-07-24 23:54:39.678702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.022 [2024-07-24 23:54:39.678774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.022 [2024-07-24 23:54:39.678790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.590 23:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.590 23:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:44.590 23:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71729 00:10:44.590 23:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:44.590 23:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71729 /var/tmp/spdk2.sock 00:10:44.590 23:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71729 ']' 00:10:44.590 23:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:44.590 23:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:44.590 23:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:44.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:44.590 23:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:44.590 23:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.590 [2024-07-24 23:54:40.425378] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:10:44.590 [2024-07-24 23:54:40.425822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71729 ] 00:10:44.849 [2024-07-24 23:54:40.604345] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:44.849 [2024-07-24 23:54:40.607853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:45.418 [2024-07-24 23:54:40.979810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.418 [2024-07-24 23:54:40.983956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.418 [2024-07-24 23:54:40.983980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.330 [2024-07-24 23:54:43.117054] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71711 has claimed it. 00:10:47.330 request: 00:10:47.330 { 00:10:47.330 "method": "framework_enable_cpumask_locks", 00:10:47.330 "req_id": 1 00:10:47.330 } 00:10:47.330 Got JSON-RPC error response 00:10:47.330 response: 00:10:47.330 { 00:10:47.330 "code": -32603, 00:10:47.330 "message": "Failed to claim CPU core: 2" 00:10:47.330 } 00:10:47.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
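The request/response pair above is the point of this test: framework_enable_cpumask_locks asks an already-running target to claim its cores, and the second target fails with -32603 because the first one now holds the lock on core 2 (masks 0x7 and 0x1c overlap there). Outside the harness, roughly the same exchange could be driven directly with rpc.py; the socket paths below are the ones used in this run:

# First target (cores 0-2, started with --disable-cpumask-locks) claims its cores:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
# Second target (cores 2-4) overlaps on core 2, so the same request is expected
# to fail with "Failed to claim CPU core: 2":
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "claim failed as expected"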
00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71711 /var/tmp/spdk.sock 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71711 ']' 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.330 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.610 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.610 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:47.610 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71729 /var/tmp/spdk2.sock 00:10:47.610 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71729 ']' 00:10:47.610 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:47.610 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.610 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:47.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
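Worth noting when reading these traces: the NOT wrapper around expected-failure calls (NOT waitforlisten ..., NOT rpc_cmd ...) inverts the exit status, so the enclosing run_test passes only when the wrapped command fails. A simplified sketch of the idiom, assuming away the harness's valid_exec_arg argument checking:

# Sketch: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is the expected outcome
}
NOT false && echo "exit status inverted as expected"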
00:10:47.610 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.610 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.867 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.867 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:47.867 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:47.867 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:47.867 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:47.867 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:47.867 ************************************ 00:10:47.867 END TEST locking_overlapped_coremask_via_rpc 00:10:47.867 ************************************ 00:10:47.868 00:10:47.868 real 0m4.364s 00:10:47.868 user 0m1.443s 00:10:47.868 sys 0m0.208s 00:10:47.868 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.868 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.868 23:54:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:47.868 23:54:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71711 ]] 00:10:47.868 23:54:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71711 00:10:47.868 23:54:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71711 ']' 00:10:47.868 23:54:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71711 00:10:47.868 23:54:43 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:47.868 23:54:43 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:47.868 23:54:43 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71711 00:10:47.868 killing process with pid 71711 00:10:47.868 23:54:43 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:47.868 23:54:43 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:47.868 23:54:43 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71711' 00:10:47.868 23:54:43 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71711 00:10:47.868 23:54:43 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71711 00:10:50.396 23:54:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71729 ]] 00:10:50.396 23:54:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71729 00:10:50.396 23:54:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71729 ']' 00:10:50.396 23:54:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71729 00:10:50.396 23:54:45 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:50.396 23:54:45 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.396 
23:54:45 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71729 00:10:50.396 killing process with pid 71729 00:10:50.396 23:54:45 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:10:50.396 23:54:45 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:10:50.396 23:54:45 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71729' 00:10:50.396 23:54:45 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71729 00:10:50.396 23:54:45 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71729 00:10:52.927 23:54:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:52.927 Process with pid 71711 is not found 00:10:52.927 Process with pid 71729 is not found 00:10:52.927 23:54:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:52.927 23:54:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71711 ]] 00:10:52.927 23:54:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71711 00:10:52.927 23:54:48 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71711 ']' 00:10:52.927 23:54:48 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71711 00:10:52.927 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71711) - No such process 00:10:52.927 23:54:48 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71711 is not found' 00:10:52.927 23:54:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71729 ]] 00:10:52.927 23:54:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71729 00:10:52.927 23:54:48 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71729 ']' 00:10:52.927 23:54:48 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71729 00:10:52.927 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71729) - No such process 00:10:52.927 23:54:48 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71729 is not found' 00:10:52.927 23:54:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:52.927 00:10:52.927 real 0m45.797s 00:10:52.927 user 1m19.403s 00:10:52.927 sys 0m6.781s 00:10:52.927 23:54:48 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.927 23:54:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:52.928 ************************************ 00:10:52.928 END TEST cpu_locks 00:10:52.928 ************************************ 00:10:52.928 ************************************ 00:10:52.928 END TEST event 00:10:52.928 ************************************ 00:10:52.928 00:10:52.928 real 1m16.902s 00:10:52.928 user 2m19.497s 00:10:52.928 sys 0m10.654s 00:10:52.928 23:54:48 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.928 23:54:48 event -- common/autotest_common.sh@10 -- # set +x 00:10:52.928 23:54:48 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:52.928 23:54:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:52.928 23:54:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.928 23:54:48 -- common/autotest_common.sh@10 -- # set +x 00:10:52.928 ************************************ 00:10:52.928 START TEST thread 00:10:52.928 ************************************ 00:10:52.928 23:54:48 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:52.928 * Looking for test storage... 
00:10:52.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:52.928 23:54:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:52.928 23:54:48 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:52.928 23:54:48 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.928 23:54:48 thread -- common/autotest_common.sh@10 -- # set +x 00:10:52.928 ************************************ 00:10:52.928 START TEST thread_poller_perf 00:10:52.928 ************************************ 00:10:52.928 23:54:48 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:52.928 [2024-07-24 23:54:48.411678] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:10:52.928 [2024-07-24 23:54:48.412571] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71913 ] 00:10:52.928 [2024-07-24 23:54:48.580231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.186 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:53.186 [2024-07-24 23:54:48.834257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.560 ====================================== 00:10:54.560 busy:2211178692 (cyc) 00:10:54.560 total_run_count: 276000 00:10:54.560 tsc_hz: 2200000000 (cyc) 00:10:54.560 ====================================== 00:10:54.560 poller_cost: 8011 (cyc), 3641 (nsec) 00:10:54.560 00:10:54.560 real 0m1.901s 00:10:54.560 user 0m1.691s 00:10:54.560 sys 0m0.109s 00:10:54.560 23:54:50 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.560 ************************************ 00:10:54.560 23:54:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:54.560 END TEST thread_poller_perf 00:10:54.560 ************************************ 00:10:54.560 23:54:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:54.560 23:54:50 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:54.560 23:54:50 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.560 23:54:50 thread -- common/autotest_common.sh@10 -- # set +x 00:10:54.560 ************************************ 00:10:54.560 START TEST thread_poller_perf 00:10:54.560 ************************************ 00:10:54.560 23:54:50 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:54.560 [2024-07-24 23:54:50.371302] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:10:54.561 [2024-07-24 23:54:50.371475] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71955 ] 00:10:54.819 [2024-07-24 23:54:50.551820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.158 Running 1000 pollers for 1 seconds with 0 microseconds period. 
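The poller_perf summary printed above is plain arithmetic: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts through tsc_hz. Reproducing the first run's numbers with values copied from its output:

# Busy-poller run: 276000 iterations of a 1000-poller, 1 us period workload.
busy=2211178692; runs=276000; tsc_hz=2200000000
cyc=$(( busy / runs ))                   # 8011 cycles per poll
nsec=$(( cyc * 1000000000 / tsc_hz ))    # 3641 ns at 2.2 GHz
echo "poller_cost: $cyc (cyc), $nsec (nsec)"

The timer-less run that follows drives total_run_count far higher, so its per-poll cost drops accordingly.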
00:10:55.158 [2024-07-24 23:54:50.762353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.543 ====================================== 00:10:56.544 busy:2204375084 (cyc) 00:10:56.544 total_run_count: 3584000 00:10:56.544 tsc_hz: 2200000000 (cyc) 00:10:56.544 ====================================== 00:10:56.544 poller_cost: 615 (cyc), 279 (nsec) 00:10:56.544 00:10:56.544 real 0m1.867s 00:10:56.544 user 0m1.639s 00:10:56.544 sys 0m0.127s 00:10:56.544 23:54:52 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.544 ************************************ 00:10:56.544 END TEST thread_poller_perf 00:10:56.544 ************************************ 00:10:56.544 23:54:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:56.544 23:54:52 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:10:56.544 23:54:52 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:56.544 23:54:52 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:56.544 23:54:52 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.544 23:54:52 thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.544 ************************************ 00:10:56.544 START TEST thread_spdk_lock 00:10:56.544 ************************************ 00:10:56.544 23:54:52 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:56.544 [2024-07-24 23:54:52.296467] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:10:56.544 [2024-07-24 23:54:52.296671] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71991 ] 00:10:56.801 [2024-07-24 23:54:52.473113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:57.060 [2024-07-24 23:54:52.678365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.060 [2024-07-24 23:54:52.678379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.628 [2024-07-24 23:54:53.261290] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:57.628 [2024-07-24 23:54:53.261437] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:10:57.628 [2024-07-24 23:54:53.261477] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x5b76f130a380 00:10:57.628 [2024-07-24 23:54:53.270745] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:57.628 [2024-07-24 23:54:53.270847] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:57.628 [2024-07-24 23:54:53.270881] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 
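The *ERROR* lines above are provoked on purpose: spdk_lock drives the spinlock misuse paths (a lock held while a thread leaves its core, a deadlocking second spdk_spin_lock) and counts each caught error as a passing assertion, hence the "assertions passed" summary below. In the contend table that follows, each worker's Total us appears to be simply Wait us plus Hold us; checking worker 1 from the output (worker 0 is off by one, presumably from rounding):

# Worker 1 figures copied from the contend table below.
wait_us=59800; hold_us=319585
echo $(( wait_us + hold_us ))    # 379385, matching the reported Total us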
00:10:57.887 Starting test contend 00:10:57.887 Worker Delay Wait us Hold us Total us 00:10:57.887 0 3 122914 216053 338968 00:10:57.887 1 5 59800 319585 379385 00:10:57.887 PASS test contend 00:10:57.887 Starting test hold_by_poller 00:10:57.887 PASS test hold_by_poller 00:10:57.887 Starting test hold_by_message 00:10:57.887 PASS test hold_by_message 00:10:57.888 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:10:57.888 100014 assertions passed 00:10:57.888 0 assertions failed 00:10:57.888 00:10:57.888 real 0m1.446s 00:10:57.888 user 0m1.824s 00:10:57.888 sys 0m0.113s 00:10:57.888 ************************************ 00:10:57.888 END TEST thread_spdk_lock 00:10:57.888 ************************************ 00:10:57.888 23:54:53 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.888 23:54:53 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:10:57.888 00:10:57.888 real 0m5.463s 00:10:57.888 user 0m5.234s 00:10:57.888 sys 0m0.510s 00:10:57.888 23:54:53 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.888 23:54:53 thread -- common/autotest_common.sh@10 -- # set +x 00:10:57.888 ************************************ 00:10:57.888 END TEST thread 00:10:57.888 ************************************ 00:10:58.146 23:54:53 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:10:58.146 23:54:53 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:58.146 23:54:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:58.146 23:54:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.146 23:54:53 -- common/autotest_common.sh@10 -- # set +x 00:10:58.146 ************************************ 00:10:58.146 START TEST app_cmdline 00:10:58.146 ************************************ 00:10:58.146 23:54:53 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:58.146 * Looking for test storage... 00:10:58.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:58.146 23:54:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:58.146 23:54:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72068 00:10:58.146 23:54:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72068 00:10:58.146 23:54:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:58.146 23:54:53 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72068 ']' 00:10:58.146 23:54:53 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.146 23:54:53 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.146 23:54:53 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.146 23:54:53 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.146 23:54:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:58.146 [2024-07-24 23:54:53.949158] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:10:58.146 [2024-07-24 23:54:53.949332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72068 ] 00:10:58.405 [2024-07-24 23:54:54.115984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.664 [2024-07-24 23:54:54.322137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.231 23:54:55 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.231 23:54:55 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:10:59.231 23:54:55 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:59.489 { 00:10:59.489 "version": "SPDK v24.09-pre git sha1 d005e023b", 00:10:59.489 "fields": { 00:10:59.489 "major": 24, 00:10:59.489 "minor": 9, 00:10:59.489 "patch": 0, 00:10:59.489 "suffix": "-pre", 00:10:59.489 "commit": "d005e023b" 00:10:59.489 } 00:10:59.489 } 00:10:59.489 23:54:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:59.489 23:54:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:59.489 23:54:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:59.489 23:54:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:59.489 23:54:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:59.489 23:54:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:59.489 23:54:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.489 23:54:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:59.489 23:54:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:59.489 23:54:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:59.489 23:54:55 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.747 23:54:55 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:59.747 23:54:55 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:59.747 request: 00:10:59.747 { 00:10:59.747 "method": "env_dpdk_get_mem_stats", 00:10:59.747 "req_id": 1 00:10:59.747 } 00:10:59.747 Got JSON-RPC error response 00:10:59.747 response: 00:10:59.747 { 00:10:59.747 "code": -32601, 00:10:59.747 "message": "Method not found" 00:10:59.747 } 00:10:59.747 23:54:55 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:59.747 23:54:55 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:59.747 23:54:55 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:59.747 23:54:55 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:59.747 23:54:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72068 00:10:59.747 23:54:55 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72068 ']' 00:10:59.747 23:54:55 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72068 00:10:59.747 23:54:55 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:10:59.747 23:54:55 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.747 23:54:55 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72068 00:11:00.004 23:54:55 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.004 killing process with pid 72068 00:11:00.004 23:54:55 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.004 23:54:55 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72068' 00:11:00.004 23:54:55 app_cmdline -- common/autotest_common.sh@969 -- # kill 72068 00:11:00.004 23:54:55 app_cmdline -- common/autotest_common.sh@974 -- # wait 72068 00:11:01.905 00:11:01.905 real 0m3.745s 00:11:01.905 user 0m4.191s 00:11:01.905 sys 0m0.562s 00:11:01.905 23:54:57 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.905 23:54:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:01.905 ************************************ 00:11:01.905 END TEST app_cmdline 00:11:01.905 ************************************ 00:11:01.905 23:54:57 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:01.905 23:54:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:01.905 23:54:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.905 23:54:57 -- common/autotest_common.sh@10 -- # set +x 00:11:01.905 ************************************ 00:11:01.905 START TEST version 00:11:01.905 ************************************ 00:11:01.905 23:54:57 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:01.905 * Looking for test storage... 
00:11:01.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:01.905 23:54:57 version -- app/version.sh@17 -- # get_header_version major 00:11:01.905 23:54:57 version -- app/version.sh@14 -- # cut -f2 00:11:01.905 23:54:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:01.905 23:54:57 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.905 23:54:57 version -- app/version.sh@17 -- # major=24 00:11:01.905 23:54:57 version -- app/version.sh@18 -- # get_header_version minor 00:11:01.905 23:54:57 version -- app/version.sh@14 -- # cut -f2 00:11:01.905 23:54:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:01.905 23:54:57 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.905 23:54:57 version -- app/version.sh@18 -- # minor=9 00:11:01.905 23:54:57 version -- app/version.sh@19 -- # get_header_version patch 00:11:01.905 23:54:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:01.905 23:54:57 version -- app/version.sh@14 -- # cut -f2 00:11:01.905 23:54:57 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.905 23:54:57 version -- app/version.sh@19 -- # patch=0 00:11:01.905 23:54:57 version -- app/version.sh@20 -- # get_header_version suffix 00:11:01.905 23:54:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:01.905 23:54:57 version -- app/version.sh@14 -- # cut -f2 00:11:01.905 23:54:57 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.905 23:54:57 version -- app/version.sh@20 -- # suffix=-pre 00:11:01.905 23:54:57 version -- app/version.sh@22 -- # version=24.9 00:11:01.905 23:54:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:01.905 23:54:57 version -- app/version.sh@28 -- # version=24.9rc0 00:11:01.905 23:54:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:01.905 23:54:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:01.905 23:54:57 version -- app/version.sh@30 -- # py_version=24.9rc0 00:11:01.905 23:54:57 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:11:01.905 00:11:01.905 real 0m0.162s 00:11:01.905 user 0m0.103s 00:11:01.905 sys 0m0.098s 00:11:01.905 23:54:57 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.905 23:54:57 version -- common/autotest_common.sh@10 -- # set +x 00:11:01.905 ************************************ 00:11:01.905 END TEST version 00:11:01.905 ************************************ 00:11:02.164 23:54:57 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:11:02.164 23:54:57 -- spdk/autotest.sh@193 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:02.164 23:54:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:02.164 23:54:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.164 23:54:57 -- common/autotest_common.sh@10 -- # set +x 00:11:02.164 ************************************ 00:11:02.164 START TEST blockdev_general 00:11:02.164 ************************************ 00:11:02.164 23:54:57 blockdev_general -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:02.164 * Looking for test storage... 00:11:02.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:02.164 23:54:57 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72233 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 72233 00:11:02.164 23:54:57 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:02.164 23:54:57 blockdev_general -- common/autotest_common.sh@831 -- # '[' -z 72233 ']' 00:11:02.164 23:54:57 blockdev_general -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.164 23:54:57 blockdev_general -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.164 23:54:57 blockdev_general -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
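Here blockdev.sh brings up spdk_tgt with --wait-for-rpc, which parks the app before subsystem initialization so configuration can be pushed over RPC first; waitforlisten then blocks until the target answers on the socket. A minimal sketch of that start-then-configure sequence, assuming the default /var/tmp/spdk.sock socket; the inline poll is a crude stand-in for the autotest waitforlisten helper:

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" --wait-for-rpc &
    tgt_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done       # wait for the RPC socket
    "$spdk/scripts/rpc.py" framework_start_init               # run the deferred subsystem init
    "$spdk/scripts/rpc.py" bdev_malloc_create 32 512 -b Malloc0  # bdev RPCs are live now
    kill "$tgt_pid" && wait "$tgt_pid"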
00:11:02.164 23:54:57 blockdev_general -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.164 23:54:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:02.164 [2024-07-24 23:54:57.983136] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:11:02.164 [2024-07-24 23:54:57.983346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72233 ] 00:11:02.423 [2024-07-24 23:54:58.153879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.682 [2024-07-24 23:54:58.323513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.248 23:54:58 blockdev_general -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.248 23:54:58 blockdev_general -- common/autotest_common.sh@864 -- # return 0 00:11:03.248 23:54:58 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:03.248 23:54:58 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:11:03.248 23:54:58 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:11:03.248 23:54:58 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.248 23:54:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:03.859 [2024-07-24 23:54:59.603786] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:03.859 [2024-07-24 23:54:59.603914] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:03.859 00:11:03.859 [2024-07-24 23:54:59.611698] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:03.859 [2024-07-24 23:54:59.611758] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:03.859 00:11:03.859 Malloc0 00:11:03.859 Malloc1 00:11:04.118 Malloc2 00:11:04.118 Malloc3 00:11:04.118 Malloc4 00:11:04.118 Malloc5 00:11:04.118 Malloc6 00:11:04.118 Malloc7 00:11:04.118 Malloc8 00:11:04.118 Malloc9 00:11:04.118 [2024-07-24 23:54:59.963634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:04.118 [2024-07-24 23:54:59.963703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.118 [2024-07-24 23:54:59.963758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:11:04.118 [2024-07-24 23:54:59.963773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.118 [2024-07-24 23:54:59.966076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.118 [2024-07-24 23:54:59.966118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:04.118 TestPT 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.377 23:55:00 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:04.377 5000+0 records in 00:11:04.377 5000+0 records out 00:11:04.377 10240000 bytes (10 MB, 9.8 MiB) copied, 0.022046 s, 464 MB/s 00:11:04.377 23:55:00 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.377 23:55:00 
blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:04.377 AIO0 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.377 23:55:00 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.377 23:55:00 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:11:04.377 23:55:00 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.377 23:55:00 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.377 23:55:00 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.377 23:55:00 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:04.377 23:55:00 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.377 23:55:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:04.377 23:55:00 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:11:04.637 23:55:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.637 23:55:00 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:11:04.637 23:55:00 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:11:04.638 23:55:00 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "0bfc9b7f-5ded-4936-b318-eacdf29806a6"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0bfc9b7f-5ded-4936-b318-eacdf29806a6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 
2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "2a4e0445-be2e-5c5a-aa7f-9a21498bbf7b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2a4e0445-be2e-5c5a-aa7f-9a21498bbf7b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "33ca91c8-3382-520e-87a2-2b5ccfd121b5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "33ca91c8-3382-520e-87a2-2b5ccfd121b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "eef0472c-1cca-5e4a-ac07-5a51256f11b6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "eef0472c-1cca-5e4a-ac07-5a51256f11b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "572e08bd-9190-5187-b8cf-e09203811ba2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "572e08bd-9190-5187-b8cf-e09203811ba2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "474f4b3c-64e9-5a1f-b641-673bdfe8913c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "474f4b3c-64e9-5a1f-b641-673bdfe8913c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "21bd5a47-3777-5704-bcb2-8686aef7cde3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "21bd5a47-3777-5704-bcb2-8686aef7cde3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "be3e02a2-b466-5c80-839e-14e5abc98698"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "be3e02a2-b466-5c80-839e-14e5abc98698",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "cc2aff87-2ee5-55d9-9f6a-791f1ddbac8b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cc2aff87-2ee5-55d9-9f6a-791f1ddbac8b",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "57161e87-ec21-5ddd-ae8b-52e7caae91d7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "57161e87-ec21-5ddd-ae8b-52e7caae91d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "b4343e04-a77a-52c8-9fba-414b4999b410"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b4343e04-a77a-52c8-9fba-414b4999b410",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "dbb3ab92-d0cf-5ee6-8506-00c4d08da0db"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "dbb3ab92-d0cf-5ee6-8506-00c4d08da0db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": 
false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "7565b3b9-9f2d-4d6e-bb30-d49d920a396a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7565b3b9-9f2d-4d6e-bb30-d49d920a396a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7565b3b9-9f2d-4d6e-bb30-d49d920a396a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a06a3604-fc06-4076-99a4-490b712b43ce",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "e5ffb729-7fdb-489c-8b20-6ee49f829f70",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "0df86aae-3c06-4ce9-aed1-f8d541228ddd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0df86aae-3c06-4ce9-aed1-f8d541228ddd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0df86aae-3c06-4ce9-aed1-f8d541228ddd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' 
"num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "55cfb7ec-2a70-400d-a3fe-5eefe7cfe473",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "60320ef5-35b3-4c5b-8bfd-0d8c5a40a7e5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "64829fa2-c216-45a1-a1fd-49e50b163965"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "64829fa2-c216-45a1-a1fd-49e50b163965",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "64829fa2-c216-45a1-a1fd-49e50b163965",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3d6dfa50-e833-4a9e-bd9b-8c823fe842de",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "d9a1395f-f490-4090-86f9-16ced34e15e9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b2b0b5d5-1436-4f78-84bb-da93eeea4189"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b2b0b5d5-1436-4f78-84bb-da93eeea4189",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:04.638 23:55:00 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:11:04.638 23:55:00 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:11:04.638 23:55:00 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 
00:11:04.638 23:55:00 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 72233 00:11:04.638 23:55:00 blockdev_general -- common/autotest_common.sh@950 -- # '[' -z 72233 ']' 00:11:04.639 23:55:00 blockdev_general -- common/autotest_common.sh@954 -- # kill -0 72233 00:11:04.639 23:55:00 blockdev_general -- common/autotest_common.sh@955 -- # uname 00:11:04.639 23:55:00 blockdev_general -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:04.639 23:55:00 blockdev_general -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72233 00:11:04.639 23:55:00 blockdev_general -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:04.639 23:55:00 blockdev_general -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:04.639 23:55:00 blockdev_general -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72233' 00:11:04.639 killing process with pid 72233 00:11:04.639 23:55:00 blockdev_general -- common/autotest_common.sh@969 -- # kill 72233 00:11:04.639 23:55:00 blockdev_general -- common/autotest_common.sh@974 -- # wait 72233 00:11:07.927 23:55:03 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:07.927 23:55:03 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:07.927 23:55:03 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:07.927 23:55:03 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.927 23:55:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:07.927 ************************************ 00:11:07.927 START TEST bdev_hello_world 00:11:07.927 ************************************ 00:11:07.927 23:55:03 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:07.927 [2024-07-24 23:55:03.209268] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:11:07.927 [2024-07-24 23:55:03.209458] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72314 ] 00:11:07.927 [2024-07-24 23:55:03.381544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.927 [2024-07-24 23:55:03.534836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.185 [2024-07-24 23:55:03.847177] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:08.185 [2024-07-24 23:55:03.847279] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:08.185 [2024-07-24 23:55:03.855120] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:08.185 [2024-07-24 23:55:03.855166] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:08.185 [2024-07-24 23:55:03.863123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:08.185 [2024-07-24 23:55:03.863175] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:08.186 [2024-07-24 23:55:03.863195] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:08.186 [2024-07-24 23:55:04.034794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:08.186 [2024-07-24 23:55:04.034886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.186 [2024-07-24 23:55:04.034924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:11:08.186 [2024-07-24 23:55:04.034938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.186 [2024-07-24 23:55:04.037735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.186 [2024-07-24 23:55:04.037789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:08.752 [2024-07-24 23:55:04.325731] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:08.752 [2024-07-24 23:55:04.325836] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:08.752 [2024-07-24 23:55:04.325898] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:08.752 [2024-07-24 23:55:04.325999] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:08.752 [2024-07-24 23:55:04.326105] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:08.752 [2024-07-24 23:55:04.326140] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:08.752 [2024-07-24 23:55:04.326215] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
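The hello_world pass above is self-contained: hello_bdev loads the bdev stack from the shared bdev.json, opens Malloc0, writes "Hello World!", and reads it back. Stripped of the run_test wrapper, the invocation is just (paths exactly as logged):

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/examples/hello_bdev" \
        --json "$spdk/test/bdev/bdev.json" \
        -b Malloc0    # the bdev to open, write to, and read back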
00:11:08.752 00:11:08.752 [2024-07-24 23:55:04.326272] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:10.656 00:11:10.656 real 0m3.078s 00:11:10.656 user 0m2.597s 00:11:10.656 sys 0m0.350s 00:11:10.656 23:55:06 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.656 ************************************ 00:11:10.656 END TEST bdev_hello_world 00:11:10.656 ************************************ 00:11:10.656 23:55:06 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:10.656 23:55:06 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:10.656 23:55:06 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:10.656 23:55:06 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.656 23:55:06 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:10.656 ************************************ 00:11:10.656 START TEST bdev_bounds 00:11:10.656 ************************************ 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72362 00:11:10.656 Process bdevio pid: 72362 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72362' 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72362 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 72362 ']' 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.656 23:55:06 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:10.656 [2024-07-24 23:55:06.343231] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:11:10.656 [2024-07-24 23:55:06.343418] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72362 ] 00:11:10.656 [2024-07-24 23:55:06.516632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:10.915 [2024-07-24 23:55:06.712104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.915 [2024-07-24 23:55:06.712168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.915 [2024-07-24 23:55:06.712189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.482 [2024-07-24 23:55:07.063123] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:11.482 [2024-07-24 23:55:07.063196] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:11.482 [2024-07-24 23:55:07.071067] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:11.482 [2024-07-24 23:55:07.071117] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:11.482 [2024-07-24 23:55:07.079074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:11.482 [2024-07-24 23:55:07.079120] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:11.482 [2024-07-24 23:55:07.079136] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:11.482 [2024-07-24 23:55:07.257276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:11.482 [2024-07-24 23:55:07.257359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.482 [2024-07-24 23:55:07.257391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:11:11.482 [2024-07-24 23:55:07.257406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.482 [2024-07-24 23:55:07.260097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.482 [2024-07-24 23:55:07.260141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:11.740 23:55:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.740 23:55:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:11:11.741 23:55:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:11.999 I/O targets: 00:11:11.999 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:11.999 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:11.999 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:11.999 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:11.999 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:11.999 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:11.999 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:11.999 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:11.999 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:11.999 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:11.999 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:11.999 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:11.999 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:11.999 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:11:11.999 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:11.999 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:11:11.999 00:11:11.999 00:11:11.999 CUnit - A unit testing framework for C - Version 2.1-3 00:11:11.999 http://cunit.sourceforge.net/ 00:11:11.999 00:11:11.999 00:11:11.999 Suite: bdevio tests on: AIO0 00:11:11.999 Test: blockdev write read block ...passed 00:11:11.999 Test: blockdev write zeroes read block ...passed 00:11:11.999 Test: blockdev write zeroes read no split ...passed 00:11:11.999 Test: blockdev write zeroes read split ...passed 00:11:11.999 Test: blockdev write zeroes read split partial ...passed 00:11:11.999 Test: blockdev reset ...passed 00:11:11.999 Test: blockdev write read 8 blocks ...passed 00:11:11.999 Test: blockdev write read size > 128k ...passed 00:11:11.999 Test: blockdev write read invalid size ...passed 00:11:11.999 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.999 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.999 Test: blockdev write read max offset ...passed 00:11:11.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.999 Test: blockdev writev readv 8 blocks ...passed 00:11:11.999 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.999 Test: blockdev writev readv block ...passed 00:11:11.999 Test: blockdev writev readv size > 128k ...passed 00:11:11.999 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.999 Test: blockdev comparev and writev ...passed 00:11:11.999 Test: blockdev nvme passthru rw ...passed 00:11:11.999 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.999 Test: blockdev nvme admin passthru ...passed 00:11:11.999 Test: blockdev copy ...passed 00:11:11.999 Suite: bdevio tests on: raid1 00:11:11.999 Test: blockdev write read block ...passed 00:11:11.999 Test: blockdev write zeroes read block ...passed 00:11:11.999 Test: blockdev write zeroes read no split ...passed 00:11:11.999 Test: blockdev write zeroes read split ...passed 00:11:11.999 Test: blockdev write zeroes read split partial ...passed 00:11:11.999 Test: blockdev reset ...passed 00:11:11.999 Test: blockdev write read 8 blocks ...passed 00:11:11.999 Test: blockdev write read size > 128k ...passed 00:11:11.999 Test: blockdev write read invalid size ...passed 00:11:11.999 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.999 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.999 Test: blockdev write read max offset ...passed 00:11:11.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.999 Test: blockdev writev readv 8 blocks ...passed 00:11:11.999 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.999 Test: blockdev writev readv block ...passed 00:11:11.999 Test: blockdev writev readv size > 128k ...passed 00:11:11.999 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.999 Test: blockdev comparev and writev ...passed 00:11:11.999 Test: blockdev nvme passthru rw ...passed 00:11:11.999 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.999 Test: blockdev nvme admin passthru ...passed 00:11:11.999 Test: blockdev copy ...passed 00:11:11.999 Suite: bdevio tests on: concat0 00:11:11.999 Test: blockdev write read block ...passed 00:11:11.999 Test: blockdev write zeroes read block ...passed 00:11:11.999 Test: blockdev write zeroes read no split ...passed 00:11:12.258 Test: blockdev write zeroes read split 
...passed 00:11:12.258 Test: blockdev write zeroes read split partial ...passed 00:11:12.258 Test: blockdev reset ...passed 00:11:12.258 Test: blockdev write read 8 blocks ...passed 00:11:12.258 Test: blockdev write read size > 128k ...passed 00:11:12.258 Test: blockdev write read invalid size ...passed 00:11:12.258 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.258 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.258 Test: blockdev write read max offset ...passed 00:11:12.258 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.258 Test: blockdev writev readv 8 blocks ...passed 00:11:12.258 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.258 Test: blockdev writev readv block ...passed 00:11:12.258 Test: blockdev writev readv size > 128k ...passed 00:11:12.258 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.258 Test: blockdev comparev and writev ...passed 00:11:12.258 Test: blockdev nvme passthru rw ...passed 00:11:12.258 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.258 Test: blockdev nvme admin passthru ...passed 00:11:12.258 Test: blockdev copy ...passed 00:11:12.258 Suite: bdevio tests on: raid0 00:11:12.258 Test: blockdev write read block ...passed 00:11:12.258 Test: blockdev write zeroes read block ...passed 00:11:12.258 Test: blockdev write zeroes read no split ...passed 00:11:12.258 Test: blockdev write zeroes read split ...passed 00:11:12.258 Test: blockdev write zeroes read split partial ...passed 00:11:12.258 Test: blockdev reset ...passed 00:11:12.258 Test: blockdev write read 8 blocks ...passed 00:11:12.258 Test: blockdev write read size > 128k ...passed 00:11:12.258 Test: blockdev write read invalid size ...passed 00:11:12.258 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.258 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.259 Test: blockdev write read max offset ...passed 00:11:12.259 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.259 Test: blockdev writev readv 8 blocks ...passed 00:11:12.259 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.259 Test: blockdev writev readv block ...passed 00:11:12.259 Test: blockdev writev readv size > 128k ...passed 00:11:12.259 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.259 Test: blockdev comparev and writev ...passed 00:11:12.259 Test: blockdev nvme passthru rw ...passed 00:11:12.259 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.259 Test: blockdev nvme admin passthru ...passed 00:11:12.259 Test: blockdev copy ...passed 00:11:12.259 Suite: bdevio tests on: TestPT 00:11:12.259 Test: blockdev write read block ...passed 00:11:12.259 Test: blockdev write zeroes read block ...passed 00:11:12.259 Test: blockdev write zeroes read no split ...passed 00:11:12.259 Test: blockdev write zeroes read split ...passed 00:11:12.259 Test: blockdev write zeroes read split partial ...passed 00:11:12.259 Test: blockdev reset ...passed 00:11:12.259 Test: blockdev write read 8 blocks ...passed 00:11:12.259 Test: blockdev write read size > 128k ...passed 00:11:12.259 Test: blockdev write read invalid size ...passed 00:11:12.259 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.259 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.259 Test: blockdev write read max offset ...passed 00:11:12.259 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.259 Test: blockdev writev readv 8 blocks ...passed 00:11:12.259 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.259 Test: blockdev writev readv block ...passed 00:11:12.259 Test: blockdev writev readv size > 128k ...passed 00:11:12.259 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.259 Test: blockdev comparev and writev ...passed 00:11:12.259 Test: blockdev nvme passthru rw ...passed 00:11:12.259 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.259 Test: blockdev nvme admin passthru ...passed 00:11:12.259 Test: blockdev copy ...passed 00:11:12.259 Suite: bdevio tests on: Malloc2p7 00:11:12.259 Test: blockdev write read block ...passed 00:11:12.259 Test: blockdev write zeroes read block ...passed 00:11:12.259 Test: blockdev write zeroes read no split ...passed 00:11:12.518 Test: blockdev write zeroes read split ...passed 00:11:12.518 Test: blockdev write zeroes read split partial ...passed 00:11:12.518 Test: blockdev reset ...passed 00:11:12.518 Test: blockdev write read 8 blocks ...passed 00:11:12.518 Test: blockdev write read size > 128k ...passed 00:11:12.518 Test: blockdev write read invalid size ...passed 00:11:12.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.518 Test: blockdev write read max offset ...passed 00:11:12.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.518 Test: blockdev writev readv 8 blocks ...passed 00:11:12.518 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.518 Test: blockdev writev readv block ...passed 00:11:12.518 Test: blockdev writev readv size > 128k ...passed 00:11:12.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.518 Test: blockdev comparev and writev ...passed 00:11:12.518 Test: blockdev nvme passthru rw ...passed 00:11:12.518 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.518 Test: blockdev nvme admin passthru ...passed 00:11:12.518 Test: blockdev copy ...passed 00:11:12.518 Suite: bdevio tests on: Malloc2p6 00:11:12.518 Test: blockdev write read block ...passed 00:11:12.518 Test: blockdev write zeroes read block ...passed 00:11:12.518 Test: blockdev write zeroes read no split ...passed 00:11:12.518 Test: blockdev write zeroes read split ...passed 00:11:12.518 Test: blockdev write zeroes read split partial ...passed 00:11:12.518 Test: blockdev reset ...passed 00:11:12.518 Test: blockdev write read 8 blocks ...passed 00:11:12.518 Test: blockdev write read size > 128k ...passed 00:11:12.518 Test: blockdev write read invalid size ...passed 00:11:12.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.518 Test: blockdev write read max offset ...passed 00:11:12.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.518 Test: blockdev writev readv 8 blocks ...passed 00:11:12.518 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.518 Test: blockdev writev readv block ...passed 00:11:12.518 Test: blockdev writev readv size > 128k ...passed 00:11:12.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.518 Test: blockdev comparev and writev ...passed 00:11:12.518 Test: blockdev nvme passthru rw ...passed 00:11:12.518 Test: blockdev nvme passthru vendor 
specific ...passed 00:11:12.518 Test: blockdev nvme admin passthru ...passed 00:11:12.518 Test: blockdev copy ...passed 00:11:12.518 Suite: bdevio tests on: Malloc2p5 00:11:12.518 Test: blockdev write read block ...passed 00:11:12.518 Test: blockdev write zeroes read block ...passed 00:11:12.518 Test: blockdev write zeroes read no split ...passed 00:11:12.518 Test: blockdev write zeroes read split ...passed 00:11:12.518 Test: blockdev write zeroes read split partial ...passed 00:11:12.518 Test: blockdev reset ...passed 00:11:12.518 Test: blockdev write read 8 blocks ...passed 00:11:12.518 Test: blockdev write read size > 128k ...passed 00:11:12.518 Test: blockdev write read invalid size ...passed 00:11:12.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.518 Test: blockdev write read max offset ...passed 00:11:12.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.518 Test: blockdev writev readv 8 blocks ...passed 00:11:12.518 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.518 Test: blockdev writev readv block ...passed 00:11:12.518 Test: blockdev writev readv size > 128k ...passed 00:11:12.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.518 Test: blockdev comparev and writev ...passed 00:11:12.518 Test: blockdev nvme passthru rw ...passed 00:11:12.518 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.518 Test: blockdev nvme admin passthru ...passed 00:11:12.518 Test: blockdev copy ...passed 00:11:12.518 Suite: bdevio tests on: Malloc2p4 00:11:12.518 Test: blockdev write read block ...passed 00:11:12.518 Test: blockdev write zeroes read block ...passed 00:11:12.518 Test: blockdev write zeroes read no split ...passed 00:11:12.518 Test: blockdev write zeroes read split ...passed 00:11:12.518 Test: blockdev write zeroes read split partial ...passed 00:11:12.518 Test: blockdev reset ...passed 00:11:12.518 Test: blockdev write read 8 blocks ...passed 00:11:12.518 Test: blockdev write read size > 128k ...passed 00:11:12.518 Test: blockdev write read invalid size ...passed 00:11:12.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.518 Test: blockdev write read max offset ...passed 00:11:12.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.518 Test: blockdev writev readv 8 blocks ...passed 00:11:12.518 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.518 Test: blockdev writev readv block ...passed 00:11:12.518 Test: blockdev writev readv size > 128k ...passed 00:11:12.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.518 Test: blockdev comparev and writev ...passed 00:11:12.518 Test: blockdev nvme passthru rw ...passed 00:11:12.518 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.518 Test: blockdev nvme admin passthru ...passed 00:11:12.518 Test: blockdev copy ...passed 00:11:12.518 Suite: bdevio tests on: Malloc2p3 00:11:12.518 Test: blockdev write read block ...passed 00:11:12.518 Test: blockdev write zeroes read block ...passed 00:11:12.518 Test: blockdev write zeroes read no split ...passed 00:11:12.518 Test: blockdev write zeroes read split ...passed 00:11:12.777 Test: blockdev write zeroes read split partial ...passed 00:11:12.777 Test: blockdev reset ...passed 00:11:12.777 Test: 
blockdev write read 8 blocks ...passed 00:11:12.777 Test: blockdev write read size > 128k ...passed 00:11:12.777 Test: blockdev write read invalid size ...passed 00:11:12.777 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.777 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.777 Test: blockdev write read max offset ...passed 00:11:12.777 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.777 Test: blockdev writev readv 8 blocks ...passed 00:11:12.777 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.777 Test: blockdev writev readv block ...passed 00:11:12.777 Test: blockdev writev readv size > 128k ...passed 00:11:12.777 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.777 Test: blockdev comparev and writev ...passed 00:11:12.777 Test: blockdev nvme passthru rw ...passed 00:11:12.777 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.777 Test: blockdev nvme admin passthru ...passed 00:11:12.777 Test: blockdev copy ...passed 00:11:12.777 Suite: bdevio tests on: Malloc2p2 00:11:12.777 Test: blockdev write read block ...passed 00:11:12.777 Test: blockdev write zeroes read block ...passed 00:11:12.777 Test: blockdev write zeroes read no split ...passed 00:11:12.777 Test: blockdev write zeroes read split ...passed 00:11:12.777 Test: blockdev write zeroes read split partial ...passed 00:11:12.777 Test: blockdev reset ...passed 00:11:12.777 Test: blockdev write read 8 blocks ...passed 00:11:12.777 Test: blockdev write read size > 128k ...passed 00:11:12.777 Test: blockdev write read invalid size ...passed 00:11:12.777 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.777 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.777 Test: blockdev write read max offset ...passed 00:11:12.777 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.777 Test: blockdev writev readv 8 blocks ...passed 00:11:12.777 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.777 Test: blockdev writev readv block ...passed 00:11:12.777 Test: blockdev writev readv size > 128k ...passed 00:11:12.777 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.777 Test: blockdev comparev and writev ...passed 00:11:12.777 Test: blockdev nvme passthru rw ...passed 00:11:12.777 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.777 Test: blockdev nvme admin passthru ...passed 00:11:12.777 Test: blockdev copy ...passed 00:11:12.777 Suite: bdevio tests on: Malloc2p1 00:11:12.777 Test: blockdev write read block ...passed 00:11:12.777 Test: blockdev write zeroes read block ...passed 00:11:12.777 Test: blockdev write zeroes read no split ...passed 00:11:12.777 Test: blockdev write zeroes read split ...passed 00:11:12.777 Test: blockdev write zeroes read split partial ...passed 00:11:12.777 Test: blockdev reset ...passed 00:11:12.777 Test: blockdev write read 8 blocks ...passed 00:11:12.777 Test: blockdev write read size > 128k ...passed 00:11:12.777 Test: blockdev write read invalid size ...passed 00:11:12.777 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.777 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.777 Test: blockdev write read max offset ...passed 00:11:12.777 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.777 Test: blockdev writev readv 8 blocks ...passed 00:11:12.777 
Test: blockdev writev readv 30 x 1block ...passed 00:11:12.777 Test: blockdev writev readv block ...passed 00:11:12.777 Test: blockdev writev readv size > 128k ...passed 00:11:12.777 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.777 Test: blockdev comparev and writev ...passed 00:11:12.777 Test: blockdev nvme passthru rw ...passed 00:11:12.777 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.777 Test: blockdev nvme admin passthru ...passed 00:11:12.777 Test: blockdev copy ...passed 00:11:12.777 Suite: bdevio tests on: Malloc2p0 00:11:12.777 Test: blockdev write read block ...passed 00:11:12.777 Test: blockdev write zeroes read block ...passed 00:11:12.777 Test: blockdev write zeroes read no split ...passed 00:11:12.777 Test: blockdev write zeroes read split ...passed 00:11:12.777 Test: blockdev write zeroes read split partial ...passed 00:11:12.777 Test: blockdev reset ...passed 00:11:12.777 Test: blockdev write read 8 blocks ...passed 00:11:12.777 Test: blockdev write read size > 128k ...passed 00:11:12.777 Test: blockdev write read invalid size ...passed 00:11:12.777 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.777 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.777 Test: blockdev write read max offset ...passed 00:11:12.777 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.777 Test: blockdev writev readv 8 blocks ...passed 00:11:12.777 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.777 Test: blockdev writev readv block ...passed 00:11:12.777 Test: blockdev writev readv size > 128k ...passed 00:11:12.777 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.777 Test: blockdev comparev and writev ...passed 00:11:12.777 Test: blockdev nvme passthru rw ...passed 00:11:12.777 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.777 Test: blockdev nvme admin passthru ...passed 00:11:12.777 Test: blockdev copy ...passed 00:11:12.777 Suite: bdevio tests on: Malloc1p1 00:11:12.777 Test: blockdev write read block ...passed 00:11:12.777 Test: blockdev write zeroes read block ...passed 00:11:12.777 Test: blockdev write zeroes read no split ...passed 00:11:12.777 Test: blockdev write zeroes read split ...passed 00:11:13.036 Test: blockdev write zeroes read split partial ...passed 00:11:13.036 Test: blockdev reset ...passed 00:11:13.036 Test: blockdev write read 8 blocks ...passed 00:11:13.036 Test: blockdev write read size > 128k ...passed 00:11:13.036 Test: blockdev write read invalid size ...passed 00:11:13.036 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:13.036 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:13.036 Test: blockdev write read max offset ...passed 00:11:13.036 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:13.036 Test: blockdev writev readv 8 blocks ...passed 00:11:13.036 Test: blockdev writev readv 30 x 1block ...passed 00:11:13.036 Test: blockdev writev readv block ...passed 00:11:13.036 Test: blockdev writev readv size > 128k ...passed 00:11:13.036 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:13.036 Test: blockdev comparev and writev ...passed 00:11:13.036 Test: blockdev nvme passthru rw ...passed 00:11:13.036 Test: blockdev nvme passthru vendor specific ...passed 00:11:13.036 Test: blockdev nvme admin passthru ...passed 00:11:13.036 Test: blockdev copy ...passed 00:11:13.036 Suite: 
bdevio tests on: Malloc1p0 00:11:13.036 Test: blockdev write read block ...passed 00:11:13.036 Test: blockdev write zeroes read block ...passed 00:11:13.036 Test: blockdev write zeroes read no split ...passed 00:11:13.036 Test: blockdev write zeroes read split ...passed 00:11:13.036 Test: blockdev write zeroes read split partial ...passed 00:11:13.036 Test: blockdev reset ...passed 00:11:13.036 Test: blockdev write read 8 blocks ...passed 00:11:13.036 Test: blockdev write read size > 128k ...passed 00:11:13.036 Test: blockdev write read invalid size ...passed 00:11:13.036 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:13.036 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:13.036 Test: blockdev write read max offset ...passed 00:11:13.036 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:13.036 Test: blockdev writev readv 8 blocks ...passed 00:11:13.036 Test: blockdev writev readv 30 x 1block ...passed 00:11:13.036 Test: blockdev writev readv block ...passed 00:11:13.036 Test: blockdev writev readv size > 128k ...passed 00:11:13.036 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:13.036 Test: blockdev comparev and writev ...passed 00:11:13.036 Test: blockdev nvme passthru rw ...passed 00:11:13.036 Test: blockdev nvme passthru vendor specific ...passed 00:11:13.036 Test: blockdev nvme admin passthru ...passed 00:11:13.036 Test: blockdev copy ...passed 00:11:13.036 Suite: bdevio tests on: Malloc0 00:11:13.036 Test: blockdev write read block ...passed 00:11:13.036 Test: blockdev write zeroes read block ...passed 00:11:13.036 Test: blockdev write zeroes read no split ...passed 00:11:13.036 Test: blockdev write zeroes read split ...passed 00:11:13.036 Test: blockdev write zeroes read split partial ...passed 00:11:13.036 Test: blockdev reset ...passed 00:11:13.036 Test: blockdev write read 8 blocks ...passed 00:11:13.036 Test: blockdev write read size > 128k ...passed 00:11:13.036 Test: blockdev write read invalid size ...passed 00:11:13.036 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:13.036 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:13.036 Test: blockdev write read max offset ...passed 00:11:13.036 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:13.036 Test: blockdev writev readv 8 blocks ...passed 00:11:13.036 Test: blockdev writev readv 30 x 1block ...passed 00:11:13.036 Test: blockdev writev readv block ...passed 00:11:13.036 Test: blockdev writev readv size > 128k ...passed 00:11:13.036 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:13.036 Test: blockdev comparev and writev ...passed 00:11:13.036 Test: blockdev nvme passthru rw ...passed 00:11:13.036 Test: blockdev nvme passthru vendor specific ...passed 00:11:13.036 Test: blockdev nvme admin passthru ...passed 00:11:13.036 Test: blockdev copy ...passed 00:11:13.036 00:11:13.036 Run Summary: Type Total Ran Passed Failed Inactive 00:11:13.036 suites 16 16 n/a 0 0 00:11:13.036 tests 368 368 368 0 0 00:11:13.036 asserts 2224 2224 2224 0 n/a 00:11:13.036 00:11:13.036 Elapsed time = 3.124 seconds 00:11:13.036 0 00:11:13.036 23:55:08 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72362 00:11:13.036 23:55:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 72362 ']' 00:11:13.036 23:55:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 72362 
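The killprocess sequence running here is SPDK's guarded shutdown helper from autotest_common.sh: probe the PID with kill -0, confirm the process name, then SIGTERM and reap. A minimal standalone sketch of the same pattern follows; the PID and reactor name are taken from this run, and the script is illustrative, not the SPDK helper itself.

pid=72362                                    # PID of the bdevio app in this run
if kill -0 "$pid" 2>/dev/null; then          # kill -0 only probes existence; it sends no signal
    name=$(ps --no-headers -o comm= "$pid")  # refuse to signal something unexpected, e.g. a sudo wrapper
    if [ "$name" != "sudo" ]; then
        echo "killing process with pid $pid"
        kill "$pid"                          # default SIGTERM lets the reactor exit cleanly
        wait "$pid" 2>/dev/null || true      # wait reaps only if $pid is a child of this shell
    fi
fi

The kill -0 probe plus the comm= check is what makes the helper safe to call from a trap handler: it never signals a PID that has already been recycled by an unrelated process.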
00:11:13.036 23:55:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:11:13.036 23:55:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.036 23:55:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72362 00:11:13.036 killing process with pid 72362 00:11:13.036 23:55:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:13.036 23:55:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:13.036 23:55:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72362' 00:11:13.036 23:55:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@969 -- # kill 72362 00:11:13.036 23:55:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@974 -- # wait 72362 00:11:14.940 23:55:10 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:14.940 00:11:14.940 real 0m4.304s 00:11:14.940 user 0m10.901s 00:11:14.940 sys 0m0.607s 00:11:14.940 23:55:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.940 ************************************ 00:11:14.940 END TEST bdev_bounds 00:11:14.940 ************************************ 00:11:14.940 23:55:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:14.940 23:55:10 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:14.940 23:55:10 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:14.940 23:55:10 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.940 23:55:10 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:14.940 ************************************ 00:11:14.940 START TEST bdev_nbd 00:11:14.940 ************************************ 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72439 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72439 /var/tmp/spdk-nbd.sock 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72439 ']' 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:14.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.940 23:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:14.940 [2024-07-24 23:55:10.690228] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
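Just below, bdev_svc is launched against /var/tmp/spdk-nbd.sock and waitforlisten blocks until the RPC server answers. A minimal sketch of that handshake, assuming $SPDK_DIR points at an SPDK checkout and bdev.json is a valid bdev config (the variable names are placeholders, not the autotest ones):

sock=/var/tmp/spdk-nbd.sock
"$SPDK_DIR"/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 --json bdev.json &
svc_pid=$!

up=0
for _ in $(seq 1 100); do
    # Give up early if the service died during startup.
    kill -0 "$svc_pid" 2>/dev/null || break
    # A socket file that answers rpc_get_methods means the RPC server is ready.
    if [ -S "$sock" ] && "$SPDK_DIR"/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        up=1
        break
    fi
    sleep 0.1
done
[ "$up" -eq 1 ] || { echo "bdev_svc never came up" >&2; exit 1; }

Polling an innocuous RPC rather than just testing for the socket file avoids the race where the socket exists but the app has not finished its subsystem init.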
00:11:14.940 [2024-07-24 23:55:10.690604] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.199 [2024-07-24 23:55:10.845080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.199 [2024-07-24 23:55:11.005794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.457 [2024-07-24 23:55:11.311064] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:15.457 [2024-07-24 23:55:11.311158] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:15.457 [2024-07-24 23:55:11.319042] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:15.457 [2024-07-24 23:55:11.319094] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:15.715 [2024-07-24 23:55:11.327118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:15.715 [2024-07-24 23:55:11.327169] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:15.715 [2024-07-24 23:55:11.327187] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:15.715 [2024-07-24 23:55:11.494719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:15.715 [2024-07-24 23:55:11.494850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.715 [2024-07-24 23:55:11.494878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:11:15.715 [2024-07-24 23:55:11.494893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.715 [2024-07-24 23:55:11.497690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.715 [2024-07-24 23:55:11.497916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:15.973 23:55:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.973 23:55:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:11:15.973 23:55:11 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:15.973 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.973 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:15.973 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:15.973 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:15.973 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.973 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # 
bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:15.973 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:15.973 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:15.974 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:15.974 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:15.974 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:15.974 23:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.232 1+0 records in 00:11:16.232 1+0 records out 00:11:16.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571651 s, 7.2 MB/s 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:16.232 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:16.490 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:16.490 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:16.490 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:16.490 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 
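The long run of trace output that follows repeats one pattern per bdev, sixteen times: nbd_start_disk over the RPC socket, poll /proc/partitions until the kernel exposes the node, then a single O_DIRECT read through dd to prove the transport works end to end. A condensed sketch of one iteration, using the rpc.py and socket paths from this run (as the trace shows, nbd_start_disk prints the nbd node it allocated, which is how nbd_device gets captured):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# nbd_start_disk prints the allocated node, e.g. /dev/nbd1.
dev=$("$rpc" -s "$sock" nbd_start_disk Malloc1p0)

# Poll /proc/partitions until the kernel has registered the device.
for _ in $(seq 1 20); do
    grep -q -w "$(basename "$dev")" /proc/partitions && break
    sleep 0.1
done

# One 4 KiB O_DIRECT read: iflag=direct bypasses the page cache, so a
# successful copy means the request really traveled nbd -> SPDK -> bdev.
dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct

The varying MB/s figures in the dd output below are per-device one-block latencies, not throughput measurements; only the "1+0 records in / out" lines matter to the test.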
00:11:16.490 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.490 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.491 1+0 records in 00:11:16.491 1+0 records out 00:11:16.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360657 s, 11.4 MB/s 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:16.491 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:16.748 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.749 1+0 records in 00:11:16.749 1+0 records out 00:11:16.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401393 s, 10.2 MB/s 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:11:16.749 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.007 1+0 records in 00:11:17.007 1+0 records out 00:11:17.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355854 s, 11.5 MB/s 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.007 23:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.574 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.574 1+0 records in 00:11:17.574 1+0 records out 00:11:17.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431898 s, 9.5 MB/s 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.575 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.834 1+0 records in 00:11:17.834 1+0 records out 00:11:17.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438982 s, 9.3 MB/s 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.834 1+0 records in 00:11:17.834 1+0 records out 00:11:17.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425096 s, 9.6 MB/s 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.834 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:18.401 23:55:13 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.401 1+0 records in 00:11:18.401 1+0 records out 00:11:18.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432723 s, 9.5 MB/s 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.401 23:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:18.401 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.659 1+0 records in 00:11:18.659 1+0 records out 00:11:18.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487958 s, 8.4 MB/s 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.659 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.918 1+0 records in 00:11:18.918 1+0 records out 00:11:18.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444467 s, 9.2 MB/s 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.918 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:19.175 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:19.175 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:19.175 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:19.175 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:11:19.175 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:19.175 23:55:14 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.176 1+0 records in 00:11:19.176 1+0 records out 00:11:19.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808631 s, 5.1 MB/s 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.176 23:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.434 1+0 records in 00:11:19.434 1+0 records out 00:11:19.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593025 s, 6.9 MB/s 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.434 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.694 1+0 records in 00:11:19.694 1+0 records out 00:11:19.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528837 s, 7.7 MB/s 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.694 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:19.952 23:55:15 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.952 1+0 records in 00:11:19.952 1+0 records out 00:11:19.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000865895 s, 4.7 MB/s 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.952 23:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.211 1+0 records in 00:11:20.211 1+0 records out 00:11:20.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742745 s, 5.5 MB/s 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.211 23:55:16 
blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:20.211 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.469 1+0 records in 00:11:20.469 1+0 records out 00:11:20.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000969691 s, 4.2 MB/s 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:20.469 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:20.727 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:20.727 { 00:11:20.727 "nbd_device": "/dev/nbd0", 00:11:20.727 "bdev_name": "Malloc0" 00:11:20.727 }, 00:11:20.727 { 00:11:20.727 "nbd_device": "/dev/nbd1", 00:11:20.727 "bdev_name": "Malloc1p0" 00:11:20.727 }, 00:11:20.727 { 00:11:20.727 "nbd_device": "/dev/nbd2", 00:11:20.727 "bdev_name": "Malloc1p1" 00:11:20.727 }, 00:11:20.727 { 00:11:20.727 "nbd_device": "/dev/nbd3", 00:11:20.727 "bdev_name": "Malloc2p0" 00:11:20.727 }, 00:11:20.727 { 00:11:20.727 "nbd_device": "/dev/nbd4", 00:11:20.727 "bdev_name": "Malloc2p1" 00:11:20.727 }, 00:11:20.727 { 00:11:20.727 "nbd_device": 
"/dev/nbd5", 00:11:20.727 "bdev_name": "Malloc2p2" 00:11:20.727 }, 00:11:20.727 { 00:11:20.727 "nbd_device": "/dev/nbd6", 00:11:20.727 "bdev_name": "Malloc2p3" 00:11:20.727 }, 00:11:20.727 { 00:11:20.727 "nbd_device": "/dev/nbd7", 00:11:20.727 "bdev_name": "Malloc2p4" 00:11:20.727 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd8", 00:11:20.728 "bdev_name": "Malloc2p5" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd9", 00:11:20.728 "bdev_name": "Malloc2p6" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd10", 00:11:20.728 "bdev_name": "Malloc2p7" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd11", 00:11:20.728 "bdev_name": "TestPT" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd12", 00:11:20.728 "bdev_name": "raid0" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd13", 00:11:20.728 "bdev_name": "concat0" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd14", 00:11:20.728 "bdev_name": "raid1" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd15", 00:11:20.728 "bdev_name": "AIO0" 00:11:20.728 } 00:11:20.728 ]' 00:11:20.728 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:20.728 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd0", 00:11:20.728 "bdev_name": "Malloc0" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd1", 00:11:20.728 "bdev_name": "Malloc1p0" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd2", 00:11:20.728 "bdev_name": "Malloc1p1" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd3", 00:11:20.728 "bdev_name": "Malloc2p0" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd4", 00:11:20.728 "bdev_name": "Malloc2p1" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd5", 00:11:20.728 "bdev_name": "Malloc2p2" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd6", 00:11:20.728 "bdev_name": "Malloc2p3" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd7", 00:11:20.728 "bdev_name": "Malloc2p4" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd8", 00:11:20.728 "bdev_name": "Malloc2p5" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd9", 00:11:20.728 "bdev_name": "Malloc2p6" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd10", 00:11:20.728 "bdev_name": "Malloc2p7" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd11", 00:11:20.728 "bdev_name": "TestPT" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd12", 00:11:20.728 "bdev_name": "raid0" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd13", 00:11:20.728 "bdev_name": "concat0" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd14", 00:11:20.728 "bdev_name": "raid1" 00:11:20.728 }, 00:11:20.728 { 00:11:20.728 "nbd_device": "/dev/nbd15", 00:11:20.728 "bdev_name": "AIO0" 00:11:20.728 } 00:11:20.728 ]' 00:11:20.728 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:20.728 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:20.728 
23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.728 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:11:20.728 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:20.728 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:20.728 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.728 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:20.988 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:20.988 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:20.988 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:20.988 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.988 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.988 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:20.988 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:20.988 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.988 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.988 23:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:21.248 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:21.248 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:21.248 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:21.248 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.248 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.248 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:21.248 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.248 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.248 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.248 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.816 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:22.074 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:22.074 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:22.074 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:22.074 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.074 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.074 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:22.074 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.074 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.074 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.074 23:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:22.332 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:22.332 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:22.332 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:22.332 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.332 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.332 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:22.332 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.332 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.332 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.332 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:22.591 23:55:18 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:22.591 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:22.591 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:22.591 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.591 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.591 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:22.591 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.591 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.591 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.591 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:22.850 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:22.850 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:22.850 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:22.850 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.850 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.850 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:22.850 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.850 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.850 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.850 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:23.108 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:23.108 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:23.108 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:23.108 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.108 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.108 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:23.108 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.108 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.108 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.109 23:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:23.367 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:23.367 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:23.367 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:23.367 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.367 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.367 23:55:19 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:23.367 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.367 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.367 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.367 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:23.625 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:23.625 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:23.625 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:23.625 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.625 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.625 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:23.625 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.625 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.625 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.625 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:23.883 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:23.883 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:23.883 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:23.883 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.883 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.883 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:23.883 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.883 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.883 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.883 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:24.142 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:24.142 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:24.142 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:24.142 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.142 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.142 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:24.142 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.142 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.142 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.142 23:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:24.400 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:24.400 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:24.400 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:24.400 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.400 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.400 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:24.400 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.400 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.400 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.400 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:24.658 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:24.658 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:24.658 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:24.658 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.658 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.658 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:24.658 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.658 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.658 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.658 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:24.916 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:24.916 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:24.916 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:24.916 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.916 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.916 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:24.916 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.916 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.916 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:24.916 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.916 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:25.175 
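
The detach pass just completed is the mirror image: nbd_stop_disks (bdev/nbd_common.sh@49-55) issues nbd_stop_disk per device and waitfornbd_exit (@35-45) polls /proc/partitions until the node disappears, after which nbd_get_disks returns the empty list captured above. A minimal reconstruction under the same assumptions ($rootdir, the retry sleep on the untraced branch):

    nbd_stop_disks() {
        local rpc_server=$1
        local nbd_list=($2)   # the quoted '/dev/nbd0 ... /dev/nbd15' string word-splits here
        local i

        for i in "${nbd_list[@]}"; do
            $rootdir/scripts/rpc.py -s $rpc_server nbd_stop_disk $i
            waitfornbd_exit $(basename $i)
        done
    }

    waitfornbd_exit() {
        local nbd_name=$1
        local i

        # Wait for the device to drop out of /proc/partitions; per the trace
        # (break at @41, return 0 at @45) the helper returns 0 either way.
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1   # assumed; every traced run breaks on the first probe
            else
                break
            fi
        done
        return 0
    }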
23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:25.175 23:55:20 
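
With the first mapping torn down, blockdev.sh@322 re-runs the whole cycle through nbd_rpc_data_verify, this time handing nbd_start_disks a deliberately shuffled nbd_list (/dev/nbd10 and /dev/nbd11 ahead of /dev/nbd2) so that bdev-to-device pairing is exercised positionally rather than by name. The sketch below reconstructs the orchestration only as far as this trace shows it: the literal (( i < 16 )) guard is generalized to the list length, the error branches are assumptions (the traced checks '[' 16 -ne 16 ']' and '[' 0 -ne 0 ']' never fire), and the read-back compare between the write and the teardown falls outside this excerpt.

    nbd_start_disks() {
        local rpc_server=$1
        local bdev_list=($2)
        local nbd_list=($3)
        local i

        # Positional pairing: bdev_list[i] is exported on nbd_list[i].
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do   # traced as a literal (( i < 16 ))
            $rootdir/scripts/rpc.py -s $rpc_server nbd_start_disk ${bdev_list[$i]} ${nbd_list[$i]}
            waitfornbd $(basename ${nbd_list[$i]})
        done
    }

    nbd_rpc_data_verify() {
        local rpc_server=$1
        local nbd_list=($3)
        local count

        nbd_start_disks "$rpc_server" "$2" "$3"
        count=$(nbd_get_count "$rpc_server")
        if [ "$count" -ne "${#nbd_list[@]}" ]; then
            return 1            # assumed attach-shortfall handling; traced check is '[' 16 -ne 16 ']'
        fi
        nbd_dd_data_verify "$3" write
        # ...the full helper reads the data back and compares before teardown;
        # that part is not visible in this excerpt...
        nbd_stop_disks "$rpc_server" "$3"
        count=$(nbd_get_count "$rpc_server")
        if [ "$count" -ne 0 ]; then
            return 1            # traced earlier as '[' 0 -ne 0 ']'
        fi
        return 0
    }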
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:25.175 23:55:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:25.434 /dev/nbd0 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.434 1+0 records in 00:11:25.434 1+0 records out 00:11:25.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240312 s, 17.0 MB/s 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:25.434 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:25.693 /dev/nbd1 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:25.693 
23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.693 1+0 records in 00:11:25.693 1+0 records out 00:11:25.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217164 s, 18.9 MB/s 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:25.693 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:25.952 /dev/nbd10 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.952 1+0 records in 00:11:25.952 1+0 records out 00:11:25.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031803 s, 12.9 MB/s 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:25.952 23:55:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:26.211 /dev/nbd11 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.470 1+0 records in 00:11:26.470 1+0 records out 00:11:26.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408046 s, 10.0 MB/s 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:26.470 /dev/nbd12 00:11:26.470 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.471 1+0 records in 00:11:26.471 1+0 records out 00:11:26.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518888 s, 7.9 MB/s 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:26.471 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.729 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:26.730 /dev/nbd13 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.730 1+0 records in 00:11:26.730 1+0 records out 00:11:26.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519323 s, 7.9 MB/s 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:26.730 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:26.988 /dev/nbd14 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd14 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.988 1+0 records in 00:11:26.988 1+0 records out 00:11:26.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441919 s, 9.3 MB/s 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:26.988 23:55:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:27.247 /dev/nbd15 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.247 1+0 records in 00:11:27.247 1+0 records out 00:11:27.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613497 s, 6.7 MB/s 00:11:27.247 23:55:23 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.247 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:27.506 /dev/nbd2 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.506 1+0 records in 00:11:27.506 1+0 records out 00:11:27.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453173 s, 9.0 MB/s 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.506 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:27.763 /dev/nbd3 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:11:27.763 23:55:23 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.763 1+0 records in 00:11:27.763 1+0 records out 00:11:27.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684479 s, 6.0 MB/s 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.763 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:28.020 /dev/nbd4 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.020 1+0 records in 00:11:28.020 1+0 records out 00:11:28.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566783 s, 7.2 MB/s 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:28.020 23:55:23 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.020 23:55:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:28.279 /dev/nbd5 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.279 1+0 records in 00:11:28.279 1+0 records out 00:11:28.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568783 s, 7.2 MB/s 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.279 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:28.537 /dev/nbd6 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:28.537 
23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.537 1+0 records in 00:11:28.537 1+0 records out 00:11:28.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524281 s, 7.8 MB/s 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.537 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:28.797 /dev/nbd7 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.797 1+0 records in 00:11:28.797 1+0 records out 00:11:28.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000886054 s, 4.6 MB/s 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # 
return 0 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.797 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:29.056 /dev/nbd8 00:11:29.056 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:29.056 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:29.056 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.057 1+0 records in 00:11:29.057 1+0 records out 00:11:29.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106057 s, 3.9 MB/s 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:29.057 23:55:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:29.316 /dev/nbd9 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 
)) 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.316 1+0 records in 00:11:29.316 1+0 records out 00:11:29.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126267 s, 3.2 MB/s 00:11:29.316 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.577 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:29.577 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.577 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:29.577 23:55:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:29.577 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.577 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:29.577 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:29.577 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.577 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:29.577 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:29.577 { 00:11:29.577 "nbd_device": "/dev/nbd0", 00:11:29.578 "bdev_name": "Malloc0" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd1", 00:11:29.578 "bdev_name": "Malloc1p0" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd10", 00:11:29.578 "bdev_name": "Malloc1p1" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd11", 00:11:29.578 "bdev_name": "Malloc2p0" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd12", 00:11:29.578 "bdev_name": "Malloc2p1" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd13", 00:11:29.578 "bdev_name": "Malloc2p2" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd14", 00:11:29.578 "bdev_name": "Malloc2p3" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd15", 00:11:29.578 "bdev_name": "Malloc2p4" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd2", 00:11:29.578 "bdev_name": "Malloc2p5" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd3", 00:11:29.578 "bdev_name": "Malloc2p6" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd4", 00:11:29.578 "bdev_name": "Malloc2p7" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd5", 00:11:29.578 "bdev_name": "TestPT" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd6", 00:11:29.578 "bdev_name": "raid0" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd7", 00:11:29.578 "bdev_name": "concat0" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd8", 00:11:29.578 "bdev_name": "raid1" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd9", 00:11:29.578 "bdev_name": "AIO0" 00:11:29.578 } 00:11:29.578 ]' 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:29.578 23:55:25 
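
nbd_get_count (bdev/nbd_common.sh@61-66) reduces the JSON just captured to a device count: jq pulls the .nbd_device field out of each entry and grep -c counts the /dev/nbd matches, exactly the pipeline the trace continues with below. A reconstruction, with $rootdir again standing in for the repo root:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count

        nbd_disks_json=$($rootdir/scripts/rpc.py -s $rpc_server nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero when nothing matches, so '|| true' (the bare
        # 'true' traced at @65 after the empty listing) keeps a set -e shell
        # alive: the post-teardown list yields 0, this listing yields 16.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo $count
    }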
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd0", 00:11:29.578 "bdev_name": "Malloc0" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd1", 00:11:29.578 "bdev_name": "Malloc1p0" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd10", 00:11:29.578 "bdev_name": "Malloc1p1" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd11", 00:11:29.578 "bdev_name": "Malloc2p0" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd12", 00:11:29.578 "bdev_name": "Malloc2p1" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd13", 00:11:29.578 "bdev_name": "Malloc2p2" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd14", 00:11:29.578 "bdev_name": "Malloc2p3" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd15", 00:11:29.578 "bdev_name": "Malloc2p4" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd2", 00:11:29.578 "bdev_name": "Malloc2p5" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd3", 00:11:29.578 "bdev_name": "Malloc2p6" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd4", 00:11:29.578 "bdev_name": "Malloc2p7" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd5", 00:11:29.578 "bdev_name": "TestPT" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd6", 00:11:29.578 "bdev_name": "raid0" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd7", 00:11:29.578 "bdev_name": "concat0" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd8", 00:11:29.578 "bdev_name": "raid1" 00:11:29.578 }, 00:11:29.578 { 00:11:29.578 "nbd_device": "/dev/nbd9", 00:11:29.578 "bdev_name": "AIO0" 00:11:29.578 } 00:11:29.578 ]' 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:29.578 /dev/nbd1 00:11:29.578 /dev/nbd10 00:11:29.578 /dev/nbd11 00:11:29.578 /dev/nbd12 00:11:29.578 /dev/nbd13 00:11:29.578 /dev/nbd14 00:11:29.578 /dev/nbd15 00:11:29.578 /dev/nbd2 00:11:29.578 /dev/nbd3 00:11:29.578 /dev/nbd4 00:11:29.578 /dev/nbd5 00:11:29.578 /dev/nbd6 00:11:29.578 /dev/nbd7 00:11:29.578 /dev/nbd8 00:11:29.578 /dev/nbd9' 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:29.578 /dev/nbd1 00:11:29.578 /dev/nbd10 00:11:29.578 /dev/nbd11 00:11:29.578 /dev/nbd12 00:11:29.578 /dev/nbd13 00:11:29.578 /dev/nbd14 00:11:29.578 /dev/nbd15 00:11:29.578 /dev/nbd2 00:11:29.578 /dev/nbd3 00:11:29.578 /dev/nbd4 00:11:29.578 /dev/nbd5 00:11:29.578 /dev/nbd6 00:11:29.578 /dev/nbd7 00:11:29.578 /dev/nbd8 00:11:29.578 /dev/nbd9' 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:29.578 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:29.840 256+0 records in 00:11:29.840 256+0 records out 00:11:29.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106231 s, 98.7 MB/s 00:11:29.840 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.840 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:29.840 256+0 records in 00:11:29.840 256+0 records out 00:11:29.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162217 s, 6.5 MB/s 00:11:29.840 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.840 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:30.099 256+0 records in 00:11:30.099 256+0 records out 00:11:30.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162667 s, 6.4 MB/s 00:11:30.099 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.099 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:30.099 256+0 records in 00:11:30.099 256+0 records out 00:11:30.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162176 s, 6.5 MB/s 00:11:30.099 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.099 23:55:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:30.359 256+0 records in 00:11:30.359 256+0 records out 00:11:30.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163572 s, 6.4 MB/s 00:11:30.359 23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.359 23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:30.618 256+0 records in 00:11:30.618 256+0 records out 00:11:30.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152457 s, 6.9 MB/s 00:11:30.618 23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.618 23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:30.618 256+0 records in 00:11:30.618 256+0 records out 00:11:30.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171656 s, 6.1 MB/s 00:11:30.618 23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.618 
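Note: the write half of nbd_dd_data_verify is running here. It builds a single 1 MiB random pattern (256 x 4 KiB, the 98.7 MB/s dd above) and copies that same pattern onto each exported device with oflag=direct, which is why every transfer reports exactly 1048576 bytes. A condensed sketch of the loop (device list abbreviated; the run covers all 16 nodes):

    # Write phase: one shared random pattern, pushed through O_DIRECT
    # so the data reaches the backing bdev instead of the page cache.
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10)   # ... all 16 devices in the run above
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done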
23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:30.878 256+0 records in 00:11:30.878 256+0 records out 00:11:30.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168086 s, 6.2 MB/s 00:11:30.878 23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.878 23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:31.136 256+0 records in 00:11:31.136 256+0 records out 00:11:31.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167936 s, 6.2 MB/s 00:11:31.136 23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.136 23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:31.136 256+0 records in 00:11:31.136 256+0 records out 00:11:31.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167954 s, 6.2 MB/s 00:11:31.136 23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.136 23:55:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:31.394 256+0 records in 00:11:31.394 256+0 records out 00:11:31.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162708 s, 6.4 MB/s 00:11:31.394 23:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.394 23:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:31.652 256+0 records in 00:11:31.652 256+0 records out 00:11:31.652 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165936 s, 6.3 MB/s 00:11:31.652 23:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.652 23:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:31.652 256+0 records in 00:11:31.652 256+0 records out 00:11:31.652 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16329 s, 6.4 MB/s 00:11:31.652 23:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.652 23:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:31.910 256+0 records in 00:11:31.910 256+0 records out 00:11:31.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17465 s, 6.0 MB/s 00:11:31.910 23:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.910 23:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:32.169 256+0 records in 00:11:32.169 256+0 records out 00:11:32.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170631 s, 6.1 MB/s 00:11:32.169 23:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.169 23:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:32.169 256+0 records in 00:11:32.169 
256+0 records out 00:11:32.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168372 s, 6.2 MB/s 00:11:32.169 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.169 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:32.428 256+0 records in 00:11:32.428 256+0 records out 00:11:32.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.250635 s, 4.2 MB/s 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.428 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:32.686 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.686 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:32.686 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 
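Note: the verify half begins above and mirrors the write half: the same 1 MiB pattern is byte-compared against the head of every device, so a single corrupted byte on any bdev fails the whole test. Sketch (nbd_list as populated in the previous sketch):

    # Verify phase: cmp exits non-zero on the first differing byte,
    # aborting the test; the pattern file is deleted afterwards.
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"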
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.687 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:32.945 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
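Note: teardown starts at nbd_stop_disks above. Each nbd_stop_disk RPC is paired with waitfornbd_exit, whose shape follows from the xtrace (nbd_common.sh@35-45): poll /proc/partitions until the name disappears, then return 0 unconditionally. The 20-iteration bound and the break are visible in the trace; the sleep between polls is an assumption.

    # Reconstructed waitfornbd_exit: wait for the kernel to retire the
    # device node once the SPDK app stops serving it. The break at
    # nbd_common.sh@41 fires as soon as grep stops matching.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # still listed; give the kernel time
            else
                break        # gone from /proc/partitions
            fi
        done
        return 0
    }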
00:11:32.945 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:32.945 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:32.945 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.945 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.945 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:32.945 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:32.945 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.945 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.945 23:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:33.204 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:33.204 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:33.204 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:33.204 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.204 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.204 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:33.204 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.204 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.204 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.204 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:33.463 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:33.463 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:33.463 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:33.463 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.463 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.463 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:33.463 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.463 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.463 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.463 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:33.722 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:33.722 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:33.722 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:33.722 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.722 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.722 23:55:29 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:33.722 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.722 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.722 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.722 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:34.290 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:34.290 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:34.290 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:34.290 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.290 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.290 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:34.290 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.290 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.290 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.290 23:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:34.290 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:34.290 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:34.290 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:34.290 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.290 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.290 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:34.290 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.290 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.290 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.290 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:34.548 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:34.548 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:34.548 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:34.548 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.548 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.549 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:34.549 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.549 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.549 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.549 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:34.807 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:34.807 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:34.807 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:34.807 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.807 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.807 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:34.807 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.807 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.807 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.807 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:35.066 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:35.066 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:35.066 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:35.066 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.066 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.066 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:35.066 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.066 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.066 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.066 23:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:35.325 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:35.325 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:35.325 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:35.325 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.325 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.325 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:35.325 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.325 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.325 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.325 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:35.583 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:35.583 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:35.583 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:35.583 23:55:31 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.583 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.583 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:35.583 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.583 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.583 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.583 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:35.841 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:35.841 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:35.841 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:35.841 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.841 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.841 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:35.841 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.841 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.841 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.841 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:36.100 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:36.100 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:36.100 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:36.100 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.100 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.100 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:36.100 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.100 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.100 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.100 23:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.667 23:55:32 
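Note: the same stop-and-wait pair repeats for every device; the loop the trace is stepping through reduces to the sketch below (rpc.py path and socket copied from this run, nbd_list and waitfornbd_exit as in the earlier sketches).

    # nbd_stop_disks: tear down each exported device over the RPC
    # socket, then wait for its /dev node to vanish.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for dev in "${nbd_list[@]}"; do
        "$rpc" -s "$sock" nbd_stop_disk "$dev"
        waitfornbd_exit "$(basename "$dev")"
    done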
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.667 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:36.940 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:36.940 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:36.940 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:36.940 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.940 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.940 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:36.940 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.940 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.940 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:36.940 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.940 23:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:37.223 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:37.223 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:37.223 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:37.223 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:37.223 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:37.223 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:37.223 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:37.223 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:37.223 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:37.223 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:37.224 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:37.224 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:37.224 23:55:33 
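Note: after teardown, nbd_get_count re-queries the app and the test insists on zero devices. The bare 'true' at nbd_common.sh@65 above is telling: 'grep -c' prints 0 but exits non-zero when nothing matches, so the count must be taken with a '|| true' guard or the pipeline would abort the script. A sketch of the counting helper under that reading (function name matches the trace; the body is reconstructed, reusing $rpc from the previous sketch):

    # Count exported NBD devices; tolerate the empty case, where
    # grep -c prints 0 but exits 1 (the '|| true' seen in the trace).
    nbd_get_count() {
        local rpc_server=$1 names
        names=$("$rpc" -s "$rpc_server" nbd_get_disks | jq -r '.[] | .nbd_device')
        echo "$names" | grep -c /dev/nbd || true
    }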
blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:37.224 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.224 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:37.224 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:37.224 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:37.224 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:37.482 malloc_lvol_verify 00:11:37.482 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:37.740 6c74bbed-0488-461e-bdf3-6a358901dd1a 00:11:37.740 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:37.997 d6d2b3a8-d6ca-4358-9a50-1d8283c68828 00:11:37.998 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:38.256 /dev/nbd0 00:11:38.256 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:38.256 mke2fs 1.47.0 (5-Feb-2023) 00:11:38.256 00:11:38.256 Filesystem too small for a journal 00:11:38.256 Discarding device blocks: 0/1024 done 00:11:38.256 Creating filesystem with 1024 4k blocks and 1024 inodes 00:11:38.256 00:11:38.256 Allocating group tables: 0/1 done 00:11:38.256 Writing inode tables: 0/1 done 00:11:38.256 Writing superblocks and filesystem accounting information: 0/1 done 00:11:38.256 00:11:38.256 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:38.256 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:38.256 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:38.256 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:38.256 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:38.256 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:38.256 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.256 23:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # 
(( i = 1 )) 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72439 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72439 ']' 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72439 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72439 00:11:38.514 killing process with pid 72439 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72439' 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72439 00:11:38.514 23:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72439 00:11:41.043 ************************************ 00:11:41.043 END TEST bdev_nbd 00:11:41.043 ************************************ 00:11:41.043 23:55:36 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:41.044 00:11:41.044 real 0m25.806s 00:11:41.044 user 0m35.889s 00:11:41.044 sys 0m9.253s 00:11:41.044 23:55:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.044 23:55:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:41.044 23:55:36 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:11:41.044 23:55:36 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:11:41.044 23:55:36 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:11:41.044 23:55:36 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:11:41.044 23:55:36 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:41.044 23:55:36 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.044 23:55:36 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:41.044 ************************************ 00:11:41.044 START TEST bdev_fio 00:11:41.044 ************************************ 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:11:41.044 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f 
./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:11:41.044 23:55:36 
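Note: the bdev_fio stage above generates test/bdev/bdev.fio in two steps: fio_config_gen writes the global verify section, then blockdev.sh@340-342 appends one job per bdev, producing the [job_X]/filename=X pairs echoed in the trace. With the spdk_bdev ioengine, 'filename' names a bdev rather than a file on disk. A sketch of the append loop (bdev list abbreviated; the run walks Malloc0 through AIO0):

    # Append one fio job section per bdev to the generated config.
    out=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    for b in Malloc0 Malloc1p0 Malloc1p1; do   # ... continues through AIO0
        {
            echo "[job_$b]"
            echo "filename=$b"   # bdev name, resolved by the spdk_bdev ioengine
        } >> "$out"
    done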
blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:11:41.044 23:55:36 
blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.044 23:55:36 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:41.044 ************************************ 00:11:41.044 START TEST bdev_fio_rw_verify 00:11:41.044 ************************************ 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 
-- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:11:41.044 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:41.045 23:55:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.045 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.045 fio-3.35 00:11:41.045 Starting 16 threads 00:11:53.240 00:11:53.241 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=73577: Wed Jul 24 23:55:48 2024 00:11:53.241 read: IOPS=80.2k, BW=313MiB/s (328MB/s)(3132MiB/10001msec) 00:11:53.241 slat (usec): min=2, max=13042, avg=35.84, stdev=236.89 00:11:53.241 clat (usec): min=11, max=13329, avg=285.68, stdev=692.13 00:11:53.241 lat (usec): min=28, max=13333, avg=321.52, stdev=729.45 00:11:53.241 clat percentiles (usec): 00:11:53.241 | 50.000th=[ 169], 99.000th=[ 4228], 99.900th=[ 7242], 99.990th=[ 9241], 00:11:53.241 | 99.999th=[13173] 00:11:53.241 write: IOPS=128k, BW=501MiB/s (525MB/s)(4938MiB/9858msec); 0 zone resets 00:11:53.241 slat (usec): 
min=5, max=17052, avg=60.09, stdev=311.64 00:11:53.241 clat (usec): min=11, max=17365, avg=360.98, stdev=775.27 00:11:53.241 lat (usec): min=44, max=17402, avg=421.07, stdev=832.20 00:11:53.241 clat percentiles (usec): 00:11:53.241 | 50.000th=[ 219], 99.000th=[ 4293], 99.900th=[ 7308], 99.990th=[11207], 00:11:53.241 | 99.999th=[14353] 00:11:53.241 bw ( KiB/s): min=369792, max=747232, per=98.58%, avg=505615.42, stdev=7134.47, samples=304 00:11:53.241 iops : min=92448, max=186808, avg=126403.63, stdev=1783.62, samples=304 00:11:53.241 lat (usec) : 20=0.01%, 50=0.32%, 100=13.19%, 250=56.83%, 500=25.24% 00:11:53.241 lat (usec) : 750=0.94%, 1000=0.09% 00:11:53.241 lat (msec) : 2=0.13%, 4=1.19%, 10=2.04%, 20=0.02% 00:11:53.241 cpu : usr=58.01%, sys=2.24%, ctx=239260, majf=0, minf=104773 00:11:53.241 IO depths : 1=11.2%, 2=23.9%, 4=51.9%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:53.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.241 complete : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.241 issued rwts: total=801723,1264020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.241 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:53.241 00:11:53.241 Run status group 0 (all jobs): 00:11:53.241 READ: bw=313MiB/s (328MB/s), 313MiB/s-313MiB/s (328MB/s-328MB/s), io=3132MiB (3284MB), run=10001-10001msec 00:11:53.241 WRITE: bw=501MiB/s (525MB/s), 501MiB/s-501MiB/s (525MB/s-525MB/s), io=4938MiB (5177MB), run=9858-9858msec 00:11:54.620 ----------------------------------------------------- 00:11:54.620 Suppressions used: 00:11:54.620 count bytes template 00:11:54.620 16 140 /usr/src/fio/parse.c 00:11:54.620 11103 1065888 /usr/src/fio/iolog.c 00:11:54.620 1 904 libcrypto.so 00:11:54.620 ----------------------------------------------------- 00:11:54.620 00:11:54.620 00:11:54.620 real 0m13.617s 00:11:54.620 user 1m37.582s 00:11:54.620 sys 0m4.305s 00:11:54.620 23:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.620 ************************************ 00:11:54.620 END TEST bdev_fio_rw_verify 00:11:54.620 ************************************ 00:11:54.620 23:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- 
common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:11:54.620 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:54.621 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "0bfc9b7f-5ded-4936-b318-eacdf29806a6"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0bfc9b7f-5ded-4936-b318-eacdf29806a6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "2a4e0445-be2e-5c5a-aa7f-9a21498bbf7b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2a4e0445-be2e-5c5a-aa7f-9a21498bbf7b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "33ca91c8-3382-520e-87a2-2b5ccfd121b5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "33ca91c8-3382-520e-87a2-2b5ccfd121b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "eef0472c-1cca-5e4a-ac07-5a51256f11b6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "eef0472c-1cca-5e4a-ac07-5a51256f11b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "572e08bd-9190-5187-b8cf-e09203811ba2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "572e08bd-9190-5187-b8cf-e09203811ba2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "474f4b3c-64e9-5a1f-b641-673bdfe8913c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "474f4b3c-64e9-5a1f-b641-673bdfe8913c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "21bd5a47-3777-5704-bcb2-8686aef7cde3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "21bd5a47-3777-5704-bcb2-8686aef7cde3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "be3e02a2-b466-5c80-839e-14e5abc98698"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "be3e02a2-b466-5c80-839e-14e5abc98698",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "cc2aff87-2ee5-55d9-9f6a-791f1ddbac8b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cc2aff87-2ee5-55d9-9f6a-791f1ddbac8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "57161e87-ec21-5ddd-ae8b-52e7caae91d7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "57161e87-ec21-5ddd-ae8b-52e7caae91d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 
49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "b4343e04-a77a-52c8-9fba-414b4999b410"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b4343e04-a77a-52c8-9fba-414b4999b410",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "dbb3ab92-d0cf-5ee6-8506-00c4d08da0db"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "dbb3ab92-d0cf-5ee6-8506-00c4d08da0db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "7565b3b9-9f2d-4d6e-bb30-d49d920a396a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7565b3b9-9f2d-4d6e-bb30-d49d920a396a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7565b3b9-9f2d-4d6e-bb30-d49d920a396a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' 
"num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a06a3604-fc06-4076-99a4-490b712b43ce",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "e5ffb729-7fdb-489c-8b20-6ee49f829f70",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "0df86aae-3c06-4ce9-aed1-f8d541228ddd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0df86aae-3c06-4ce9-aed1-f8d541228ddd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0df86aae-3c06-4ce9-aed1-f8d541228ddd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "55cfb7ec-2a70-400d-a3fe-5eefe7cfe473",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "60320ef5-35b3-4c5b-8bfd-0d8c5a40a7e5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "64829fa2-c216-45a1-a1fd-49e50b163965"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "64829fa2-c216-45a1-a1fd-49e50b163965",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "64829fa2-c216-45a1-a1fd-49e50b163965",' ' "strip_size_kb": 0,' ' "state": 
"online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3d6dfa50-e833-4a9e-bd9b-8c823fe842de",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "d9a1395f-f490-4090-86f9-16ced34e15e9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b2b0b5d5-1436-4f78-84bb-da93eeea4189"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b2b0b5d5-1436-4f78-84bb-da93eeea4189",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:54.621 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:11:54.621 Malloc1p0 00:11:54.621 Malloc1p1 00:11:54.621 Malloc2p0 00:11:54.621 Malloc2p1 00:11:54.621 Malloc2p2 00:11:54.621 Malloc2p3 00:11:54.621 Malloc2p4 00:11:54.621 Malloc2p5 00:11:54.621 Malloc2p6 00:11:54.621 Malloc2p7 00:11:54.621 TestPT 00:11:54.621 raid0 00:11:54.621 concat0 ]] 00:11:54.621 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "0bfc9b7f-5ded-4936-b318-eacdf29806a6"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0bfc9b7f-5ded-4936-b318-eacdf29806a6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "2a4e0445-be2e-5c5a-aa7f-9a21498bbf7b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2a4e0445-be2e-5c5a-aa7f-9a21498bbf7b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "33ca91c8-3382-520e-87a2-2b5ccfd121b5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "33ca91c8-3382-520e-87a2-2b5ccfd121b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "eef0472c-1cca-5e4a-ac07-5a51256f11b6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "eef0472c-1cca-5e4a-ac07-5a51256f11b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "572e08bd-9190-5187-b8cf-e09203811ba2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "572e08bd-9190-5187-b8cf-e09203811ba2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' 
"offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "474f4b3c-64e9-5a1f-b641-673bdfe8913c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "474f4b3c-64e9-5a1f-b641-673bdfe8913c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "21bd5a47-3777-5704-bcb2-8686aef7cde3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "21bd5a47-3777-5704-bcb2-8686aef7cde3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "be3e02a2-b466-5c80-839e-14e5abc98698"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "be3e02a2-b466-5c80-839e-14e5abc98698",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "cc2aff87-2ee5-55d9-9f6a-791f1ddbac8b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cc2aff87-2ee5-55d9-9f6a-791f1ddbac8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "57161e87-ec21-5ddd-ae8b-52e7caae91d7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "57161e87-ec21-5ddd-ae8b-52e7caae91d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "b4343e04-a77a-52c8-9fba-414b4999b410"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b4343e04-a77a-52c8-9fba-414b4999b410",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "dbb3ab92-d0cf-5ee6-8506-00c4d08da0db"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "dbb3ab92-d0cf-5ee6-8506-00c4d08da0db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' 
"7565b3b9-9f2d-4d6e-bb30-d49d920a396a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7565b3b9-9f2d-4d6e-bb30-d49d920a396a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7565b3b9-9f2d-4d6e-bb30-d49d920a396a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a06a3604-fc06-4076-99a4-490b712b43ce",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "e5ffb729-7fdb-489c-8b20-6ee49f829f70",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "0df86aae-3c06-4ce9-aed1-f8d541228ddd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0df86aae-3c06-4ce9-aed1-f8d541228ddd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0df86aae-3c06-4ce9-aed1-f8d541228ddd",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "55cfb7ec-2a70-400d-a3fe-5eefe7cfe473",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "60320ef5-35b3-4c5b-8bfd-0d8c5a40a7e5",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "64829fa2-c216-45a1-a1fd-49e50b163965"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "64829fa2-c216-45a1-a1fd-49e50b163965",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "64829fa2-c216-45a1-a1fd-49e50b163965",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3d6dfa50-e833-4a9e-bd9b-8c823fe842de",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "d9a1395f-f490-4090-86f9-16ced34e15e9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b2b0b5d5-1436-4f78-84bb-da93eeea4189"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b2b0b5d5-1436-4f78-84bb-da93eeea4189",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo 
'[job_Malloc1p0]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]'
00:11:54.623 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0
00:11:54.624 23:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:54.624 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']'
00:11:54.624 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:54.624 23:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:11:54.624 ************************************
00:11:54.624 START TEST bdev_fio_trim
00:11:54.624 ************************************
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib=
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]]
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:11:54.624 23:55:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:54.883 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:54.883 fio-3.35
00:11:54.883 Starting 14 threads
00:12:07.084
00:12:07.084 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=73771: Wed Jul 24 23:56:01 2024
00:12:07.084 write: IOPS=162k, BW=633MiB/s (664MB/s)(6329MiB/10001msec); 0 zone resets
00:12:07.084 slat (usec): min=2, max=8072, avg=31.33, stdev=192.96
00:12:07.084 clat (usec): min=18, max=12895, avg=217.53, stdev=510.28
00:12:07.084 lat (usec): min=33, max=12912, avg=248.85, stdev=544.38
00:12:07.084 clat percentiles (usec):
00:12:07.084 | 50.000th=[ 147], 99.000th=[ 4113], 99.900th=[ 6128], 99.990th=[ 7242],
00:12:07.084 | 99.999th=[ 8356]
00:12:07.084 bw ( KiB/s): min=478635, max=743171, per=100.00%, avg=648291.37, stdev=6105.41, samples=266
00:12:07.084 iops : min=119658, max=185792, avg=162072.68, stdev=1526.36, samples=266
00:12:07.084 trim: IOPS=162k, BW=633MiB/s (664MB/s)(6329MiB/10001msec); 0 zone resets
00:12:07.084 slat (usec): min=4, max=16055, avg=20.88, stdev=159.52
00:12:07.084 clat (usec): min=5, max=12912, avg=234.60, stdev=533.66
00:12:07.084 lat (usec): min=11, max=16322, avg=255.49, stdev=556.50
00:12:07.084 clat percentiles (usec):
00:12:07.084 | 50.000th=[ 163], 99.000th=[ 4146], 99.900th=[ 6194], 99.990th=[ 7242],
00:12:07.084 | 99.999th=[ 8225]
00:12:07.084 bw ( KiB/s): min=478635, max=743179, per=100.00%, avg=648292.21, stdev=6105.34, samples=266
00:12:07.084 iops : min=119658, max=185794, avg=162072.79, stdev=1526.34, samples=266
00:12:07.084 lat (usec) : 10=0.11%, 20=0.27%, 50=0.92%, 100=14.18%, 250=77.78%
00:12:07.084 lat (usec) : 500=4.90%, 750=0.13%, 1000=0.01%
00:12:07.084 lat (msec) : 2=0.03%, 4=0.44%, 10=1.23%, 20=0.01%
00:12:07.084 cpu : usr=68.44%, sys=1.09%, ctx=150824, majf=0, minf=15586
00:12:07.084 IO depths : 1=12.3%, 2=24.6%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:07.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:07.084 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:07.084 issued rwts: total=0,1620341,1620344,0 short=0,0,0,0 dropped=0,0,0,0
00:12:07.084 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:07.084
00:12:07.084 Run status group 0 (all jobs):
00:12:07.084 WRITE: bw=633MiB/s (664MB/s), 633MiB/s-633MiB/s (664MB/s-664MB/s), io=6329MiB (6637MB), run=10001-10001msec
00:12:07.084 TRIM: bw=633MiB/s (664MB/s), 633MiB/s-633MiB/s (664MB/s-664MB/s), io=6329MiB (6637MB), run=10001-10001msec
00:12:08.020 -----------------------------------------------------
00:12:08.020 Suppressions used:
00:12:08.020 count bytes template
00:12:08.020 14 129 /usr/src/fio/parse.c
00:12:08.020 1 904 libcrypto.so
00:12:08.020 -----------------------------------------------------
00:12:08.020
00:12:08.020
00:12:08.020 real 0m13.381s
00:12:08.020 user 1m39.810s
00:12:08.020 sys 0m2.608s
00:12:08.020 23:56:03 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:08.020 ************************************
00:12:08.020 23:56:03 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:12:08.020 END TEST bdev_fio_trim
00:12:08.020 ************************************
00:12:08.020 23:56:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:12:08.020 23:56:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:08.020 /home/vagrant/spdk_repo/spdk
00:12:08.020 23:56:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:12:08.020 23:56:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:12:08.020
00:12:08.020 real 0m27.271s
00:12:08.020 user 3m17.496s
00:12:08.020 sys 0m7.052s
00:12:08.020 23:56:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable
************************************ 00:12:08.020 END TEST bdev_fio 00:12:08.020 ************************************ 00:12:08.020 23:56:03 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:08.020 23:56:03 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:08.020 23:56:03 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:08.020 23:56:03 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:08.021 23:56:03 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.021 23:56:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:08.021 ************************************ 00:12:08.021 START TEST bdev_verify 00:12:08.021 ************************************ 00:12:08.021 23:56:03 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:08.021 [2024-07-24 23:56:03.879084] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:12:08.021 [2024-07-24 23:56:03.880007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73942 ] 00:12:08.279 [2024-07-24 23:56:04.064510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:08.538 [2024-07-24 23:56:04.305277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.538 [2024-07-24 23:56:04.305283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.797 [2024-07-24 23:56:04.637286] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:08.797 [2024-07-24 23:56:04.637397] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:08.797 [2024-07-24 23:56:04.645245] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:08.797 [2024-07-24 23:56:04.645328] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:08.797 [2024-07-24 23:56:04.653255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:08.797 [2024-07-24 23:56:04.653330] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:08.797 [2024-07-24 23:56:04.653378] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:09.056 [2024-07-24 23:56:04.820688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:09.056 [2024-07-24 23:56:04.820818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.056 [2024-07-24 23:56:04.820850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:12:09.056 [2024-07-24 23:56:04.820865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.056 [2024-07-24 23:56:04.823643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.056 [2024-07-24 23:56:04.823703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: TestPT
00:12:09.623 Running I/O for 5 seconds...
00:12:14.887
00:12:14.887 Latency(us)
00:12:14.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:14.887 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x0 length 0x1000
00:12:14.887 Malloc0 : 5.17 1362.76 5.32 0.00 0.00 93788.09 711.21 312666.30
00:12:14.887 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x1000 length 0x1000
00:12:14.887 Malloc0 : 5.06 1365.56 5.33 0.00 0.00 93593.05 595.78 314572.80
00:12:14.887 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x0 length 0x800
00:12:14.887 Malloc1p0 : 5.22 711.61 2.78 0.00 0.00 179268.11 3470.43 169678.66
00:12:14.887 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x800 length 0x800
00:12:14.887 Malloc1p0 : 5.21 712.15 2.78 0.00 0.00 179132.97 3455.53 171585.16
00:12:14.887 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x0 length 0x800
00:12:14.887 Malloc1p1 : 5.22 711.20 2.78 0.00 0.00 179016.38 3306.59 168725.41
00:12:14.887 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x800 length 0x800
00:12:14.887 Malloc1p1 : 5.21 711.85 2.78 0.00 0.00 178845.56 3261.91 171585.16
00:12:14.887 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x0 length 0x200
00:12:14.887 Malloc2p0 : 5.22 710.79 2.78 0.00 0.00 178776.55 3232.12 169678.66
00:12:14.887 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x200 length 0x200
00:12:14.887 Malloc2p0 : 5.22 711.54 2.78 0.00 0.00 178578.78 3276.80 172538.41
00:12:14.887 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x0 length 0x200
00:12:14.887 Malloc2p1 : 5.22 710.43 2.78 0.00 0.00 178531.86 3425.75 168725.41
00:12:14.887 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x200 length 0x200
00:12:14.887 Malloc2p1 : 5.22 711.12 2.78 0.00 0.00 178340.07 3381.06 170631.91
00:12:14.887 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x0 length 0x200
00:12:14.887 Malloc2p2 : 5.23 710.06 2.77 0.00 0.00 178252.35 3351.27 164912.41
00:12:14.887 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x200 length 0x200
00:12:14.887 Malloc2p2 : 5.22 710.72 2.78 0.00 0.00 178062.73 3351.27 167772.16
00:12:14.887 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x0 length 0x200
00:12:14.887 Malloc2p3 : 5.23 709.70 2.77 0.00 0.00 177958.37 3351.27 163005.91
00:12:14.887 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.887 Verification LBA range: start 0x200 length 0x200
00:12:14.887 Malloc2p3 : 5.23 710.35 2.77 0.00 0.00 177774.05 3321.48 165865.66
00:12:14.888 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x0 length 0x200
00:12:14.888 Malloc2p4 : 5.23 709.36 2.77 0.00 0.00 177684.74 3202.33 160146.15
00:12:14.888 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x200 length 0x200
00:12:14.888 Malloc2p4 : 5.23 709.99 2.77 0.00 0.00 177497.90 3247.01 163005.91
00:12:14.888 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x0 length 0x200
00:12:14.888 Malloc2p5 : 5.24 709.02 2.77 0.00 0.00 177413.91 3321.48 158239.65
00:12:14.888 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x200 length 0x200
00:12:14.888 Malloc2p5 : 5.23 709.63 2.77 0.00 0.00 177234.00 3291.69 161099.40
00:12:14.888 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x0 length 0x200
00:12:14.888 Malloc2p6 : 5.24 708.71 2.77 0.00 0.00 177143.30 3068.28 156333.15
00:12:14.888 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x200 length 0x200
00:12:14.888 Malloc2p6 : 5.23 709.28 2.77 0.00 0.00 176960.89 3083.17 159192.90
00:12:14.888 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x0 length 0x200
00:12:14.888 Malloc2p7 : 5.24 708.22 2.77 0.00 0.00 176883.08 3112.96 152520.15
00:12:14.888 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x200 length 0x200
00:12:14.888 Malloc2p7 : 5.24 708.94 2.77 0.00 0.00 176652.07 3247.01 154426.65
00:12:14.888 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x0 length 0x1000
00:12:14.888 TestPT : 5.24 690.49 2.70 0.00 0.00 179909.02 12213.53 151566.89
00:12:14.888 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x1000 length 0x1000
00:12:14.888 TestPT : 5.24 687.17 2.68 0.00 0.00 180505.50 14656.23 154426.65
00:12:14.888 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x0 length 0x2000
00:12:14.888 raid0 : 5.25 707.50 2.76 0.00 0.00 176086.27 3470.43 136314.88
00:12:14.888 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x2000 length 0x2000
00:12:14.888 raid0 : 5.24 708.25 2.77 0.00 0.00 175865.49 3470.43 140127.88
00:12:14.888 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x0 length 0x2000
00:12:14.888 concat0 : 5.25 707.21 2.76 0.00 0.00 175802.21 3619.37 134408.38
00:12:14.888 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x2000 length 0x2000
00:12:14.888 concat0 : 5.24 707.77 2.76 0.00 0.00 175625.39 3544.90 137268.13
00:12:14.888 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x0 length 0x1000
00:12:14.888 raid1 : 5.25 706.86 2.76 0.00 0.00 175483.83 3813.00 136314.88
00:12:14.888 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x1000 length 0x1000
00:12:14.888 raid1 : 5.25 707.41 2.76 0.00 0.00 175300.10 3813.00 135361.63
00:12:14.888 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x0 length 0x4e2
00:12:14.888 AIO0 : 5.25 706.57 2.76 0.00 0.00 175030.33 1057.51 143940.89
00:12:14.888 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:14.888 Verification LBA range: start 0x4e2 length 0x4e2
00:12:14.888 AIO0 : 5.25 707.14 2.76 0.00 0.00 174835.10 815.48 142987.64
00:12:14.888 ===================================================================================================================
00:12:14.888 Total : 23969.37 93.63 0.00 0.00 168132.94 595.78 314572.80
00:12:16.790
00:12:16.790 real 0m8.592s
00:12:16.790 user 0m15.504s
00:12:16.790 sys 0m0.586s
00:12:16.790 23:56:12 blockdev_general.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:16.790 23:56:12 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:12:16.790 ************************************
00:12:16.790 END TEST bdev_verify
00:12:16.790 ************************************
00:12:16.790 23:56:12 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:16.790 23:56:12 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:12:16.790 23:56:12 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:16.790 23:56:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:16.790 ************************************
00:12:16.790 START TEST bdev_verify_big_io
00:12:16.790 ************************************
00:12:16.790 23:56:12 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:16.790 [2024-07-24 23:56:12.519301] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
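The big-IO pass starting here reuses the bdevperf invocation of the 4 KiB verify run that just ended, with only the IO size raised to 64 KiB (-o 65536). A minimal by-hand rerun, with the paths exactly as traced above and bdev.json assumed to define the same Malloc*/raid*/TestPT stack, would be:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3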
00:12:16.790 [2024-07-24 23:56:12.519499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74057 ] 00:12:17.049 [2024-07-24 23:56:12.691819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:17.049 [2024-07-24 23:56:12.870915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.049 [2024-07-24 23:56:12.870928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.320 [2024-07-24 23:56:13.181006] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:17.320 [2024-07-24 23:56:13.181114] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:17.595 [2024-07-24 23:56:13.189005] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:17.595 [2024-07-24 23:56:13.189075] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:17.595 [2024-07-24 23:56:13.196952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:17.595 [2024-07-24 23:56:13.197015] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:17.595 [2024-07-24 23:56:13.197030] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:17.595 [2024-07-24 23:56:13.365516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:17.595 [2024-07-24 23:56:13.365629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.595 [2024-07-24 23:56:13.365659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:12:17.595 [2024-07-24 23:56:13.365672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.595 [2024-07-24 23:56:13.368106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.595 [2024-07-24 23:56:13.368178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:17.855 [2024-07-24 23:56:13.664386] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.667559] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.670939] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.674280] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.677432] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.680586] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.683470] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.686586] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.689385] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.692528] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.695395] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.698429] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.701383] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.704432] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.707527] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:17.855 [2024-07-24 23:56:13.710351] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32
00:12:18.113 [2024-07-24 23:56:13.776506] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:12:18.113 [2024-07-24 23:56:13.782140] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:12:18.113 Running I/O for 5 seconds...
00:12:24.674
00:12:24.674 Latency(us)
00:12:24.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:24.674 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:24.674 Verification LBA range: start 0x0 length 0x100
00:12:24.674 Malloc0 : 5.70 201.98 12.62 0.00 0.00 622425.60 841.54 1822615.74
00:12:24.674 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:24.674 Verification LBA range: start 0x100 length 0x100
00:12:24.674 Malloc0 : 5.89 195.59 12.22 0.00 0.00 642642.55 841.54 1914127.83
00:12:24.674 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:24.674 Verification LBA range: start 0x0 length 0x80
00:12:24.674 Malloc1p0 : 6.02 93.02 5.81 0.00 0.00 1279395.93 3515.11 2165786.07
00:12:24.674 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x80 length 0x80
00:12:24.675 Malloc1p0 : 6.31 60.26 3.77 0.00 0.00 1961594.19 2591.65 3233427.08
00:12:24.675 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x80
00:12:24.675 Malloc1p1 : 6.39 42.59 2.66 0.00 0.00 2662728.19 1556.48 4423084.22
00:12:24.675 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x80 length 0x80
00:12:24.675 Malloc1p1 : 6.49 41.90 2.62 0.00 0.00 2724572.47 1452.22 4606108.39
00:12:24.675 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x20
00:12:24.675 Malloc2p0 : 6.02 31.88 1.99 0.00 0.00 903762.10 569.72 1540453.47
00:12:24.675 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x20 length 0x20
00:12:24.675 Malloc2p0 : 6.03 29.20 1.82 0.00 0.00 981007.62 647.91 1609087.53
00:12:24.675 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x20
00:12:24.675 Malloc2p1 : 6.02 31.87 1.99 0.00 0.00 897247.08 584.61 1517575.45
00:12:24.675 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x20 length 0x20
00:12:24.675 Malloc2p1 : 6.03 29.18 1.82 0.00 0.00 974388.54 651.64 1586209.51
00:12:24.675 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x20
00:12:24.675 Malloc2p2 : 6.03 31.86 1.99 0.00 0.00 891625.96 543.65 1502323.43
00:12:24.675 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x20 length 0x20
00:12:24.675 Malloc2p2 : 6.03 29.17 1.82 0.00 0.00 967627.71 592.06 1563331.49
00:12:24.675 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x20
00:12:24.675 Malloc2p3 : 6.03 31.85 1.99 0.00 0.00 886445.90 584.61 1487071.42
00:12:24.675 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x20 length 0x20
00:12:24.675 Malloc2p3 : 6.04 29.15 1.82 0.00 0.00 961143.67 595.78 1548079.48
00:12:24.675 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x20
00:12:24.675 Malloc2p4 : 6.03 31.84 1.99 0.00 0.00 880580.64 558.55 1471819.40
00:12:24.675 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x20 length 0x20
00:12:24.675 Malloc2p4 : 6.04 29.14 1.82 0.00 0.00 953994.69 536.20 1532827.46
00:12:24.675 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x20
00:12:24.675 Malloc2p5 : 6.03 31.82 1.99 0.00 0.00 875408.05 498.97 1456567.39
00:12:24.675 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x20 length 0x20
00:12:24.675 Malloc2p5 : 6.14 31.26 1.95 0.00 0.00 888139.30 525.03 1517575.45
00:12:24.675 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x20
00:12:24.675 Malloc2p6 : 6.04 31.80 1.99 0.00 0.00 869531.16 487.80 1433689.37
00:12:24.675 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x20 length 0x20
00:12:24.675 Malloc2p6 : 6.14 31.26 1.95 0.00 0.00 882406.04 506.41 1502323.43
00:12:24.675 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x20
00:12:24.675 Malloc2p7 : 6.04 31.79 1.99 0.00 0.00 864231.03 495.24 1418437.35
00:12:24.675 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x20 length 0x20
00:12:24.675 Malloc2p7 : 6.14 31.25 1.95 0.00 0.00 875906.30 707.49 1479445.41
00:12:24.675 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x100
00:12:24.675 TestPT : 6.45 42.49 2.66 0.00 0.00 2479776.13 62914.56 3843507.67
00:12:24.675 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x100 length 0x100
00:12:24.675 TestPT : 6.52 41.75 2.61 0.00 0.00 2536481.51 96278.34 3965523.78
00:12:24.675 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x200
00:12:24.675 raid0 : 6.48 49.37 3.09 0.00 0.00 2116335.74 1683.08 4057035.87
00:12:24.675 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x200 length 0x200
00:12:24.675 raid0 : 6.54 46.49 2.91 0.00 0.00 2229619.22 1683.08 4179051.99
00:12:24.675 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x200
00:12:24.675 concat0 : 6.49 51.81 3.24 0.00 0.00 1972071.98 1697.98 3889263.71
00:12:24.675 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x200 length 0x200
00:12:24.675 concat0 : 6.52 51.55 3.22 0.00 0.00 1970238.14 1593.72 4026531.84
00:12:24.675 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x100
00:12:24.675 raid1 : 6.45 67.72 4.23 0.00 0.00 1480551.56 2204.39 3751995.58
00:12:24.675 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x100 length 0x100
00:12:24.675 raid1 : 6.50 59.09 3.69 0.00 0.00 1680394.36 2115.03 3858759.68
00:12:24.675 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x0 length 0x4e
00:12:24.675 AIO0 : 6.48 70.95 4.43 0.00 0.00 845634.17 1750.11 2211542.11
00:12:24.675 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:12:24.675 Verification LBA range: start 0x4e length 0x4e
00:12:24.675 AIO0 : 6.56 88.74 5.55 0.00 0.00 666354.36 1124.54 2272550.17
00:12:24.675 ===================================================================================================================
00:12:24.675 Total : 1699.62 106.23 0.00 0.00 1260804.46 487.80 4606108.39
00:12:27.212
00:12:27.212 real 0m10.254s
00:12:27.212 user 0m18.953s
00:12:27.212 sys 0m0.531s
00:12:27.212 23:56:22 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:27.212 23:56:22 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:12:27.212 ************************************
00:12:27.212 END TEST bdev_verify_big_io
00:12:27.212 ************************************
00:12:27.212 23:56:22 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:27.212 23:56:22 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:12:27.212 23:56:22 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:27.212 23:56:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:27.212 ************************************
00:12:27.212 START TEST bdev_write_zeroes
00:12:27.212 ************************************
00:12:27.212 23:56:22 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:27.212 [2024-07-24 23:56:22.825548] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
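Every case in this suite is launched through the run_test helper whose xtrace appears above. A simplified sketch of what it does, assumed from the banners and the real/user/sys summaries in this log rather than taken from the literal autotest_common.sh source, is:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # run the test command; bash's time prints real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }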
00:12:27.212 [2024-07-24 23:56:22.825756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74183 ] 00:12:27.212 [2024-07-24 23:56:22.997550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.476 [2024-07-24 23:56:23.176439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.735 [2024-07-24 23:56:23.507182] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:27.735 [2024-07-24 23:56:23.507290] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:27.735 [2024-07-24 23:56:23.515148] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:27.735 [2024-07-24 23:56:23.515244] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:27.735 [2024-07-24 23:56:23.523139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:27.735 [2024-07-24 23:56:23.523216] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:27.735 [2024-07-24 23:56:23.523249] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:27.994 [2024-07-24 23:56:23.690488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:27.994 [2024-07-24 23:56:23.690599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.994 [2024-07-24 23:56:23.690626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:12:27.994 [2024-07-24 23:56:23.690649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.994 [2024-07-24 23:56:23.693202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.994 [2024-07-24 23:56:23.693261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:28.252 Running I/O for 1 seconds... 
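The TestPT device written to below is the passthru vbdev just registered on top of Malloc3. Outside of the JSON config used here, the same stacking can be set up at runtime; a sketch using the standard rpc.py helper (the argument pairing is assumed to mirror this log's names):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT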
00:12:29.629
00:12:29.629 Latency(us)
00:12:29.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:29.629 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 Malloc0 : 1.05 5106.43 19.95 0.00 0.00 25047.37 659.08 40274.85
00:12:29.629 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 Malloc1p0 : 1.05 5099.45 19.92 0.00 0.00 25045.68 808.03 39559.91
00:12:29.629 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 Malloc1p1 : 1.06 5093.07 19.89 0.00 0.00 25025.41 819.20 38844.97
00:12:29.629 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 Malloc2p0 : 1.06 5086.48 19.87 0.00 0.00 25013.47 770.79 38130.04
00:12:29.629 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 Malloc2p1 : 1.06 5079.97 19.84 0.00 0.00 24996.16 819.20 37415.10
00:12:29.629 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 Malloc2p2 : 1.06 5073.80 19.82 0.00 0.00 24975.83 778.24 36700.16
00:12:29.629 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 Malloc2p3 : 1.06 5067.33 19.79 0.00 0.00 24963.56 819.20 36223.53
00:12:29.629 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 Malloc2p4 : 1.06 5060.59 19.77 0.00 0.00 24951.09 770.79 35508.60
00:12:29.629 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 Malloc2p5 : 1.06 5054.04 19.74 0.00 0.00 24937.73 819.20 35031.97
00:12:29.629 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 Malloc2p6 : 1.07 5047.51 19.72 0.00 0.00 24920.23 789.41 34317.03
00:12:29.629 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 Malloc2p7 : 1.07 5041.18 19.69 0.00 0.00 24903.43 811.75 33602.09
00:12:29.629 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 TestPT : 1.07 5035.09 19.67 0.00 0.00 24890.60 789.41 32887.16
00:12:29.629 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 raid0 : 1.07 5027.48 19.64 0.00 0.00 24871.49 1623.51 31695.59
00:12:29.629 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 concat0 : 1.07 5020.22 19.61 0.00 0.00 24812.84 1608.61 30504.03
00:12:29.629 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 raid1 : 1.07 5010.37 19.57 0.00 0.00 24758.20 2666.12 28835.84
00:12:29.629 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629 AIO0 : 1.07 4997.62 19.52 0.00 0.00 24695.09 1660.74 28478.37
00:12:29.629 ===================================================================================================================
00:12:29.629 Total : 80900.63 316.02 0.00 0.00 24925.53 659.08 40274.85
00:12:31.534
00:12:31.534 real 0m4.314s
00:12:31.534 user 0m3.760s
00:12:31.534 sys 0m0.397s
00:12:31.534 23:56:27 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:31.534 23:56:27 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:12:31.534 ************************************
00:12:31.534 END TEST bdev_write_zeroes
00:12:31.534 ************************************
00:12:31.534 23:56:27 blockdev_general
-- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:31.534 23:56:27 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:31.534 23:56:27 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.534 23:56:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:31.534 ************************************ 00:12:31.534 START TEST bdev_json_nonenclosed 00:12:31.534 ************************************ 00:12:31.534 23:56:27 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:31.534 [2024-07-24 23:56:27.192764] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:12:31.534 [2024-07-24 23:56:27.192980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74247 ] 00:12:31.534 [2024-07-24 23:56:27.365411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.794 [2024-07-24 23:56:27.546937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.794 [2024-07-24 23:56:27.547039] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:31.794 [2024-07-24 23:56:27.547066] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:31.794 [2024-07-24 23:56:27.547086] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:32.362 00:12:32.362 real 0m0.805s 00:12:32.362 user 0m0.572s 00:12:32.362 sys 0m0.132s 00:12:32.362 23:56:27 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.362 23:56:27 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:32.362 ************************************ 00:12:32.362 END TEST bdev_json_nonenclosed 00:12:32.362 ************************************ 00:12:32.362 23:56:27 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:32.362 23:56:27 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:32.362 23:56:27 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.362 23:56:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:32.362 ************************************ 00:12:32.362 START TEST bdev_json_nonarray 00:12:32.362 ************************************ 00:12:32.362 23:56:27 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:32.362 [2024-07-24 23:56:28.051884] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
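The two JSON negative tests differ only in the config file they hand bdevperf: nonenclosed.json tripped the "not enclosed in {}" error above, and nonarray.json, about to run, exercises the rule that 'subsystems' must be an array. A config that satisfies both checks (a minimal sketch inferred from those two error messages, not a file from the repo) would be:

    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": []
    }
    EOF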
00:12:32.362 [2024-07-24 23:56:28.052085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74278 ] 00:12:32.362 [2024-07-24 23:56:28.228125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.621 [2024-07-24 23:56:28.451358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.621 [2024-07-24 23:56:28.451510] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:32.621 [2024-07-24 23:56:28.451540] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:32.621 [2024-07-24 23:56:28.451558] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:33.189 00:12:33.189 real 0m0.886s 00:12:33.189 user 0m0.658s 00:12:33.189 sys 0m0.127s 00:12:33.189 ************************************ 00:12:33.189 END TEST bdev_json_nonarray 00:12:33.189 ************************************ 00:12:33.189 23:56:28 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.189 23:56:28 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:33.189 23:56:28 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:12:33.189 23:56:28 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:12:33.189 23:56:28 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:33.189 23:56:28 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.189 23:56:28 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:33.189 ************************************ 00:12:33.189 START TEST bdev_qos 00:12:33.189 ************************************ 00:12:33.189 23:56:28 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # qos_test_suite '' 00:12:33.189 23:56:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=74309 00:12:33.189 Process qos testing pid: 74309 00:12:33.189 23:56:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 74309' 00:12:33.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.189 23:56:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:12:33.189 23:56:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:12:33.189 23:56:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 74309 00:12:33.189 23:56:28 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # '[' -z 74309 ']' 00:12:33.189 23:56:28 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.189 23:56:28 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.189 23:56:28 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:33.189 23:56:28 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.189 23:56:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:33.189 [2024-07-24 23:56:28.992674] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:12:33.189 [2024-07-24 23:56:28.993172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74309 ] 00:12:33.448 [2024-07-24 23:56:29.164351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.707 [2024-07-24 23:56:29.357534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.273 23:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.273 23:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # return 0 00:12:34.273 23:56:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:12:34.273 23:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.273 23:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:34.273 Malloc_0 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_0 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:34.273 [ 00:12:34.273 { 00:12:34.273 "name": "Malloc_0", 00:12:34.273 "aliases": [ 00:12:34.273 "54afa2fe-6dbe-4e0d-bced-9eb1420b9e05" 00:12:34.273 ], 00:12:34.273 "product_name": "Malloc disk", 00:12:34.273 "block_size": 512, 00:12:34.273 "num_blocks": 262144, 00:12:34.273 "uuid": "54afa2fe-6dbe-4e0d-bced-9eb1420b9e05", 00:12:34.273 "assigned_rate_limits": { 00:12:34.273 "rw_ios_per_sec": 0, 00:12:34.273 "rw_mbytes_per_sec": 0, 00:12:34.273 "r_mbytes_per_sec": 0, 00:12:34.273 "w_mbytes_per_sec": 0 00:12:34.273 }, 00:12:34.273 "claimed": false, 00:12:34.273 "zoned": false, 00:12:34.273 "supported_io_types": { 00:12:34.273 "read": true, 00:12:34.273 "write": true, 00:12:34.273 "unmap": true, 00:12:34.273 "flush": true, 00:12:34.273 
"reset": true, 00:12:34.273 "nvme_admin": false, 00:12:34.273 "nvme_io": false, 00:12:34.273 "nvme_io_md": false, 00:12:34.273 "write_zeroes": true, 00:12:34.273 "zcopy": true, 00:12:34.273 "get_zone_info": false, 00:12:34.273 "zone_management": false, 00:12:34.273 "zone_append": false, 00:12:34.273 "compare": false, 00:12:34.273 "compare_and_write": false, 00:12:34.273 "abort": true, 00:12:34.273 "seek_hole": false, 00:12:34.273 "seek_data": false, 00:12:34.273 "copy": true, 00:12:34.273 "nvme_iov_md": false 00:12:34.273 }, 00:12:34.273 "memory_domains": [ 00:12:34.273 { 00:12:34.273 "dma_device_id": "system", 00:12:34.273 "dma_device_type": 1 00:12:34.273 }, 00:12:34.273 { 00:12:34.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.273 "dma_device_type": 2 00:12:34.273 } 00:12:34.273 ], 00:12:34.273 "driver_specific": {} 00:12:34.273 } 00:12:34.273 ] 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.273 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:34.531 Null_1 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Null_1 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.531 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:34.531 [ 00:12:34.531 { 00:12:34.531 "name": "Null_1", 00:12:34.531 "aliases": [ 00:12:34.531 "3932228e-949e-4246-aa05-6bb682b509ff" 00:12:34.532 ], 00:12:34.532 "product_name": "Null disk", 00:12:34.532 "block_size": 512, 00:12:34.532 "num_blocks": 262144, 00:12:34.532 "uuid": "3932228e-949e-4246-aa05-6bb682b509ff", 00:12:34.532 "assigned_rate_limits": { 00:12:34.532 "rw_ios_per_sec": 0, 00:12:34.532 "rw_mbytes_per_sec": 0, 00:12:34.532 "r_mbytes_per_sec": 0, 00:12:34.532 "w_mbytes_per_sec": 0 00:12:34.532 }, 00:12:34.532 "claimed": false, 00:12:34.532 "zoned": false, 00:12:34.532 "supported_io_types": { 00:12:34.532 "read": true, 00:12:34.532 "write": true, 00:12:34.532 "unmap": false, 00:12:34.532 "flush": 
false, 00:12:34.532 "reset": true, 00:12:34.532 "nvme_admin": false, 00:12:34.532 "nvme_io": false, 00:12:34.532 "nvme_io_md": false, 00:12:34.532 "write_zeroes": true, 00:12:34.532 "zcopy": false, 00:12:34.532 "get_zone_info": false, 00:12:34.532 "zone_management": false, 00:12:34.532 "zone_append": false, 00:12:34.532 "compare": false, 00:12:34.532 "compare_and_write": false, 00:12:34.532 "abort": true, 00:12:34.532 "seek_hole": false, 00:12:34.532 "seek_data": false, 00:12:34.532 "copy": false, 00:12:34.532 "nvme_iov_md": false 00:12:34.532 }, 00:12:34.532 "driver_specific": {} 00:12:34.532 } 00:12:34.532 ] 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:12:34.532 23:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:12:34.532 Running I/O for 60 seconds... 
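Behind the xtrace above, get_io_result samples iostat.py while this 60-second unthrottled randread job runs, and the suite then derives the QoS limit and its pass band from the measurement. Stripped of trace noise, the logic visible in this run is (a sketch; the values are the ones the log prints below, and the rounding rule is inferred from 58021 becoming 14000):

    # measure the unthrottled device (5 one-second samples, keep the last line)
    iostat_result=$(/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1)
    io_result=$(echo "$iostat_result" | awk '{print $2}')   # -> 58021.71 IOPS
    # throttle to a quarter of that, rounded down to a whole thousand
    iops_limit=$(( ${io_result%.*} / 4 / 1000 * 1000 ))     # -> 14000
    # accept a throttled result within +/-10%
    lower_limit=$(( iops_limit * 90 / 100 ))                # -> 12600
    upper_limit=$(( iops_limit * 110 / 100 ))               # -> 15400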
00:12:39.827 23:56:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 58021.71 232086.84 0.00 0.00 233472.00 0.00 0.00 ' 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=58021.71 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 58021 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=58021 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=14000 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 14000 -gt 1000 ']' 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 14000 Malloc_0 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 14000 IOPS Malloc_0 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.827 23:56:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:39.827 ************************************ 00:12:39.827 START TEST bdev_qos_iops 00:12:39.827 ************************************ 00:12:39.827 23:56:35 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # run_qos_test 14000 IOPS Malloc_0 00:12:39.827 23:56:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=14000 00:12:39.827 23:56:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:12:39.827 23:56:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:12:39.827 23:56:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:12:39.827 23:56:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:12:39.827 23:56:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:39.827 23:56:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:39.827 23:56:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:12:39.827 23:56:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 13983.35 55933.40 0.00 0.00 57064.00 0.00 0.00 ' 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=13983.35 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- 
bdev/blockdev.sh@384 -- # echo 13983 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=13983 00:12:45.096 ************************************ 00:12:45.096 END TEST bdev_qos_iops 00:12:45.096 ************************************ 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=12600 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=15400 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 13983 -lt 12600 ']' 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 13983 -gt 15400 ']' 00:12:45.096 00:12:45.096 real 0m5.235s 00:12:45.096 user 0m0.127s 00:12:45.096 sys 0m0.042s 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.096 23:56:40 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:12:45.096 23:56:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:12:45.096 23:56:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:12:45.096 23:56:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:12:45.096 23:56:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:45.096 23:56:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:12:45.096 23:56:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:45.096 23:56:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 26744.49 106977.94 0.00 0.00 108544.00 0.00 0.00 ' 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=108544.00 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 108544 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=108544 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=10 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 10 -lt 2 ']' 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 10 BANDWIDTH Null_1 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:12:50.391 23:56:45 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:50.391 ************************************ 00:12:50.391 START TEST bdev_qos_bw 00:12:50.391 ************************************ 00:12:50.391 23:56:45 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # run_qos_test 10 BANDWIDTH Null_1 00:12:50.391 23:56:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=10 00:12:50.391 23:56:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:12:50.391 23:56:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:12:50.391 23:56:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:12:50.391 23:56:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:12:50.391 23:56:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:50.391 23:56:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:12:50.392 23:56:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:50.392 23:56:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 2555.91 10223.65 0.00 0.00 10420.00 0.00 0.00 ' 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=10420.00 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 10420 00:12:55.664 ************************************ 00:12:55.664 END TEST bdev_qos_bw 00:12:55.664 ************************************ 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=10420 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=10240 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=9216 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=11264 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 10420 -lt 9216 ']' 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 10420 -gt 11264 ']' 00:12:55.664 00:12:55.664 real 0m5.254s 00:12:55.664 user 0m0.131s 00:12:55.664 sys 0m0.039s 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:12:55.664 23:56:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:12:55.664 23:56:51 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:55.664 23:56:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:55.664 23:56:51 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.664 23:56:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:12:55.664 23:56:51 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:55.664 23:56:51 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.664 23:56:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:55.664 ************************************ 00:12:55.664 START TEST bdev_qos_ro_bw 00:12:55.664 ************************************ 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:12:55.664 23:56:51 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 511.42 2045.67 0.00 0.00 2060.00 0.00 0.00 ' 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2060.00 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2060 00:13:00.938 ************************************ 00:13:00.938 END TEST bdev_qos_ro_bw 00:13:00.938 ************************************ 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2060 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2060 -lt 1843 ']' 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- 
bdev/blockdev.sh@399 -- # '[' 2060 -gt 2252 ']' 00:13:00.938 00:13:00.938 real 0m5.193s 00:13:00.938 user 0m0.138s 00:13:00.938 sys 0m0.034s 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:00.938 23:56:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:13:00.938 23:56:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:00.938 23:56:56 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.938 23:56:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:01.505 00:13:01.505 Latency(us) 00:13:01.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.505 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:01.505 Malloc_0 : 26.75 19601.68 76.57 0.00 0.00 12939.23 2412.92 503316.48 00:13:01.505 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:01.505 Null_1 : 26.94 23284.58 90.96 0.00 0.00 10969.59 796.86 189696.93 00:13:01.505 =================================================================================================================== 00:13:01.505 Total : 42886.26 167.52 0.00 0.00 11866.36 796.86 503316.48 00:13:01.505 0 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 74309 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # '[' -z 74309 ']' 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # kill -0 74309 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # uname 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74309 00:13:01.505 killing process with pid 74309 00:13:01.505 Received shutdown signal, test time was about 26.983571 seconds 00:13:01.505 00:13:01.505 Latency(us) 00:13:01.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.505 =================================================================================================================== 00:13:01.505 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74309' 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@969 -- # kill 74309 00:13:01.505 23:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@974 -- # wait 74309 00:13:02.881 ************************************ 00:13:02.881 END TEST bdev_qos 00:13:02.881 
************************************ 00:13:02.881 23:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:13:02.881 00:13:02.881 real 0m29.583s 00:13:02.881 user 0m30.522s 00:13:02.881 sys 0m0.668s 00:13:02.881 23:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.881 23:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:02.881 23:56:58 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:02.881 23:56:58 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:02.881 23:56:58 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.881 23:56:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:02.881 ************************************ 00:13:02.881 START TEST bdev_qd_sampling 00:13:02.881 ************************************ 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # qd_sampling_test_suite '' 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=74727 00:13:02.881 Process bdev QD sampling period testing pid: 74727 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 74727' 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 74727 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # '[' -z 74727 ']' 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.881 23:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:02.881 [2024-07-24 23:56:58.620101] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
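The QD sampling suite starting here launches bdevperf in wait mode, creates a malloc bdev, and turns on queue-depth sampling before driving I/O. A minimal sketch of the RPC sequence it exercises, using the method names and arguments from the trace (paths are shown relative to the SPDK repo root, and the two-shell split is an assumption for illustration; rpc_cmd in the trace is treated here as a thin wrapper over scripts/rpc.py):

    # shell 1: bdevperf waits (-z) on cores 0-1, queue depth 256, 4 KiB random reads
    build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''

    # shell 2: create the target and sample its queue depth every 10 ms
    scripts/rpc.py bdev_malloc_create -b Malloc_QD 128 512
    scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
    examples/bdev/bdevperf/bdevperf.py perform_tests
    scripts/rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period'

The statistics that come back below are self-consistent: with queue_depth 512 sampled over io_time 20, weighted_io_time is 512 * 20 = 10240.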
00:13:02.881 [2024-07-24 23:56:58.620259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74727 ] 00:13:03.140 [2024-07-24 23:56:58.786027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:03.399 [2024-07-24 23:56:59.009748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.399 [2024-07-24 23:56:59.009759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # return 0 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:03.966 Malloc_QD 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_QD 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # local i 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:03.966 [ 00:13:03.966 { 00:13:03.966 "name": "Malloc_QD", 00:13:03.966 "aliases": [ 00:13:03.966 "66f7345a-2bcc-445e-9133-bf4182d0b2e2" 00:13:03.966 ], 00:13:03.966 "product_name": "Malloc disk", 00:13:03.966 "block_size": 512, 00:13:03.966 "num_blocks": 262144, 00:13:03.966 "uuid": "66f7345a-2bcc-445e-9133-bf4182d0b2e2", 00:13:03.966 "assigned_rate_limits": { 00:13:03.966 "rw_ios_per_sec": 0, 00:13:03.966 "rw_mbytes_per_sec": 0, 00:13:03.966 "r_mbytes_per_sec": 0, 00:13:03.966 "w_mbytes_per_sec": 0 00:13:03.966 }, 00:13:03.966 "claimed": false, 00:13:03.966 "zoned": false, 00:13:03.966 "supported_io_types": { 00:13:03.966 "read": true, 00:13:03.966 "write": true, 00:13:03.966 "unmap": true, 00:13:03.966 "flush": true, 00:13:03.966 "reset": true, 00:13:03.966 "nvme_admin": false, 
00:13:03.966 "nvme_io": false, 00:13:03.966 "nvme_io_md": false, 00:13:03.966 "write_zeroes": true, 00:13:03.966 "zcopy": true, 00:13:03.966 "get_zone_info": false, 00:13:03.966 "zone_management": false, 00:13:03.966 "zone_append": false, 00:13:03.966 "compare": false, 00:13:03.966 "compare_and_write": false, 00:13:03.966 "abort": true, 00:13:03.966 "seek_hole": false, 00:13:03.966 "seek_data": false, 00:13:03.966 "copy": true, 00:13:03.966 "nvme_iov_md": false 00:13:03.966 }, 00:13:03.966 "memory_domains": [ 00:13:03.966 { 00:13:03.966 "dma_device_id": "system", 00:13:03.966 "dma_device_type": 1 00:13:03.966 }, 00:13:03.966 { 00:13:03.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.966 "dma_device_type": 2 00:13:03.966 } 00:13:03.966 ], 00:13:03.966 "driver_specific": {} 00:13:03.966 } 00:13:03.966 ] 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@907 -- # return 0 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:13:03.966 23:56:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:04.224 Running I/O for 5 seconds... 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:13:06.124 "tick_rate": 2200000000, 00:13:06.124 "ticks": 1661774894741, 00:13:06.124 "bdevs": [ 00:13:06.124 { 00:13:06.124 "name": "Malloc_QD", 00:13:06.124 "bytes_read": 771789312, 00:13:06.124 "num_read_ops": 188419, 00:13:06.124 "bytes_written": 0, 00:13:06.124 "num_write_ops": 0, 00:13:06.124 "bytes_unmapped": 0, 00:13:06.124 "num_unmap_ops": 0, 00:13:06.124 "bytes_copied": 0, 00:13:06.124 "num_copy_ops": 0, 00:13:06.124 "read_latency_ticks": 2126051045828, 00:13:06.124 "max_read_latency_ticks": 12812363, 00:13:06.124 "min_read_latency_ticks": 325568, 00:13:06.124 "write_latency_ticks": 0, 00:13:06.124 "max_write_latency_ticks": 0, 00:13:06.124 "min_write_latency_ticks": 0, 00:13:06.124 "unmap_latency_ticks": 0, 00:13:06.124 "max_unmap_latency_ticks": 0, 00:13:06.124 
"min_unmap_latency_ticks": 0, 00:13:06.124 "copy_latency_ticks": 0, 00:13:06.124 "max_copy_latency_ticks": 0, 00:13:06.124 "min_copy_latency_ticks": 0, 00:13:06.124 "io_error": {}, 00:13:06.124 "queue_depth_polling_period": 10, 00:13:06.124 "queue_depth": 512, 00:13:06.124 "io_time": 20, 00:13:06.124 "weighted_io_time": 10240 00:13:06.124 } 00:13:06.124 ] 00:13:06.124 }' 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:13:06.124 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:06.125 00:13:06.125 Latency(us) 00:13:06.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.125 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:06.125 Malloc_QD : 1.93 49536.50 193.50 0.00 0.00 5154.67 1280.93 6672.76 00:13:06.125 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:06.125 Malloc_QD : 1.93 50028.43 195.42 0.00 0.00 5104.26 942.08 5808.87 00:13:06.125 =================================================================================================================== 00:13:06.125 Total : 99564.94 388.93 0.00 0.00 5129.33 942.08 6672.76 00:13:06.125 0 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 74727 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # '[' -z 74727 ']' 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # kill -0 74727 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # uname 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74727 00:13:06.125 killing process with pid 74727 00:13:06.125 Received shutdown signal, test time was about 2.065528 seconds 00:13:06.125 00:13:06.125 Latency(us) 00:13:06.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.125 =================================================================================================================== 00:13:06.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74727' 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@969 -- # kill 74727 00:13:06.125 23:57:01 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@974 -- # wait 74727 00:13:07.530 23:57:03 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:13:07.530 00:13:07.530 real 0m4.664s 00:13:07.530 user 0m8.679s 00:13:07.530 sys 0m0.389s 00:13:07.530 23:57:03 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.530 ************************************ 00:13:07.530 END TEST bdev_qd_sampling 00:13:07.530 ************************************ 00:13:07.530 23:57:03 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:07.530 23:57:03 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:13:07.530 23:57:03 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:07.530 23:57:03 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.530 23:57:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:07.530 ************************************ 00:13:07.530 START TEST bdev_error 00:13:07.530 ************************************ 00:13:07.530 23:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # error_test_suite '' 00:13:07.530 23:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:13:07.530 Process error testing pid: 74810 00:13:07.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.530 23:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:13:07.530 23:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:13:07.530 23:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=74810 00:13:07.530 23:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 74810' 00:13:07.530 23:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 74810 00:13:07.530 23:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:07.530 23:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 74810 ']' 00:13:07.530 23:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.530 23:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:07.530 23:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.530 23:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:07.530 23:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:07.530 [2024-07-24 23:57:03.346478] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
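The error suite starting here layers an error-injection bdev (EE_Dev_1) over a malloc bdev and tells it to fail five I/Os while bdevperf runs with continue-on-error (-f, visible in the command line above). A minimal sketch of the setup and teardown RPCs, taken from the commands in the trace (again assuming rpc_cmd maps to scripts/rpc.py):

    scripts/rpc.py bdev_malloc_create -b Dev_1 128 512      # backing bdev
    scripts/rpc.py bdev_error_create Dev_1                  # wraps it as EE_Dev_1
    scripts/rpc.py bdev_malloc_create -b Dev_2 128 512      # control bdev, no injection
    scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail 5 I/Os of any type
    # ... run I/O, then swap the error bdev out from under the running job:
    scripts/rpc.py bdev_error_delete EE_Dev_1
    scripts/rpc.py bdev_malloc_delete Dev_1

The five injected failures show up in the summary below as 5.67 Fail/s over the 0.88 s that EE_Dev_1 stayed in the job (5.67 * 0.88 ≈ 5).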
00:13:07.530 [2024-07-24 23:57:03.346908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74810 ] 00:13:07.788 [2024-07-24 23:57:03.518698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.046 [2024-07-24 23:57:03.703730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:13:08.612 23:57:04 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:08.612 Dev_1 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.612 23:57:04 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:08.612 [ 00:13:08.612 { 00:13:08.612 "name": "Dev_1", 00:13:08.612 "aliases": [ 00:13:08.612 "56906d37-59a4-47fe-936a-b1808ed61b61" 00:13:08.612 ], 00:13:08.612 "product_name": "Malloc disk", 00:13:08.612 "block_size": 512, 00:13:08.612 "num_blocks": 262144, 00:13:08.612 "uuid": "56906d37-59a4-47fe-936a-b1808ed61b61", 00:13:08.612 "assigned_rate_limits": { 00:13:08.612 "rw_ios_per_sec": 0, 00:13:08.612 "rw_mbytes_per_sec": 0, 00:13:08.612 "r_mbytes_per_sec": 0, 00:13:08.612 "w_mbytes_per_sec": 0 00:13:08.612 }, 00:13:08.612 "claimed": false, 00:13:08.612 "zoned": false, 00:13:08.612 "supported_io_types": { 00:13:08.612 "read": true, 00:13:08.612 "write": true, 00:13:08.612 "unmap": true, 00:13:08.612 "flush": true, 00:13:08.612 "reset": true, 00:13:08.612 "nvme_admin": false, 00:13:08.612 "nvme_io": false, 00:13:08.612 "nvme_io_md": false, 00:13:08.612 "write_zeroes": true, 00:13:08.612 "zcopy": true, 00:13:08.612 "get_zone_info": false, 00:13:08.612 "zone_management": false, 00:13:08.612 "zone_append": false, 
00:13:08.612 "compare": false, 00:13:08.612 "compare_and_write": false, 00:13:08.612 "abort": true, 00:13:08.612 "seek_hole": false, 00:13:08.612 "seek_data": false, 00:13:08.612 "copy": true, 00:13:08.612 "nvme_iov_md": false 00:13:08.612 }, 00:13:08.612 "memory_domains": [ 00:13:08.612 { 00:13:08.612 "dma_device_id": "system", 00:13:08.612 "dma_device_type": 1 00:13:08.612 }, 00:13:08.612 { 00:13:08.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.612 "dma_device_type": 2 00:13:08.612 } 00:13:08.612 ], 00:13:08.612 "driver_specific": {} 00:13:08.612 } 00:13:08.612 ] 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:08.612 23:57:04 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:08.612 true 00:13:08.612 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.613 23:57:04 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:08.613 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.613 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:08.871 Dev_2 00:13:08.871 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.871 23:57:04 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:13:08.871 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:13:08.871 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:08.871 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:08.871 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:08.871 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:08.871 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:08.871 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.871 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:08.871 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.872 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:08.872 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.872 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:08.872 [ 00:13:08.872 { 00:13:08.872 "name": "Dev_2", 00:13:08.872 "aliases": [ 00:13:08.872 "359bbba4-4f49-449a-8323-b0221f68bb64" 00:13:08.872 ], 00:13:08.872 "product_name": "Malloc disk", 00:13:08.872 "block_size": 512, 00:13:08.872 "num_blocks": 262144, 00:13:08.872 "uuid": "359bbba4-4f49-449a-8323-b0221f68bb64", 00:13:08.872 "assigned_rate_limits": { 00:13:08.872 "rw_ios_per_sec": 0, 00:13:08.872 "rw_mbytes_per_sec": 0, 00:13:08.872 "r_mbytes_per_sec": 0, 00:13:08.872 "w_mbytes_per_sec": 0 00:13:08.872 }, 00:13:08.872 "claimed": 
false, 00:13:08.872 "zoned": false, 00:13:08.872 "supported_io_types": { 00:13:08.872 "read": true, 00:13:08.872 "write": true, 00:13:08.872 "unmap": true, 00:13:08.872 "flush": true, 00:13:08.872 "reset": true, 00:13:08.872 "nvme_admin": false, 00:13:08.872 "nvme_io": false, 00:13:08.872 "nvme_io_md": false, 00:13:08.872 "write_zeroes": true, 00:13:08.872 "zcopy": true, 00:13:08.872 "get_zone_info": false, 00:13:08.872 "zone_management": false, 00:13:08.872 "zone_append": false, 00:13:08.872 "compare": false, 00:13:08.872 "compare_and_write": false, 00:13:08.872 "abort": true, 00:13:08.872 "seek_hole": false, 00:13:08.872 "seek_data": false, 00:13:08.872 "copy": true, 00:13:08.872 "nvme_iov_md": false 00:13:08.872 }, 00:13:08.872 "memory_domains": [ 00:13:08.872 { 00:13:08.872 "dma_device_id": "system", 00:13:08.872 "dma_device_type": 1 00:13:08.872 }, 00:13:08.872 { 00:13:08.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.872 "dma_device_type": 2 00:13:08.872 } 00:13:08.872 ], 00:13:08.872 "driver_specific": {} 00:13:08.872 } 00:13:08.872 ] 00:13:08.872 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.872 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:08.872 23:57:04 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:08.872 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.872 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:08.872 23:57:04 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.872 23:57:04 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:13:08.872 23:57:04 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:08.872 Running I/O for 5 seconds... 00:13:09.808 23:57:05 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 74810 00:13:09.808 23:57:05 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. Pid: 74810' 00:13:09.808 Process is existed as continue on error is set. 
Pid: 74810 00:13:09.808 23:57:05 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:09.808 23:57:05 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.808 23:57:05 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:09.808 23:57:05 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.808 23:57:05 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:09.808 23:57:05 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.808 23:57:05 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:10.067 Timeout while waiting for response: 00:13:10.067 00:13:10.067 00:13:10.067 23:57:05 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.067 23:57:05 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:13:14.255 00:13:14.255 Latency(us) 00:13:14.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.255 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:14.255 EE_Dev_1 : 0.88 34672.42 135.44 5.67 0.00 458.02 153.60 953.25 00:13:14.255 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:14.255 Dev_2 : 5.00 70243.01 274.39 0.00 0.00 224.39 72.61 276442.76 00:13:14.255 =================================================================================================================== 00:13:14.255 Total : 104915.42 409.83 5.67 0.00 243.08 72.61 276442.76 00:13:15.190 23:57:10 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 74810 00:13:15.190 23:57:10 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # '[' -z 74810 ']' 00:13:15.190 23:57:10 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # kill -0 74810 00:13:15.190 23:57:10 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # uname 00:13:15.190 23:57:10 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.190 23:57:10 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74810 00:13:15.190 killing process with pid 74810 00:13:15.190 Received shutdown signal, test time was about 5.000000 seconds 00:13:15.190 00:13:15.190 Latency(us) 00:13:15.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.190 =================================================================================================================== 00:13:15.190 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:15.190 23:57:10 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:15.190 23:57:10 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:15.190 23:57:10 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74810' 00:13:15.190 23:57:10 blockdev_general.bdev_error -- common/autotest_common.sh@969 -- # kill 74810 00:13:15.190 23:57:10 blockdev_general.bdev_error -- common/autotest_common.sh@974 -- # wait 74810 00:13:16.598 Process error testing pid: 74911 00:13:16.598 23:57:12 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=74911 00:13:16.598 23:57:12 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 74911' 00:13:16.598 23:57:12 blockdev_general.bdev_error 
-- bdev/blockdev.sh@501 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:16.598 23:57:12 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 74911 00:13:16.598 23:57:12 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 74911 ']' 00:13:16.598 23:57:12 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.598 23:57:12 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.598 23:57:12 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.598 23:57:12 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.598 23:57:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:16.598 [2024-07-24 23:57:12.294234] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:13:16.598 [2024-07-24 23:57:12.294421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74911 ] 00:13:16.598 [2024-07-24 23:57:12.463214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.856 [2024-07-24 23:57:12.635203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.422 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.422 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:13:17.422 23:57:13 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:17.422 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.422 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:17.680 Dev_1 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.680 23:57:13 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:17.680 23:57:13 blockdev_general.bdev_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:17.680 [ 00:13:17.680 { 00:13:17.680 "name": "Dev_1", 00:13:17.680 "aliases": [ 00:13:17.680 "5596b596-8320-4f49-afba-9a29b056a959" 00:13:17.680 ], 00:13:17.680 "product_name": "Malloc disk", 00:13:17.680 "block_size": 512, 00:13:17.680 "num_blocks": 262144, 00:13:17.680 "uuid": "5596b596-8320-4f49-afba-9a29b056a959", 00:13:17.680 "assigned_rate_limits": { 00:13:17.680 "rw_ios_per_sec": 0, 00:13:17.680 "rw_mbytes_per_sec": 0, 00:13:17.680 "r_mbytes_per_sec": 0, 00:13:17.680 "w_mbytes_per_sec": 0 00:13:17.680 }, 00:13:17.680 "claimed": false, 00:13:17.680 "zoned": false, 00:13:17.680 "supported_io_types": { 00:13:17.680 "read": true, 00:13:17.680 "write": true, 00:13:17.680 "unmap": true, 00:13:17.680 "flush": true, 00:13:17.680 "reset": true, 00:13:17.680 "nvme_admin": false, 00:13:17.680 "nvme_io": false, 00:13:17.680 "nvme_io_md": false, 00:13:17.680 "write_zeroes": true, 00:13:17.680 "zcopy": true, 00:13:17.680 "get_zone_info": false, 00:13:17.680 "zone_management": false, 00:13:17.680 "zone_append": false, 00:13:17.680 "compare": false, 00:13:17.680 "compare_and_write": false, 00:13:17.680 "abort": true, 00:13:17.680 "seek_hole": false, 00:13:17.680 "seek_data": false, 00:13:17.680 "copy": true, 00:13:17.680 "nvme_iov_md": false 00:13:17.680 }, 00:13:17.680 "memory_domains": [ 00:13:17.680 { 00:13:17.680 "dma_device_id": "system", 00:13:17.680 "dma_device_type": 1 00:13:17.680 }, 00:13:17.680 { 00:13:17.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.680 "dma_device_type": 2 00:13:17.680 } 00:13:17.680 ], 00:13:17.680 "driver_specific": {} 00:13:17.680 } 00:13:17.680 ] 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:17.680 23:57:13 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:17.680 true 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.680 23:57:13 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:17.680 Dev_2 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.680 23:57:13 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd 
bdev_wait_for_examine 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.680 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:17.680 [ 00:13:17.680 { 00:13:17.680 "name": "Dev_2", 00:13:17.680 "aliases": [ 00:13:17.680 "b8250f3b-06bc-4ae8-982a-4b2f2ab7aad1" 00:13:17.680 ], 00:13:17.680 "product_name": "Malloc disk", 00:13:17.680 "block_size": 512, 00:13:17.680 "num_blocks": 262144, 00:13:17.680 "uuid": "b8250f3b-06bc-4ae8-982a-4b2f2ab7aad1", 00:13:17.680 "assigned_rate_limits": { 00:13:17.680 "rw_ios_per_sec": 0, 00:13:17.680 "rw_mbytes_per_sec": 0, 00:13:17.680 "r_mbytes_per_sec": 0, 00:13:17.680 "w_mbytes_per_sec": 0 00:13:17.680 }, 00:13:17.680 "claimed": false, 00:13:17.680 "zoned": false, 00:13:17.680 "supported_io_types": { 00:13:17.680 "read": true, 00:13:17.680 "write": true, 00:13:17.680 "unmap": true, 00:13:17.680 "flush": true, 00:13:17.680 "reset": true, 00:13:17.680 "nvme_admin": false, 00:13:17.680 "nvme_io": false, 00:13:17.939 "nvme_io_md": false, 00:13:17.939 "write_zeroes": true, 00:13:17.939 "zcopy": true, 00:13:17.939 "get_zone_info": false, 00:13:17.939 "zone_management": false, 00:13:17.939 "zone_append": false, 00:13:17.939 "compare": false, 00:13:17.939 "compare_and_write": false, 00:13:17.939 "abort": true, 00:13:17.939 "seek_hole": false, 00:13:17.939 "seek_data": false, 00:13:17.939 "copy": true, 00:13:17.939 "nvme_iov_md": false 00:13:17.939 }, 00:13:17.939 "memory_domains": [ 00:13:17.939 { 00:13:17.939 "dma_device_id": "system", 00:13:17.939 "dma_device_type": 1 00:13:17.939 }, 00:13:17.939 { 00:13:17.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.939 "dma_device_type": 2 00:13:17.939 } 00:13:17.939 ], 00:13:17.939 "driver_specific": {} 00:13:17.939 } 00:13:17.939 ] 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:17.939 23:57:13 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.939 23:57:13 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 74911 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # local es=0 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # valid_exec_arg wait 74911 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@638 -- # local arg=wait 00:13:17.939 23:57:13 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # type -t wait 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.939 23:57:13 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # wait 74911 00:13:17.939 Running I/O for 5 seconds... 00:13:17.939 task offset: 196856 on job bdev=EE_Dev_1 fails 00:13:17.939 00:13:17.939 Latency(us) 00:13:17.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.939 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:17.939 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:17.939 EE_Dev_1 : 0.00 25522.04 99.70 5800.46 0.00 421.89 165.70 733.56 00:13:17.939 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:17.939 Dev_2 : 0.00 19288.73 75.35 0.00 0.00 577.80 148.01 1050.07 00:13:17.939 =================================================================================================================== 00:13:17.939 Total : 44810.77 175.04 5800.46 0.00 506.45 148.01 1050.07 00:13:17.939 [2024-07-24 23:57:13.693657] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:17.939 request: 00:13:17.939 { 00:13:17.939 "method": "perform_tests", 00:13:17.939 "req_id": 1 00:13:17.939 } 00:13:17.939 Got JSON-RPC error response 00:13:17.939 response: 00:13:17.939 { 00:13:17.939 "code": -32603, 00:13:17.939 "message": "bdevperf failed with error Operation not permitted" 00:13:17.939 } 00:13:19.842 23:57:15 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # es=255 00:13:19.842 23:57:15 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:19.842 23:57:15 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=127 00:13:19.842 23:57:15 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # case "$es" in 00:13:19.842 23:57:15 blockdev_general.bdev_error -- common/autotest_common.sh@670 -- # es=1 00:13:19.842 23:57:15 blockdev_general.bdev_error -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:19.842 00:13:19.842 real 0m12.021s 00:13:19.842 user 0m12.336s 00:13:19.842 sys 0m0.794s 00:13:19.842 23:57:15 blockdev_general.bdev_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.842 ************************************ 00:13:19.842 END TEST bdev_error 00:13:19.842 ************************************ 00:13:19.842 23:57:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:19.842 23:57:15 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:13:19.842 23:57:15 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:19.842 23:57:15 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.842 23:57:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:19.842 ************************************ 00:13:19.842 START TEST bdev_stat 00:13:19.842 ************************************ 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # stat_test_suite '' 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=74969 00:13:19.842 Process Bdev IO statistics testing pid: 74969 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 
'Process Bdev IO statistics testing pid: 74969' 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 74969 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # '[' -z 74969 ']' 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.842 23:57:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:19.842 [2024-07-24 23:57:15.409407] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:13:19.842 [2024-07-24 23:57:15.409562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74969 ] 00:13:19.842 [2024-07-24 23:57:15.575448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:20.102 [2024-07-24 23:57:15.803334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.102 [2024-07-24 23:57:15.803345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # return 0 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:20.670 Malloc_STAT 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_STAT 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # local i 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.670 23:57:16 
blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.670 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:20.670 [ 00:13:20.670 { 00:13:20.670 "name": "Malloc_STAT", 00:13:20.670 "aliases": [ 00:13:20.670 "56309d24-4933-49b4-861f-2113a11a3267" 00:13:20.670 ], 00:13:20.670 "product_name": "Malloc disk", 00:13:20.670 "block_size": 512, 00:13:20.670 "num_blocks": 262144, 00:13:20.671 "uuid": "56309d24-4933-49b4-861f-2113a11a3267", 00:13:20.671 "assigned_rate_limits": { 00:13:20.671 "rw_ios_per_sec": 0, 00:13:20.671 "rw_mbytes_per_sec": 0, 00:13:20.671 "r_mbytes_per_sec": 0, 00:13:20.671 "w_mbytes_per_sec": 0 00:13:20.671 }, 00:13:20.671 "claimed": false, 00:13:20.671 "zoned": false, 00:13:20.671 "supported_io_types": { 00:13:20.671 "read": true, 00:13:20.671 "write": true, 00:13:20.671 "unmap": true, 00:13:20.671 "flush": true, 00:13:20.671 "reset": true, 00:13:20.671 "nvme_admin": false, 00:13:20.671 "nvme_io": false, 00:13:20.671 "nvme_io_md": false, 00:13:20.671 "write_zeroes": true, 00:13:20.671 "zcopy": true, 00:13:20.671 "get_zone_info": false, 00:13:20.671 "zone_management": false, 00:13:20.671 "zone_append": false, 00:13:20.671 "compare": false, 00:13:20.671 "compare_and_write": false, 00:13:20.671 "abort": true, 00:13:20.671 "seek_hole": false, 00:13:20.671 "seek_data": false, 00:13:20.671 "copy": true, 00:13:20.671 "nvme_iov_md": false 00:13:20.671 }, 00:13:20.671 "memory_domains": [ 00:13:20.671 { 00:13:20.671 "dma_device_id": "system", 00:13:20.671 "dma_device_type": 1 00:13:20.671 }, 00:13:20.671 { 00:13:20.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.671 "dma_device_type": 2 00:13:20.671 } 00:13:20.671 ], 00:13:20.671 "driver_specific": {} 00:13:20.671 } 00:13:20.671 ] 00:13:20.671 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.671 23:57:16 blockdev_general.bdev_stat -- common/autotest_common.sh@907 -- # return 0 00:13:20.671 23:57:16 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:13:20.671 23:57:16 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:20.930 Running I/O for 10 seconds... 
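The ten-second run here is bracketed by three iostat reads, and the suite's invariant is that the per-channel sum taken in the middle must land between the two whole-device totals, since I/O keeps completing between snapshots. A minimal sketch of that check with the values from this run (variable names follow stat_function_test in the trace; rpc.py invocation assumed as above):

    io_count1=$(scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')   # 193539
    per_ch=$(scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c)                                    # per-channel view
    ch1=$(jq -r '.channels[0].num_read_ops' <<< "$per_ch")    # 97536 on thread_id 2
    ch2=$(jq -r '.channels[1].num_read_ops' <<< "$per_ch")    # 99072 on thread_id 3
    sum=$((ch1 + ch2))                                        # 196608
    io_count2=$(scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')   # 200451
    [ "$sum" -ge "$io_count1" ] && [ "$sum" -le "$io_count2" ]   # 193539 <= 196608 <= 200451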
00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:13:22.839 "tick_rate": 2200000000, 00:13:22.839 "ticks": 1698686813386, 00:13:22.839 "bdevs": [ 00:13:22.839 { 00:13:22.839 "name": "Malloc_STAT", 00:13:22.839 "bytes_read": 792760832, 00:13:22.839 "num_read_ops": 193539, 00:13:22.839 "bytes_written": 0, 00:13:22.839 "num_write_ops": 0, 00:13:22.839 "bytes_unmapped": 0, 00:13:22.839 "num_unmap_ops": 0, 00:13:22.839 "bytes_copied": 0, 00:13:22.839 "num_copy_ops": 0, 00:13:22.839 "read_latency_ticks": 2137031447762, 00:13:22.839 "max_read_latency_ticks": 15740263, 00:13:22.839 "min_read_latency_ticks": 308540, 00:13:22.839 "write_latency_ticks": 0, 00:13:22.839 "max_write_latency_ticks": 0, 00:13:22.839 "min_write_latency_ticks": 0, 00:13:22.839 "unmap_latency_ticks": 0, 00:13:22.839 "max_unmap_latency_ticks": 0, 00:13:22.839 "min_unmap_latency_ticks": 0, 00:13:22.839 "copy_latency_ticks": 0, 00:13:22.839 "max_copy_latency_ticks": 0, 00:13:22.839 "min_copy_latency_ticks": 0, 00:13:22.839 "io_error": {} 00:13:22.839 } 00:13:22.839 ] 00:13:22.839 }' 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=193539 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:13:22.839 "tick_rate": 2200000000, 00:13:22.839 "ticks": 1698757375160, 00:13:22.839 "name": "Malloc_STAT", 00:13:22.839 "channels": [ 00:13:22.839 { 00:13:22.839 "thread_id": 2, 00:13:22.839 "bytes_read": 399507456, 00:13:22.839 "num_read_ops": 97536, 00:13:22.839 "bytes_written": 0, 00:13:22.839 "num_write_ops": 0, 00:13:22.839 "bytes_unmapped": 0, 00:13:22.839 "num_unmap_ops": 0, 
00:13:22.839 "bytes_copied": 0, 00:13:22.839 "num_copy_ops": 0, 00:13:22.839 "read_latency_ticks": 1085803611896, 00:13:22.839 "max_read_latency_ticks": 15740263, 00:13:22.839 "min_read_latency_ticks": 8359489, 00:13:22.839 "write_latency_ticks": 0, 00:13:22.839 "max_write_latency_ticks": 0, 00:13:22.839 "min_write_latency_ticks": 0, 00:13:22.839 "unmap_latency_ticks": 0, 00:13:22.839 "max_unmap_latency_ticks": 0, 00:13:22.839 "min_unmap_latency_ticks": 0, 00:13:22.839 "copy_latency_ticks": 0, 00:13:22.839 "max_copy_latency_ticks": 0, 00:13:22.839 "min_copy_latency_ticks": 0 00:13:22.839 }, 00:13:22.839 { 00:13:22.839 "thread_id": 3, 00:13:22.839 "bytes_read": 405798912, 00:13:22.839 "num_read_ops": 99072, 00:13:22.839 "bytes_written": 0, 00:13:22.839 "num_write_ops": 0, 00:13:22.839 "bytes_unmapped": 0, 00:13:22.839 "num_unmap_ops": 0, 00:13:22.839 "bytes_copied": 0, 00:13:22.839 "num_copy_ops": 0, 00:13:22.839 "read_latency_ticks": 1086784166642, 00:13:22.839 "max_read_latency_ticks": 12834884, 00:13:22.839 "min_read_latency_ticks": 8335741, 00:13:22.839 "write_latency_ticks": 0, 00:13:22.839 "max_write_latency_ticks": 0, 00:13:22.839 "min_write_latency_ticks": 0, 00:13:22.839 "unmap_latency_ticks": 0, 00:13:22.839 "max_unmap_latency_ticks": 0, 00:13:22.839 "min_unmap_latency_ticks": 0, 00:13:22.839 "copy_latency_ticks": 0, 00:13:22.839 "max_copy_latency_ticks": 0, 00:13:22.839 "min_copy_latency_ticks": 0 00:13:22.839 } 00:13:22.839 ] 00:13:22.839 }' 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=97536 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=97536 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=99072 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=196608 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:13:22.839 "tick_rate": 2200000000, 00:13:22.839 "ticks": 1698846916501, 00:13:22.839 "bdevs": [ 00:13:22.839 { 00:13:22.839 "name": "Malloc_STAT", 00:13:22.839 "bytes_read": 821072384, 00:13:22.839 "num_read_ops": 200451, 00:13:22.839 "bytes_written": 0, 00:13:22.839 "num_write_ops": 0, 00:13:22.839 "bytes_unmapped": 0, 00:13:22.839 "num_unmap_ops": 0, 00:13:22.839 "bytes_copied": 0, 00:13:22.839 "num_copy_ops": 0, 00:13:22.839 "read_latency_ticks": 2219102575604, 00:13:22.839 "max_read_latency_ticks": 15740263, 00:13:22.839 "min_read_latency_ticks": 308540, 00:13:22.839 "write_latency_ticks": 0, 00:13:22.839 "max_write_latency_ticks": 0, 00:13:22.839 "min_write_latency_ticks": 0, 00:13:22.839 "unmap_latency_ticks": 0, 00:13:22.839 "max_unmap_latency_ticks": 0, 00:13:22.839 "min_unmap_latency_ticks": 0, 00:13:22.839 "copy_latency_ticks": 0, 00:13:22.839 "max_copy_latency_ticks": 0, 00:13:22.839 
"min_copy_latency_ticks": 0, 00:13:22.839 "io_error": {} 00:13:22.839 } 00:13:22.839 ] 00:13:22.839 }' 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=200451 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 196608 -lt 193539 ']' 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 196608 -gt 200451 ']' 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.839 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:22.839 00:13:22.839 Latency(us) 00:13:22.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.839 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:22.839 Malloc_STAT : 2.01 50288.07 196.44 0.00 0.00 5077.67 1251.14 7179.17 00:13:22.839 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:22.839 Malloc_STAT : 2.01 51155.05 199.82 0.00 0.00 4992.00 826.65 5838.66 00:13:22.839 =================================================================================================================== 00:13:22.839 Total : 101443.12 396.26 0.00 0.00 5034.46 826.65 7179.17 00:13:23.099 0 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 74969 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # '[' -z 74969 ']' 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # kill -0 74969 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # uname 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74969 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:23.099 killing process with pid 74969 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74969' 00:13:23.099 Received shutdown signal, test time was about 2.143204 seconds 00:13:23.099 00:13:23.099 Latency(us) 00:13:23.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.099 =================================================================================================================== 00:13:23.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@969 -- # kill 74969 00:13:23.099 23:57:18 blockdev_general.bdev_stat -- common/autotest_common.sh@974 -- # wait 74969 00:13:24.478 23:57:20 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:13:24.478 00:13:24.478 real 0m4.693s 00:13:24.478 user 0m8.773s 00:13:24.478 sys 0m0.396s 00:13:24.478 23:57:20 blockdev_general.bdev_stat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.478 ************************************ 
00:13:24.478 END TEST bdev_stat 00:13:24.478 23:57:20 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:24.478 ************************************ 00:13:24.478 23:57:20 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:13:24.478 23:57:20 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:13:24.478 23:57:20 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:24.478 23:57:20 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:13:24.478 23:57:20 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:24.478 23:57:20 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:24.478 23:57:20 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:13:24.478 23:57:20 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:13:24.478 23:57:20 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:13:24.478 23:57:20 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:13:24.478 00:13:24.478 real 2m22.297s 00:13:24.478 user 5m52.219s 00:13:24.478 sys 0m22.442s 00:13:24.478 23:57:20 blockdev_general -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.478 23:57:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:24.478 ************************************ 00:13:24.478 END TEST blockdev_general 00:13:24.478 ************************************ 00:13:24.478 23:57:20 -- spdk/autotest.sh@194 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:24.478 23:57:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:24.478 23:57:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.478 23:57:20 -- common/autotest_common.sh@10 -- # set +x 00:13:24.478 ************************************ 00:13:24.478 START TEST bdev_raid 00:13:24.478 ************************************ 00:13:24.478 23:57:20 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:24.478 * Looking for test storage... 
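
The bdev_stat run above reduces to three bdev_get_iostat RPC calls: a whole-bdev sample, a per-channel sample, and a second whole-bdev sample, with the test asserting that the summed per-channel read count lands between the two totals (here 193539 <= 196608 <= 200451, since the counters only grow while I/O is in flight). A minimal standalone sketch of that check, assuming a running SPDK target on the default /var/tmp/spdk.sock (not this test's private socket) and a pre-created bdev named Malloc_STAT; the variable names are illustrative, not part of the test suite:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock   # assumption: default RPC socket
    bdev=Malloc_STAT          # assumption: bdev created beforehand
    io1=$($rpc -s $sock bdev_get_iostat -b $bdev | jq -r '.bdevs[0].num_read_ops')
    # -c returns one entry per I/O channel; sum the per-channel read counts.
    per_ch=$($rpc -s $sock bdev_get_iostat -b $bdev -c | jq '[.channels[].num_read_ops] | add')
    io2=$($rpc -s $sock bdev_get_iostat -b $bdev | jq -r '.bdevs[0].num_read_ops')
    # Counters are monotonic, so the channel sum must bracket the two samples.
    [ "$per_ch" -ge "$io1" ] && [ "$per_ch" -le "$io2" ]
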
00:13:24.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:24.478 23:57:20 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:24.478 23:57:20 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:13:24.478 23:57:20 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:24.479 23:57:20 bdev_raid -- bdev/bdev_raid.sh@927 -- # mkdir -p /raidtest 00:13:24.479 23:57:20 bdev_raid -- bdev/bdev_raid.sh@928 -- # trap 'cleanup; exit 1' EXIT 00:13:24.479 23:57:20 bdev_raid -- bdev/bdev_raid.sh@930 -- # base_blocklen=512 00:13:24.479 23:57:20 bdev_raid -- bdev/bdev_raid.sh@932 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:13:24.479 23:57:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:24.479 23:57:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.479 23:57:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:24.479 ************************************ 00:13:24.479 START TEST raid0_resize_superblock_test 00:13:24.479 ************************************ 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=0 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=75104 00:13:24.479 Process raid pid: 75104 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 75104' 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 75104 /var/tmp/spdk-raid.sock 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 75104 ']' 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:24.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.479 23:57:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.479 [2024-07-24 23:57:20.309348] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:13:24.479 [2024-07-24 23:57:20.309536] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.737 [2024-07-24 23:57:20.473518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.996 [2024-07-24 23:57:20.645411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.996 [2024-07-24 23:57:20.817580] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.563 23:57:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.563 23:57:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:25.563 23:57:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:13:26.130 malloc0 00:13:26.130 23:57:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:26.389 [2024-07-24 23:57:22.136056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:26.389 [2024-07-24 23:57:22.136147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.389 [2024-07-24 23:57:22.136181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:13:26.389 [2024-07-24 23:57:22.136217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.389 [2024-07-24 23:57:22.138992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.389 [2024-07-24 23:57:22.139043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:26.389 pt0 00:13:26.389 23:57:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:13:26.649 84c8777c-0342-4b24-9045-42818ab7f629 00:13:26.649 23:57:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:13:26.907 cd51a517-2541-4328-add4-d73a8b008cc2 00:13:26.907 23:57:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:13:27.166 b155fddc-6f9d-4963-99dc-3aca1989c21b 00:13:27.166 23:57:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:13:27.166 23:57:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@884 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:13:27.425 [2024-07-24 23:57:23.042005] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev cd51a517-2541-4328-add4-d73a8b008cc2 is claimed 00:13:27.425 [2024-07-24 23:57:23.042190] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev b155fddc-6f9d-4963-99dc-3aca1989c21b is claimed 00:13:27.425 [2024-07-24 23:57:23.042438] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:13:27.425 [2024-07-24 23:57:23.042478] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:13:27.425 [2024-07-24 23:57:23.042648] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:13:27.425 [2024-07-24 23:57:23.043099] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:13:27.425 [2024-07-24 23:57:23.043132] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000007b80 00:13:27.425 [2024-07-24 23:57:23.043341] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.425 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:13:27.425 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:27.684 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:13:27.685 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:27.685 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:13:27.943 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:13:27.943 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:27.943 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:27.943 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:27.943 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:13:28.201 [2024-07-24 23:57:23.830351] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.201 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:28.201 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:28.201 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 245760 == 245760 )) 00:13:28.201 23:57:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:13:28.201 [2024-07-24 23:57:24.046442] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:28.201 [2024-07-24 23:57:24.046499] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cd51a517-2541-4328-add4-d73a8b008cc2' was resized: old size 131072, new size 204800 00:13:28.201 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:13:28.459 [2024-07-24 23:57:24.290473] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:28.459 [2024-07-24 23:57:24.290518] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b155fddc-6f9d-4963-99dc-3aca1989c21b' was resized: old size 131072, new size 204800 00:13:28.459 [2024-07-24 23:57:24.290556] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 
245760 to 393216 00:13:28.459 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:28.459 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:13:28.718 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:13:28.718 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:28.718 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:13:28.977 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:13:28.977 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:28.977 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # jq '.[].num_blocks' 00:13:28.977 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:28.977 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:29.236 [2024-07-24 23:57:24.982691] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.236 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:29.236 23:57:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:29.236 23:57:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # (( 393216 == 393216 )) 00:13:29.236 23:57:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:13:29.495 [2024-07-24 23:57:25.238595] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:13:29.495 [2024-07-24 23:57:25.238732] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:13:29.495 [2024-07-24 23:57:25.238755] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.495 [2024-07-24 23:57:25.238772] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:13:29.495 [2024-07-24 23:57:25.238927] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.495 [2024-07-24 23:57:25.238980] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.495 [2024-07-24 23:57:25.239003] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name Raid, state offline 00:13:29.495 23:57:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:29.754 [2024-07-24 23:57:25.510580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:29.754 [2024-07-24 23:57:25.510679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.754 [2024-07-24 23:57:25.510714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:13:29.754 [2024-07-24 23:57:25.510733] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.754 [2024-07-24 23:57:25.513097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.754 [2024-07-24 23:57:25.513140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:29.754 [2024-07-24 23:57:25.515280] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cd51a517-2541-4328-add4-d73a8b008cc2 00:13:29.754 [2024-07-24 23:57:25.515359] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev cd51a517-2541-4328-add4-d73a8b008cc2 is claimed 00:13:29.754 [2024-07-24 23:57:25.515533] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b155fddc-6f9d-4963-99dc-3aca1989c21b 00:13:29.754 [2024-07-24 23:57:25.515571] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev b155fddc-6f9d-4963-99dc-3aca1989c21b is claimed 00:13:29.754 [2024-07-24 23:57:25.515772] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b155fddc-6f9d-4963-99dc-3aca1989c21b (2) smaller than existing raid bdev Raid (3) 00:13:29.754 [2024-07-24 23:57:25.515845] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:13:29.754 [2024-07-24 23:57:25.515858] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:13:29.754 [2024-07-24 23:57:25.515973] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:13:29.754 [2024-07-24 23:57:25.516319] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:13:29.754 [2024-07-24 23:57:25.516354] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000008a80 00:13:29.754 pt0 00:13:29.754 [2024-07-24 23:57:25.516509] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.754 23:57:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:29.754 23:57:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:29.754 23:57:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:29.754 23:57:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # jq '.[].num_blocks' 00:13:30.014 [2024-07-24 23:57:25.723796] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # (( 393216 == 393216 )) 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 75104 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 75104 ']' 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 75104 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 75104 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:30.014 killing process with pid 75104 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75104' 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 75104 00:13:30.014 [2024-07-24 23:57:25.774976] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.014 23:57:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 75104 00:13:30.014 [2024-07-24 23:57:25.775073] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.014 [2024-07-24 23:57:25.775134] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.014 [2024-07-24 23:57:25.775154] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name Raid, state offline 00:13:31.412 [2024-07-24 23:57:26.877441] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.347 23:57:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:13:32.347 00:13:32.347 real 0m7.679s 00:13:32.347 user 0m11.142s 00:13:32.347 sys 0m0.894s 00:13:32.347 23:57:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.347 23:57:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.347 ************************************ 00:13:32.347 END TEST raid0_resize_superblock_test 00:13:32.348 ************************************ 00:13:32.348 23:57:27 bdev_raid -- bdev/bdev_raid.sh@933 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:13:32.348 23:57:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:32.348 23:57:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.348 23:57:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.348 ************************************ 00:13:32.348 START TEST raid1_resize_superblock_test 00:13:32.348 ************************************ 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=1 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=75246 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 75246' 00:13:32.348 Process raid pid: 75246 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 75246 /var/tmp/spdk-raid.sock 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 75246 ']' 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:32.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.348 23:57:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.348 [2024-07-24 23:57:28.043037] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:13:32.348 [2024-07-24 23:57:28.043183] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.348 [2024-07-24 23:57:28.204159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.607 [2024-07-24 23:57:28.372606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.865 [2024-07-24 23:57:28.536199] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.124 23:57:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.124 23:57:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:33.124 23:57:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:13:34.059 malloc0 00:13:34.059 23:57:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:34.059 [2024-07-24 23:57:29.857439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:34.059 [2024-07-24 23:57:29.857552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.059 [2024-07-24 23:57:29.857588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:13:34.059 [2024-07-24 23:57:29.857610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.059 [2024-07-24 23:57:29.860480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.059 [2024-07-24 23:57:29.860542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:34.059 pt0 00:13:34.059 23:57:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:13:34.318 fde0f8b7-8b14-4f07-a029-d08ce3e47d9d 00:13:34.318 23:57:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:13:34.576 ab79a39c-9a40-4adb-b0c1-95e8325fb29c 00:13:34.576 23:57:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:13:34.833 640b31f8-278f-451b-9962-38c84f4e8288 00:13:34.834 23:57:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 
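
The raid1 sequence that follows repeats the shape of the raid0 test above: two 64 MiB lvols on a passthru-wrapped malloc bdev are assembled into a raid bdev with an on-disk superblock (-s), both lvols are grown to 100 MiB, and the raid bdev's num_blocks is compared before and after. Condensed to a sketch against this test's private socket, with error handling omitted:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create -b malloc0 512 512
    $rpc bdev_passthru_create -b malloc0 -p pt0
    $rpc bdev_lvol_create_lvstore pt0 lvs0
    $rpc bdev_lvol_create -l lvs0 lvol0 64
    $rpc bdev_lvol_create -l lvs0 lvol1 64
    $rpc bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s   # -s = write superblock
    $rpc bdev_lvol_resize lvs0/lvol0 100
    $rpc bdev_lvol_resize lvs0/lvol1 100
    # raid1 mirrors, so usable size tracks a single leg (minus superblock
    # metadata): 122880 -> 196608 blocks in this run.
    $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'
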
00:13:34.834 23:57:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:13:35.092 [2024-07-24 23:57:30.906296] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev ab79a39c-9a40-4adb-b0c1-95e8325fb29c is claimed 00:13:35.092 [2024-07-24 23:57:30.906481] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev 640b31f8-278f-451b-9962-38c84f4e8288 is claimed 00:13:35.092 [2024-07-24 23:57:30.906751] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:13:35.092 [2024-07-24 23:57:30.906786] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:13:35.092 [2024-07-24 23:57:30.906976] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:13:35.092 [2024-07-24 23:57:30.907447] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:13:35.092 [2024-07-24 23:57:30.907490] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000007b80 00:13:35.092 [2024-07-24 23:57:30.907683] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.092 23:57:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:35.092 23:57:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:13:35.351 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:13:35.351 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:13:35.351 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:35.610 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:13:35.610 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:35.610 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:35.610 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:35.610 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:13:35.869 [2024-07-24 23:57:31.618658] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.869 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:35.869 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:35.869 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 122880 == 122880 )) 00:13:35.869 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:13:36.128 [2024-07-24 23:57:31.826677] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:36.128 [2024-07-24 23:57:31.826725] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ab79a39c-9a40-4adb-b0c1-95e8325fb29c' was 
resized: old size 131072, new size 204800 00:13:36.128 23:57:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:13:36.387 [2024-07-24 23:57:32.050783] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:36.387 [2024-07-24 23:57:32.050837] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '640b31f8-278f-451b-9962-38c84f4e8288' was resized: old size 131072, new size 204800 00:13:36.387 [2024-07-24 23:57:32.050883] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:13:36.388 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:36.388 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:13:36.646 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:13:36.646 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:13:36.646 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:36.904 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:13:36.904 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:36.904 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:36.904 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:36.904 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # jq '.[].num_blocks' 00:13:37.162 [2024-07-24 23:57:32.795017] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.163 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:37.163 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:37.163 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # (( 196608 == 196608 )) 00:13:37.163 23:57:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:13:37.163 [2024-07-24 23:57:33.014797] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:13:37.163 [2024-07-24 23:57:33.014924] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:13:37.163 [2024-07-24 23:57:33.014991] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:13:37.163 [2024-07-24 23:57:33.015221] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.163 [2024-07-24 23:57:33.015495] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.163 [2024-07-24 23:57:33.015599] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.163 [2024-07-24 23:57:33.015625] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000007b80 name Raid, state offline 00:13:37.421 23:57:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:37.421 [2024-07-24 23:57:33.274836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:37.421 [2024-07-24 23:57:33.274957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.421 [2024-07-24 23:57:33.275008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:13:37.421 [2024-07-24 23:57:33.275027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.421 [2024-07-24 23:57:33.277506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.421 [2024-07-24 23:57:33.277585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:37.421 pt0 00:13:37.421 [2024-07-24 23:57:33.279947] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ab79a39c-9a40-4adb-b0c1-95e8325fb29c 00:13:37.421 [2024-07-24 23:57:33.280019] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev ab79a39c-9a40-4adb-b0c1-95e8325fb29c is claimed 00:13:37.421 [2024-07-24 23:57:33.280180] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 640b31f8-278f-451b-9962-38c84f4e8288 00:13:37.421 [2024-07-24 23:57:33.280231] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev 640b31f8-278f-451b-9962-38c84f4e8288 is claimed 00:13:37.421 [2024-07-24 23:57:33.280385] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 640b31f8-278f-451b-9962-38c84f4e8288 (2) smaller than existing raid bdev Raid (3) 00:13:37.421 [2024-07-24 23:57:33.280445] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:13:37.421 [2024-07-24 23:57:33.280458] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:37.421 [2024-07-24 23:57:33.280574] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:13:37.421 [2024-07-24 23:57:33.280986] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:13:37.421 [2024-07-24 23:57:33.281039] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000008a80 00:13:37.421 [2024-07-24 23:57:33.281232] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # jq '.[].num_blocks' 00:13:37.679 [2024-07-24 23:57:33.479266] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # 
(( 196608 == 196608 )) 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 75246 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 75246 ']' 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 75246 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75246 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.679 killing process with pid 75246 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75246' 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 75246 00:13:37.679 [2024-07-24 23:57:33.531379] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.679 23:57:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 75246 00:13:37.679 [2024-07-24 23:57:33.531479] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.679 [2024-07-24 23:57:33.531547] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.679 [2024-07-24 23:57:33.531567] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name Raid, state offline 00:13:39.068 [2024-07-24 23:57:34.647520] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.002 23:57:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:13:40.002 00:13:40.002 real 0m7.708s 00:13:40.002 user 0m11.169s 00:13:40.002 sys 0m0.929s 00:13:40.002 23:57:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.002 23:57:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.002 ************************************ 00:13:40.002 END TEST raid1_resize_superblock_test 00:13:40.002 ************************************ 00:13:40.002 23:57:35 bdev_raid -- bdev/bdev_raid.sh@935 -- # uname -s 00:13:40.002 23:57:35 bdev_raid -- bdev/bdev_raid.sh@935 -- # '[' Linux = Linux ']' 00:13:40.002 23:57:35 bdev_raid -- bdev/bdev_raid.sh@935 -- # modprobe -n nbd 00:13:40.002 23:57:35 bdev_raid -- bdev/bdev_raid.sh@936 -- # has_nbd=true 00:13:40.002 23:57:35 bdev_raid -- bdev/bdev_raid.sh@937 -- # modprobe nbd 00:13:40.002 23:57:35 bdev_raid -- bdev/bdev_raid.sh@938 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:40.002 23:57:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.002 23:57:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.002 23:57:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.002 ************************************ 00:13:40.002 START TEST raid_function_test_raid0 00:13:40.002 ************************************ 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # 
raid_function_test raid0 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=75386 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 75386' 00:13:40.002 Process raid pid: 75386 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 75386 /var/tmp/spdk-raid.sock 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 75386 ']' 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.002 23:57:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:40.002 [2024-07-24 23:57:35.839935] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
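
Once the raid0 bdev is assembled below, the function test exports it through NBD and verifies it with plain coreutils: write a 2 MiB random pattern, read it back, then punch discards at three hard-coded block offsets and re-compare against a reference file zeroed over the same ranges. Reduced to a sketch using the paths and offsets the log shows; this assumes the nbd kernel module is loaded and /dev/nbd0 is already attached to the raid bdev via nbd_start_disk:

    nbd=/dev/nbd0
    dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
    dd if=/raidtest/raidrandtest of=$nbd bs=512 count=4096 oflag=direct
    blockdev --flushbufs $nbd
    cmp -b -n 2097152 /raidtest/raidrandtest $nbd
    offs=(0 1028 321); nums=(128 2035 456)   # block offsets/lengths from bdev_raid.sh
    for i in 0 1 2; do
        # Zero the same range in the reference file, discard it on the device...
        dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=${offs[$i]} count=${nums[$i]} conv=notrunc
        blkdiscard -o $(( offs[i] * 512 )) -l $(( nums[i] * 512 )) $nbd
        blockdev --flushbufs $nbd
        # ...then the device must still match the reference byte-for-byte.
        cmp -b -n 2097152 /raidtest/raidrandtest $nbd
    done
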
00:13:40.002 [2024-07-24 23:57:35.840113] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.260 [2024-07-24 23:57:36.015818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.518 [2024-07-24 23:57:36.184118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.518 [2024-07-24 23:57:36.356213] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.083 23:57:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.083 23:57:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:13:41.083 23:57:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:13:41.083 23:57:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:13:41.083 23:57:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:41.083 23:57:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:13:41.083 23:57:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:41.340 [2024-07-24 23:57:37.112679] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:41.340 [2024-07-24 23:57:37.114973] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:41.340 Base_1 00:13:41.340 Base_2 00:13:41.340 [2024-07-24 23:57:37.115270] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:13:41.340 [2024-07-24 23:57:37.115299] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:41.340 [2024-07-24 23:57:37.115456] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:13:41.341 [2024-07-24 23:57:37.115847] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:13:41.341 [2024-07-24 23:57:37.115864] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000007280 00:13:41.341 [2024-07-24 23:57:37.116040] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.341 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:41.341 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:41.341 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:41.598 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:41.598 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:41.598 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:41.598 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:41.598 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:41.598 23:57:37 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.598 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:41.598 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.599 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:13:41.599 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.599 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.599 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:41.856 [2024-07-24 23:57:37.576836] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:13:41.856 /dev/nbd0 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.856 1+0 records in 00:13:41.856 1+0 records out 00:13:41.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756913 s, 5.4 MB/s 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:41.856 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_get_disks 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:42.113 { 00:13:42.113 "nbd_device": "/dev/nbd0", 00:13:42.113 "bdev_name": "raid" 00:13:42.113 } 00:13:42.113 ]' 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:42.113 { 00:13:42.113 "nbd_device": "/dev/nbd0", 00:13:42.113 "bdev_name": "raid" 00:13:42.113 } 00:13:42.113 ]' 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:42.113 4096+0 records in 00:13:42.113 4096+0 records out 00:13:42.113 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0242905 s, 86.3 MB/s 00:13:42.113 23:57:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:42.679 4096+0 records in 00:13:42.679 4096+0 records out 00:13:42.679 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.32776 s, 6.4 MB/s 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:42.679 128+0 records in 00:13:42.679 128+0 records out 00:13:42.679 65536 bytes (66 kB, 64 KiB) copied, 0.000248425 s, 264 MB/s 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:42.679 2035+0 records in 00:13:42.679 2035+0 records out 00:13:42.679 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00357655 s, 291 MB/s 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:42.679 456+0 records in 00:13:42.679 456+0 records out 00:13:42.679 233472 bytes (233 kB, 228 KiB) copied, 0.00086447 s, 270 MB/s 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev 
--flushbufs /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.679 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:42.936 [2024-07-24 23:57:38.641618] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.937 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:42.937 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.937 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.937 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.937 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.937 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.937 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:13:42.937 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.937 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:42.937 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:42.937 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 0 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 75386 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 75386 ']' 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 75386 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75386 00:13:43.195 killing process with pid 75386 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75386' 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 75386 00:13:43.195 [2024-07-24 23:57:38.915924] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.195 23:57:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 75386 00:13:43.195 [2024-07-24 23:57:38.916044] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.195 [2024-07-24 23:57:38.916107] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.195 [2024-07-24 23:57:38.916127] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid, state offline 00:13:43.454 [2024-07-24 23:57:39.067283] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.391 ************************************ 00:13:44.391 END TEST raid_function_test_raid0 00:13:44.391 ************************************ 00:13:44.391 23:57:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:13:44.391 00:13:44.391 real 0m4.352s 00:13:44.391 user 0m5.549s 00:13:44.391 sys 0m0.927s 00:13:44.391 23:57:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.391 23:57:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 23:57:40 bdev_raid -- bdev/bdev_raid.sh@939 -- # run_test raid_function_test_concat raid_function_test concat 00:13:44.391 23:57:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:44.391 23:57:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.391 23:57:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 ************************************ 00:13:44.391 START TEST raid_function_test_concat 00:13:44.391 ************************************ 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local 
nbd=/dev/nbd0 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:44.391 Process raid pid: 75538 00:13:44.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=75538 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 75538' 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 75538 /var/tmp/spdk-raid.sock 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 75538 ']' 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.391 23:57:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:44.391 [2024-07-24 23:57:40.230256] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:13:44.391 [2024-07-24 23:57:40.230407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.650 [2024-07-24 23:57:40.394193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.908 [2024-07-24 23:57:40.577679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.908 [2024-07-24 23:57:40.740355] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.475 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.475 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:13:45.475 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:13:45.475 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:13:45.475 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:45.475 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:13:45.475 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:45.735 [2024-07-24 23:57:41.472634] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:45.735 [2024-07-24 23:57:41.474730] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:45.735 [2024-07-24 23:57:41.474838] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:13:45.735 [2024-07-24 23:57:41.474859] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:45.735 [2024-07-24 23:57:41.475011] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:13:45.735 [2024-07-24 23:57:41.475404] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:13:45.735 [2024-07-24 23:57:41.475429] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000007280 00:13:45.735 [2024-07-24 23:57:41.475623] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.735 Base_1 00:13:45.735 Base_2 00:13:45.735 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:45.735 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:45.735 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.027 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:46.287 [2024-07-24 23:57:41.916815] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:13:46.287 /dev/nbd0 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:13:46.287 23:57:41 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.287 1+0 records in 00:13:46.287 1+0 records out 00:13:46.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321111 s, 12.8 MB/s 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:46.287 23:57:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:46.546 { 00:13:46.546 "nbd_device": "/dev/nbd0", 00:13:46.546 "bdev_name": "raid" 00:13:46.546 } 00:13:46.546 ]' 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:46.546 { 00:13:46.546 "nbd_device": "/dev/nbd0", 00:13:46.546 "bdev_name": "raid" 00:13:46.546 } 00:13:46.546 ]' 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:46.546 23:57:42 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:46.546 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:46.547 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:13:46.547 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:46.547 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:46.547 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:46.547 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:46.547 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:46.547 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:46.547 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:46.547 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:46.547 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:46.547 4096+0 records in 00:13:46.547 4096+0 records out 00:13:46.547 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0163326 s, 128 MB/s 00:13:46.547 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:46.806 4096+0 records in 00:13:46.806 4096+0 records out 00:13:46.806 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.333935 s, 6.3 MB/s 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:46.806 128+0 records in 00:13:46.806 128+0 records out 00:13:46.806 65536 bytes (66 kB, 64 KiB) copied, 0.0009134 s, 71.7 MB/s 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat 
-- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:46.806 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:47.066 2035+0 records in 00:13:47.066 2035+0 records out 00:13:47.066 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00614756 s, 169 MB/s 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:47.066 456+0 records in 00:13:47.066 456+0 records out 00:13:47.066 233472 bytes (233 kB, 228 KiB) copied, 0.00163021 s, 143 MB/s 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:47.066 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:47.326 [2024-07-24 23:57:42.934692] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.326 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:47.326 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:47.326 
23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:47.326 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:47.326 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:47.326 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:13:47.326 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:13:47.326 23:57:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:47.326 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:47.326 23:57:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 75538 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 75538 ']' 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 75538 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:47.326 23:57:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75538 00:13:47.586 killing process with pid 75538 00:13:47.586 23:57:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:47.586 23:57:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:47.586 23:57:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75538' 00:13:47.586 23:57:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 75538 00:13:47.586 [2024-07-24 23:57:43.201776] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:47.586 23:57:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 75538 00:13:47.586 [2024-07-24 23:57:43.201931] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:13:47.586 [2024-07-24 23:57:43.201997] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.586 [2024-07-24 23:57:43.202017] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid, state offline 00:13:47.586 [2024-07-24 23:57:43.349737] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.963 23:57:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:13:48.963 00:13:48.963 real 0m4.231s 00:13:48.963 user 0m5.313s 00:13:48.963 sys 0m0.941s 00:13:48.963 23:57:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.963 ************************************ 00:13:48.963 END TEST raid_function_test_concat 00:13:48.963 ************************************ 00:13:48.963 23:57:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:48.963 23:57:44 bdev_raid -- bdev/bdev_raid.sh@942 -- # run_test raid0_resize_test raid_resize_test 0 00:13:48.963 23:57:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:48.963 23:57:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.963 23:57:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:48.963 ************************************ 00:13:48.963 START TEST raid0_resize_test 00:13:48.963 ************************************ 00:13:48.963 23:57:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:13:48.963 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=0 00:13:48.963 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:13:48.963 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:13:48.963 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:13:48.963 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:13:48.963 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:13:48.963 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:13:48.963 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:13:48.963 Process raid pid: 75679 00:13:48.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
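The raid_unmap_data_verify pass that both function tests above just executed follows one fixed pattern: fill the exported NBD device with random data, confirm it matches a reference file, then for each (offset, length) pair punch the same hole in both copies and re-compare. A minimal sketch, reconstructed from the xtrace above; the /raidtest/raidrandtest path, the 4096-block size, and the offset/length pairs are taken from the trace, and the 512-byte logical block size is derived the same way the script derives it:

    nbd=/dev/nbd0
    blksize=$(lsblk -o LOG-SEC "$nbd" | grep -v LOG-SEC | cut -d ' ' -f 5)   # 512 in this run
    dd if=/dev/urandom of=/raidtest/raidrandtest bs="$blksize" count=4096
    dd if=/raidtest/raidrandtest of="$nbd" bs="$blksize" count=4096 oflag=direct
    blockdev --flushbufs "$nbd"
    cmp -b -n $((4096 * blksize)) /raidtest/raidrandtest "$nbd"

    unmap_blk_offs=(0 1028 321)
    unmap_blk_nums=(128 2035 456)
    for i in 0 1 2; do
        # Zero the range in the reference file, discard the same byte range on
        # the device, flush, and check the two images still compare equal.
        dd if=/dev/zero of=/raidtest/raidrandtest bs="$blksize" \
            seek="${unmap_blk_offs[i]}" count="${unmap_blk_nums[i]}" conv=notrunc
        blkdiscard -o $((unmap_blk_offs[i] * blksize)) -l $((unmap_blk_nums[i] * blksize)) "$nbd"
        blockdev --flushbufs "$nbd"
        cmp -b -n $((4096 * blksize)) /raidtest/raidrandtest "$nbd"
    done

This is why each iteration in the log shows a dd into the file, a blkdiscard at the matching byte offset (for example 1028 blocks * 512 = 526336), a flush, and a full-length cmp.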
00:13:48.964 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=75679 00:13:48.964 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 75679' 00:13:48.964 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:48.964 23:57:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 75679 /var/tmp/spdk-raid.sock 00:13:48.964 23:57:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 75679 ']' 00:13:48.964 23:57:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:48.964 23:57:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.964 23:57:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:48.964 23:57:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.964 23:57:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.964 [2024-07-24 23:57:44.520589] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:13:48.964 [2024-07-24 23:57:44.520991] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.964 [2024-07-24 23:57:44.679759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.223 [2024-07-24 23:57:44.861117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.223 [2024-07-24 23:57:45.025125] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.790 23:57:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.790 23:57:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:13:49.790 23:57:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:50.049 Base_1 00:13:50.049 23:57:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:13:50.308 Base_2 00:13:50.308 23:57:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 0 -eq 0 ']' 00:13:50.308 23:57:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:13:50.567 [2024-07-24 23:57:46.185678] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:50.567 [2024-07-24 23:57:46.187875] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:50.567 [2024-07-24 23:57:46.187990] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:13:50.567 [2024-07-24 23:57:46.188011] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:50.567 [2024-07-24 23:57:46.188155] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:13:50.567 [2024-07-24 23:57:46.188512] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:13:50.567 [2024-07-24 23:57:46.188536] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000007280 00:13:50.567 [2024-07-24 23:57:46.188729] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.567 23:57:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:50.567 [2024-07-24 23:57:46.393716] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:50.567 [2024-07-24 23:57:46.393775] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:50.567 true 00:13:50.567 23:57:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:50.567 23:57:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:13:50.826 [2024-07-24 23:57:46.609965] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:50.826 23:57:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=131072 00:13:50.826 23:57:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=64 00:13:50.826 23:57:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 0 -eq 0 ']' 00:13:50.826 23:57:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # expected_size=64 00:13:50.826 23:57:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 64 '!=' 64 ']' 00:13:50.826 23:57:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:51.085 [2024-07-24 23:57:46.817788] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:51.085 [2024-07-24 23:57:46.818018] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:51.085 [2024-07-24 23:57:46.818285] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:13:51.085 true 00:13:51.085 23:57:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:51.085 23:57:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:13:51.345 [2024-07-24 23:57:47.034075] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=262144 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=128 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 0 -eq 0 ']' 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@393 -- # expected_size=128 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 128 '!=' 128 ']' 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 75679 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 75679 ']' 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 75679 00:13:51.345 
23:57:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75679 00:13:51.345 killing process with pid 75679 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75679' 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 75679 00:13:51.345 [2024-07-24 23:57:47.084146] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.345 23:57:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 75679 00:13:51.345 [2024-07-24 23:57:47.084257] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.345 [2024-07-24 23:57:47.084313] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.345 [2024-07-24 23:57:47.084326] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Raid, state offline 00:13:51.345 [2024-07-24 23:57:47.085043] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.282 23:57:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:13:52.282 00:13:52.282 real 0m3.678s 00:13:52.282 user 0m5.180s 00:13:52.282 sys 0m0.479s 00:13:52.282 23:57:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.282 23:57:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.282 ************************************ 00:13:52.282 END TEST raid0_resize_test 00:13:52.282 ************************************ 00:13:52.542 23:57:48 bdev_raid -- bdev/bdev_raid.sh@943 -- # run_test raid1_resize_test raid_resize_test 1 00:13:52.542 23:57:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:52.542 23:57:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:52.542 23:57:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.542 ************************************ 00:13:52.542 START TEST raid1_resize_test 00:13:52.542 ************************************ 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=1 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=75756 
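Condensing the raid0_resize_test that just finished into its RPC calls makes the pass criterion easier to see: a raid0 bdev only grows once every base bdev has grown. A sketch of the sequence, reconstructed from the xtrace; rpc.py and the -s /var/tmp/spdk-raid.sock socket are as in the trace, while the $rpc shorthand is introduced here purely for readability:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

    $rpc bdev_null_create Base_1 32 512                  # two 32 MiB bases, 512-byte blocks
    $rpc bdev_null_create Base_2 32 512
    $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

    $rpc bdev_null_resize Base_1 64                      # one base grows...
    $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'    # ...still 131072 (64 MiB)

    $rpc bdev_null_resize Base_2 64                      # both bases grown
    $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'    # now 262144 (128 MiB)

The two num_blocks values, 131072 and then 262144, are exactly the blkcnt assignments captured in the xtrace above.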
00:13:52.542 Process raid pid: 75756 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 75756' 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 75756 /var/tmp/spdk-raid.sock 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 75756 ']' 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.542 23:57:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.542 [2024-07-24 23:57:48.260973] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:13:52.542 [2024-07-24 23:57:48.261146] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.802 [2024-07-24 23:57:48.434318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.802 [2024-07-24 23:57:48.607718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.122 [2024-07-24 23:57:48.774734] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.410 23:57:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:53.410 23:57:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:13:53.410 23:57:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:53.670 Base_1 00:13:53.670 23:57:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:13:53.929 Base_2 00:13:53.929 23:57:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 1 -eq 0 ']' 00:13:53.929 23:57:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@367 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid 00:13:54.189 [2024-07-24 23:57:49.809880] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:54.189 [2024-07-24 23:57:49.811947] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:54.189 [2024-07-24 23:57:49.812028] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:13:54.189 [2024-07-24 23:57:49.812045] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:54.189 [2024-07-24 23:57:49.812177] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:13:54.189 
[2024-07-24 23:57:49.812503] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:13:54.189 [2024-07-24 23:57:49.812519] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000007280 00:13:54.189 [2024-07-24 23:57:49.812686] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.189 23:57:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:54.189 [2024-07-24 23:57:50.021990] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:54.189 [2024-07-24 23:57:50.022042] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:54.189 true 00:13:54.189 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:54.189 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:13:54.449 [2024-07-24 23:57:50.234167] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.449 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=65536 00:13:54.449 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=32 00:13:54.449 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 1 -eq 0 ']' 00:13:54.449 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@379 -- # expected_size=32 00:13:54.449 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 32 '!=' 32 ']' 00:13:54.449 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:54.709 [2024-07-24 23:57:50.454101] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:54.709 [2024-07-24 23:57:50.454154] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:54.709 [2024-07-24 23:57:50.454211] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:13:54.709 true 00:13:54.709 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:54.709 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:13:54.969 [2024-07-24 23:57:50.722367] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=131072 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=64 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 1 -eq 0 ']' 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@395 -- # expected_size=64 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 64 '!=' 64 ']' 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 75756 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 75756 ']' 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 
75756 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75756 00:13:54.969 killing process with pid 75756 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75756' 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 75756 00:13:54.969 [2024-07-24 23:57:50.776760] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:54.969 23:57:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 75756 00:13:54.969 [2024-07-24 23:57:50.776889] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.969 [2024-07-24 23:57:50.777491] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.969 [2024-07-24 23:57:50.777519] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Raid, state offline 00:13:54.969 [2024-07-24 23:57:50.777683] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.347 23:57:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:13:56.347 00:13:56.347 real 0m3.640s 00:13:56.347 user 0m5.102s 00:13:56.347 sys 0m0.486s 00:13:56.347 23:57:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.347 23:57:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.347 ************************************ 00:13:56.347 END TEST raid1_resize_test 00:13:56.347 ************************************ 00:13:56.347 23:57:51 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:13:56.347 23:57:51 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:13:56.347 23:57:51 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:56.347 23:57:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:56.347 23:57:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.347 23:57:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.347 ************************************ 00:13:56.347 START TEST raid_state_function_test 00:13:56.347 ************************************ 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
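The raid1_resize_test that just ended runs the same RPC sequence as the raid0 version but expects different sizes, because a mirror's capacity is bounded by its smallest member while raid0 capacity is the sum of its members. The block counts it checks (65536, then 131072, visible in the jq output above) fall out of that directly; a small worked sketch using the values from the trace:

    blksize=512; base_mb=32; new_mb=64

    # raid1: after only Base_1 grows, Base_2 still caps the mirror at 32 MiB
    echo $(( base_mb * 1024 * 1024 / blksize ))      # 65536 blocks
    # raid1: once both bases grow, the mirror follows to 64 MiB
    echo $(( new_mb * 1024 * 1024 / blksize ))       # 131072 blocks
    # raid0, for comparison: both grown bases sum to 128 MiB
    echo $(( 2 * new_mb * 1024 * 1024 / blksize ))   # 262144 blocks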
00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.347 Process raid pid: 75835 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=75835 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 75835' 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 75835 /var/tmp/spdk-raid.sock 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75835 ']' 00:13:56.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.347 23:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.347 [2024-07-24 23:57:51.961123] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
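The raid_state_function_test starting here exercises raid state transitions rather than I/O: Existed_Raid is created while its base bdevs do not exist yet, and the test verifies it sits in the "configuring" state until every base is claimed. A minimal sketch of that check, using the same RPCs that appear in the trace; the $rpc shorthand is again illustrative, and the trailing .state filter is an assumed extension of the jq expression the script uses:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

    # Create the raid before BaseBdev1/BaseBdev2 exist; it cannot go online yet.
    $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    state=$($rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [ "$state" = configuring ]                    # 0 of 2 bases discovered

    # Adding the first base still leaves it configuring (1 of 2 discovered).
    $rpc bdev_malloc_create 32 512 -b BaseBdev1

The JSON dumped below, with "state": "configuring" and "num_base_bdevs_discovered" going from 0 to 1, is this assertion being evaluated.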
00:13:56.347 [2024-07-24 23:57:51.961551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:56.347 [2024-07-24 23:57:52.138781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:56.606 [2024-07-24 23:57:52.310526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:56.606 [2024-07-24 23:57:52.475077] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:57.173 23:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:57.174 23:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:13:57.174 23:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:57.433 [2024-07-24 23:57:53.119944] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:57.433 [2024-07-24 23:57:53.120024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:57.433 [2024-07-24 23:57:53.120039] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:57.433 [2024-07-24 23:57:53.120053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:57.433 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:57.691 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:13:57.691 "name": "Existed_Raid",
00:13:57.691 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:57.691 "strip_size_kb": 64,
00:13:57.691 "state": "configuring",
00:13:57.691 "raid_level": "raid0",
00:13:57.691 "superblock": false,
00:13:57.691 "num_base_bdevs": 2,
00:13:57.691 "num_base_bdevs_discovered": 0,
00:13:57.691 "num_base_bdevs_operational": 2,
00:13:57.691 "base_bdevs_list": [
00:13:57.691 {
00:13:57.691 "name": "BaseBdev1",
00:13:57.691 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:57.691 "is_configured": false,
00:13:57.691 "data_offset": 0,
00:13:57.691 "data_size": 0
00:13:57.691 },
00:13:57.691 {
00:13:57.691 "name": "BaseBdev2",
00:13:57.691 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:57.691 "is_configured": false,
00:13:57.691 "data_offset": 0,
00:13:57.691 "data_size": 0
00:13:57.691 }
00:13:57.691 ]
00:13:57.691 }'
00:13:57.691 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:13:57.691 23:57:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.949 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:58.207 [2024-07-24 23:57:53.888031] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:58.207 [2024-07-24 23:57:53.888088] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring
00:13:58.207 23:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:58.466 [2024-07-24 23:57:54.152108] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:58.466 [2024-07-24 23:57:54.152186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:58.466 [2024-07-24 23:57:54.152200] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:58.466 [2024-07-24 23:57:54.152215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:58.466 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:13:58.725 [2024-07-24 23:57:54.436269] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:58.725 BaseBdev1
00:13:58.725 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:13:58.725 23:57:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:13:58.725 23:57:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:13:58.725 23:57:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:13:58.725 23:57:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:13:58.725 23:57:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:13:58.725 23:57:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:58.984 23:57:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:58.984 [
00:13:58.984 {
00:13:58.984 "name": "BaseBdev1",
00:13:58.984 "aliases": [
00:13:58.984 "4a979c33-8e8e-420e-9083-0d996f021dc5"
00:13:58.984 ],
00:13:58.984 "product_name": "Malloc disk",
00:13:58.984 "block_size": 512,
00:13:58.984 "num_blocks": 65536, 00:13:58.984 "uuid": "4a979c33-8e8e-420e-9083-0d996f021dc5", 00:13:58.984 "assigned_rate_limits": { 00:13:58.984 "rw_ios_per_sec": 0, 00:13:58.984 "rw_mbytes_per_sec": 0, 00:13:58.984 "r_mbytes_per_sec": 0, 00:13:58.984 "w_mbytes_per_sec": 0 00:13:58.984 }, 00:13:58.984 "claimed": true, 00:13:58.984 "claim_type": "exclusive_write", 00:13:58.984 "zoned": false, 00:13:58.984 "supported_io_types": { 00:13:58.984 "read": true, 00:13:58.984 "write": true, 00:13:58.984 "unmap": true, 00:13:58.984 "flush": true, 00:13:58.984 "reset": true, 00:13:58.984 "nvme_admin": false, 00:13:58.984 "nvme_io": false, 00:13:58.984 "nvme_io_md": false, 00:13:58.984 "write_zeroes": true, 00:13:58.984 "zcopy": true, 00:13:58.984 "get_zone_info": false, 00:13:58.984 "zone_management": false, 00:13:58.984 "zone_append": false, 00:13:58.984 "compare": false, 00:13:58.984 "compare_and_write": false, 00:13:58.984 "abort": true, 00:13:58.984 "seek_hole": false, 00:13:58.984 "seek_data": false, 00:13:58.984 "copy": true, 00:13:58.984 "nvme_iov_md": false 00:13:58.984 }, 00:13:58.984 "memory_domains": [ 00:13:58.984 { 00:13:58.984 "dma_device_id": "system", 00:13:58.984 "dma_device_type": 1 00:13:58.984 }, 00:13:58.984 { 00:13:58.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.984 "dma_device_type": 2 00:13:58.984 } 00:13:58.984 ], 00:13:58.984 "driver_specific": {} 00:13:58.984 } 00:13:58.984 ] 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.243 23:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.243 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:59.243 "name": "Existed_Raid", 00:13:59.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.243 "strip_size_kb": 64, 00:13:59.243 "state": "configuring", 00:13:59.243 "raid_level": "raid0", 00:13:59.243 "superblock": false, 00:13:59.243 "num_base_bdevs": 2, 00:13:59.243 "num_base_bdevs_discovered": 1, 00:13:59.243 "num_base_bdevs_operational": 2, 00:13:59.243 "base_bdevs_list": [ 00:13:59.243 { 00:13:59.243 "name": 
"BaseBdev1", 00:13:59.243 "uuid": "4a979c33-8e8e-420e-9083-0d996f021dc5", 00:13:59.243 "is_configured": true, 00:13:59.243 "data_offset": 0, 00:13:59.243 "data_size": 65536 00:13:59.243 }, 00:13:59.243 { 00:13:59.243 "name": "BaseBdev2", 00:13:59.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.243 "is_configured": false, 00:13:59.243 "data_offset": 0, 00:13:59.243 "data_size": 0 00:13:59.243 } 00:13:59.243 ] 00:13:59.243 }' 00:13:59.243 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.243 23:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.811 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:59.811 [2024-07-24 23:57:55.660651] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.811 [2024-07-24 23:57:55.660714] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:00.069 [2024-07-24 23:57:55.864723] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.069 [2024-07-24 23:57:55.866762] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.069 [2024-07-24 23:57:55.866826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.069 23:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.327 23:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.327 "name": "Existed_Raid", 
00:14:00.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.327 "strip_size_kb": 64, 00:14:00.327 "state": "configuring", 00:14:00.327 "raid_level": "raid0", 00:14:00.327 "superblock": false, 00:14:00.327 "num_base_bdevs": 2, 00:14:00.327 "num_base_bdevs_discovered": 1, 00:14:00.327 "num_base_bdevs_operational": 2, 00:14:00.327 "base_bdevs_list": [ 00:14:00.327 { 00:14:00.327 "name": "BaseBdev1", 00:14:00.327 "uuid": "4a979c33-8e8e-420e-9083-0d996f021dc5", 00:14:00.327 "is_configured": true, 00:14:00.327 "data_offset": 0, 00:14:00.327 "data_size": 65536 00:14:00.327 }, 00:14:00.327 { 00:14:00.327 "name": "BaseBdev2", 00:14:00.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.327 "is_configured": false, 00:14:00.327 "data_offset": 0, 00:14:00.327 "data_size": 0 00:14:00.327 } 00:14:00.327 ] 00:14:00.327 }' 00:14:00.327 23:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.327 23:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.586 23:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:01.153 [2024-07-24 23:57:56.732482] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.153 [2024-07-24 23:57:56.732554] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:14:01.153 [2024-07-24 23:57:56.732568] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:01.153 [2024-07-24 23:57:56.732729] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:14:01.153 [2024-07-24 23:57:56.733169] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:14:01.153 [2024-07-24 23:57:56.733195] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:14:01.153 BaseBdev2 00:14:01.153 [2024-07-24 23:57:56.733564] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.153 23:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:01.153 23:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:01.153 23:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:01.153 23:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:01.153 23:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:01.153 23:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:01.153 23:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.153 23:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:01.412 [ 00:14:01.412 { 00:14:01.412 "name": "BaseBdev2", 00:14:01.412 "aliases": [ 00:14:01.412 "7786f029-bc6b-451e-aee5-fc4ff9e1e80a" 00:14:01.412 ], 00:14:01.412 "product_name": "Malloc disk", 00:14:01.412 "block_size": 512, 00:14:01.412 "num_blocks": 65536, 00:14:01.412 "uuid": "7786f029-bc6b-451e-aee5-fc4ff9e1e80a", 
00:14:01.412 "assigned_rate_limits": { 00:14:01.412 "rw_ios_per_sec": 0, 00:14:01.412 "rw_mbytes_per_sec": 0, 00:14:01.412 "r_mbytes_per_sec": 0, 00:14:01.412 "w_mbytes_per_sec": 0 00:14:01.412 }, 00:14:01.412 "claimed": true, 00:14:01.412 "claim_type": "exclusive_write", 00:14:01.412 "zoned": false, 00:14:01.412 "supported_io_types": { 00:14:01.412 "read": true, 00:14:01.412 "write": true, 00:14:01.412 "unmap": true, 00:14:01.412 "flush": true, 00:14:01.412 "reset": true, 00:14:01.412 "nvme_admin": false, 00:14:01.412 "nvme_io": false, 00:14:01.412 "nvme_io_md": false, 00:14:01.412 "write_zeroes": true, 00:14:01.412 "zcopy": true, 00:14:01.412 "get_zone_info": false, 00:14:01.412 "zone_management": false, 00:14:01.412 "zone_append": false, 00:14:01.412 "compare": false, 00:14:01.412 "compare_and_write": false, 00:14:01.412 "abort": true, 00:14:01.412 "seek_hole": false, 00:14:01.412 "seek_data": false, 00:14:01.412 "copy": true, 00:14:01.412 "nvme_iov_md": false 00:14:01.412 }, 00:14:01.412 "memory_domains": [ 00:14:01.412 { 00:14:01.412 "dma_device_id": "system", 00:14:01.412 "dma_device_type": 1 00:14:01.412 }, 00:14:01.412 { 00:14:01.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.412 "dma_device_type": 2 00:14:01.412 } 00:14:01.412 ], 00:14:01.412 "driver_specific": {} 00:14:01.412 } 00:14:01.412 ] 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.412 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.670 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.670 "name": "Existed_Raid", 00:14:01.670 "uuid": "ba1e49f7-f79d-4bcd-9bc1-61f63470dcbe", 00:14:01.670 "strip_size_kb": 64, 00:14:01.670 "state": "online", 00:14:01.670 "raid_level": "raid0", 00:14:01.670 "superblock": false, 00:14:01.670 "num_base_bdevs": 2, 00:14:01.670 "num_base_bdevs_discovered": 2, 00:14:01.670 
"num_base_bdevs_operational": 2, 00:14:01.670 "base_bdevs_list": [ 00:14:01.670 { 00:14:01.670 "name": "BaseBdev1", 00:14:01.670 "uuid": "4a979c33-8e8e-420e-9083-0d996f021dc5", 00:14:01.670 "is_configured": true, 00:14:01.670 "data_offset": 0, 00:14:01.670 "data_size": 65536 00:14:01.670 }, 00:14:01.670 { 00:14:01.670 "name": "BaseBdev2", 00:14:01.670 "uuid": "7786f029-bc6b-451e-aee5-fc4ff9e1e80a", 00:14:01.670 "is_configured": true, 00:14:01.670 "data_offset": 0, 00:14:01.670 "data_size": 65536 00:14:01.670 } 00:14:01.670 ] 00:14:01.670 }' 00:14:01.670 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.670 23:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.236 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:02.236 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:02.236 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:02.236 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:02.236 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:02.236 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:02.236 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:02.236 23:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:02.236 [2024-07-24 23:57:58.021196] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.236 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:02.236 "name": "Existed_Raid", 00:14:02.236 "aliases": [ 00:14:02.236 "ba1e49f7-f79d-4bcd-9bc1-61f63470dcbe" 00:14:02.236 ], 00:14:02.236 "product_name": "Raid Volume", 00:14:02.236 "block_size": 512, 00:14:02.236 "num_blocks": 131072, 00:14:02.236 "uuid": "ba1e49f7-f79d-4bcd-9bc1-61f63470dcbe", 00:14:02.236 "assigned_rate_limits": { 00:14:02.236 "rw_ios_per_sec": 0, 00:14:02.236 "rw_mbytes_per_sec": 0, 00:14:02.236 "r_mbytes_per_sec": 0, 00:14:02.236 "w_mbytes_per_sec": 0 00:14:02.236 }, 00:14:02.236 "claimed": false, 00:14:02.236 "zoned": false, 00:14:02.236 "supported_io_types": { 00:14:02.236 "read": true, 00:14:02.236 "write": true, 00:14:02.236 "unmap": true, 00:14:02.236 "flush": true, 00:14:02.236 "reset": true, 00:14:02.236 "nvme_admin": false, 00:14:02.236 "nvme_io": false, 00:14:02.236 "nvme_io_md": false, 00:14:02.236 "write_zeroes": true, 00:14:02.236 "zcopy": false, 00:14:02.236 "get_zone_info": false, 00:14:02.236 "zone_management": false, 00:14:02.236 "zone_append": false, 00:14:02.236 "compare": false, 00:14:02.236 "compare_and_write": false, 00:14:02.236 "abort": false, 00:14:02.236 "seek_hole": false, 00:14:02.236 "seek_data": false, 00:14:02.236 "copy": false, 00:14:02.236 "nvme_iov_md": false 00:14:02.236 }, 00:14:02.236 "memory_domains": [ 00:14:02.236 { 00:14:02.236 "dma_device_id": "system", 00:14:02.236 "dma_device_type": 1 00:14:02.236 }, 00:14:02.236 { 00:14:02.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.236 "dma_device_type": 2 00:14:02.236 }, 00:14:02.236 { 00:14:02.236 "dma_device_id": "system", 00:14:02.236 "dma_device_type": 1 00:14:02.236 }, 
00:14:02.236 {
00:14:02.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:02.236 "dma_device_type": 2
00:14:02.236 }
00:14:02.236 ],
00:14:02.236 "driver_specific": {
00:14:02.236 "raid": {
00:14:02.237 "uuid": "ba1e49f7-f79d-4bcd-9bc1-61f63470dcbe",
00:14:02.237 "strip_size_kb": 64,
00:14:02.237 "state": "online",
00:14:02.237 "raid_level": "raid0",
00:14:02.237 "superblock": false,
00:14:02.237 "num_base_bdevs": 2,
00:14:02.237 "num_base_bdevs_discovered": 2,
00:14:02.237 "num_base_bdevs_operational": 2,
00:14:02.237 "base_bdevs_list": [
00:14:02.237 {
00:14:02.237 "name": "BaseBdev1",
00:14:02.237 "uuid": "4a979c33-8e8e-420e-9083-0d996f021dc5",
00:14:02.237 "is_configured": true,
00:14:02.237 "data_offset": 0,
00:14:02.237 "data_size": 65536
00:14:02.237 },
00:14:02.237 {
00:14:02.237 "name": "BaseBdev2",
00:14:02.237 "uuid": "7786f029-bc6b-451e-aee5-fc4ff9e1e80a",
00:14:02.237 "is_configured": true,
00:14:02.237 "data_offset": 0,
00:14:02.237 "data_size": 65536
00:14:02.237 }
00:14:02.237 ]
00:14:02.237 }
00:14:02.237 }
00:14:02.237 }'
00:14:02.237 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:02.237 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1
00:14:02.237 BaseBdev2'
00:14:02.237 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:14:02.237 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1
00:14:02.237 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:14:02.495 "name": "BaseBdev1",
00:14:02.495 "aliases": [
00:14:02.495 "4a979c33-8e8e-420e-9083-0d996f021dc5"
00:14:02.495 ],
00:14:02.495 "product_name": "Malloc disk",
00:14:02.495 "block_size": 512,
00:14:02.495 "num_blocks": 65536,
00:14:02.495 "uuid": "4a979c33-8e8e-420e-9083-0d996f021dc5",
00:14:02.495 "assigned_rate_limits": {
00:14:02.495 "rw_ios_per_sec": 0,
00:14:02.495 "rw_mbytes_per_sec": 0,
00:14:02.495 "r_mbytes_per_sec": 0,
00:14:02.495 "w_mbytes_per_sec": 0
00:14:02.495 },
00:14:02.495 "claimed": true,
00:14:02.495 "claim_type": "exclusive_write",
00:14:02.495 "zoned": false,
00:14:02.495 "supported_io_types": {
00:14:02.495 "read": true,
00:14:02.495 "write": true,
00:14:02.495 "unmap": true,
00:14:02.495 "flush": true,
00:14:02.495 "reset": true,
00:14:02.495 "nvme_admin": false,
00:14:02.495 "nvme_io": false,
00:14:02.495 "nvme_io_md": false,
00:14:02.495 "write_zeroes": true,
00:14:02.495 "zcopy": true,
00:14:02.495 "get_zone_info": false,
00:14:02.495 "zone_management": false,
00:14:02.495 "zone_append": false,
00:14:02.495 "compare": false,
00:14:02.495 "compare_and_write": false,
00:14:02.495 "abort": true,
00:14:02.495 "seek_hole": false,
00:14:02.495 "seek_data": false,
00:14:02.495 "copy": true,
00:14:02.495 "nvme_iov_md": false
00:14:02.495 },
00:14:02.495 "memory_domains": [
00:14:02.495 {
00:14:02.495 "dma_device_id": "system",
00:14:02.495 "dma_device_type": 1
00:14:02.495 },
00:14:02.495 {
00:14:02.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:02.495 "dma_device_type": 2
00:14:02.495 }
00:14:02.495 ],
00:14:02.495 "driver_specific": {}
00:14:02.495 }'
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:14:02.495 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:14:02.754 "name": "BaseBdev2",
00:14:02.754 "aliases": [
00:14:02.754 "7786f029-bc6b-451e-aee5-fc4ff9e1e80a"
00:14:02.754 ],
00:14:02.754 "product_name": "Malloc disk",
00:14:02.754 "block_size": 512,
00:14:02.754 "num_blocks": 65536,
00:14:02.754 "uuid": "7786f029-bc6b-451e-aee5-fc4ff9e1e80a",
00:14:02.754 "assigned_rate_limits": {
00:14:02.754 "rw_ios_per_sec": 0,
00:14:02.754 "rw_mbytes_per_sec": 0,
00:14:02.754 "r_mbytes_per_sec": 0,
00:14:02.754 "w_mbytes_per_sec": 0
00:14:02.754 },
00:14:02.754 "claimed": true,
00:14:02.754 "claim_type": "exclusive_write",
00:14:02.754 "zoned": false,
00:14:02.754 "supported_io_types": {
00:14:02.754 "read": true,
00:14:02.754 "write": true,
00:14:02.754 "unmap": true,
00:14:02.754 "flush": true,
00:14:02.754 "reset": true,
00:14:02.754 "nvme_admin": false,
00:14:02.754 "nvme_io": false,
00:14:02.754 "nvme_io_md": false,
00:14:02.754 "write_zeroes": true,
00:14:02.754 "zcopy": true,
00:14:02.754 "get_zone_info": false,
00:14:02.754 "zone_management": false,
00:14:02.754 "zone_append": false,
00:14:02.754 "compare": false,
00:14:02.754 "compare_and_write": false,
00:14:02.754 "abort": true,
00:14:02.754 "seek_hole": false,
00:14:02.754 "seek_data": false,
00:14:02.754 "copy": true,
00:14:02.754 "nvme_iov_md": false
00:14:02.754 },
00:14:02.754 "memory_domains": [
00:14:02.754 {
00:14:02.754 "dma_device_id": "system",
00:14:02.754 "dma_device_type": 1
00:14:02.754 },
00:14:02.754 {
00:14:02.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:02.754 "dma_device_type": 2
00:14:02.754 }
00:14:02.754 ],
00:14:02.754 "driver_specific": {}
00:14:02.754 }'
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:02.754 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:03.013 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:14:03.013 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:03.013 [2024-07-24 23:57:58.869302] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:03.013 [2024-07-24 23:57:58.869578] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:03.013 [2024-07-24 23:57:58.869771] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:03.271 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state
00:14:03.271 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0
00:14:03.271 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:14:03.271 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1
00:14:03.271 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline
00:14:03.271 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:14:03.272 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:14:03.272 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline
00:14:03.272 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:03.272 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:03.272 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:14:03.272 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:03.272 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:03.272 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:03.272 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:03.272 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:03.272 23:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:03.530 23:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:03.530 "name": "Existed_Raid",
00:14:03.530 "uuid": "ba1e49f7-f79d-4bcd-9bc1-61f63470dcbe",
00:14:03.530 "strip_size_kb": 64,
00:14:03.530 "state": "offline",
00:14:03.530 "raid_level": "raid0",
00:14:03.530 "superblock": false,
00:14:03.530 "num_base_bdevs": 2,
00:14:03.530 "num_base_bdevs_discovered": 1,
00:14:03.530 "num_base_bdevs_operational": 1,
00:14:03.530 "base_bdevs_list": [
00:14:03.530 {
00:14:03.530 "name": null,
00:14:03.530 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:03.530 "is_configured": false,
00:14:03.530 "data_offset": 0,
00:14:03.530 "data_size": 65536
00:14:03.530 },
00:14:03.530 {
00:14:03.530 "name": "BaseBdev2",
00:14:03.530 "uuid": "7786f029-bc6b-451e-aee5-fc4ff9e1e80a",
00:14:03.530 "is_configured": true,
00:14:03.530 "data_offset": 0,
00:14:03.530 "data_size": 65536
00:14:03.530 }
00:14:03.530 ]
00:14:03.530 }'
00:14:03.530 23:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:03.530 23:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.788 23:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 ))
00:14:03.788 23:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:14:03.788 23:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:03.788 23:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:14:04.047 23:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:14:04.047 23:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:04.047 23:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:04.306 [2024-07-24 23:58:00.037349] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:04.306 [2024-07-24 23:58:00.037638] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline
00:14:04.306 23:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:14:04.306 23:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:14:04.306 23:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)'
00:14:04.306 23:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev=
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']'
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']'
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 75835
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75835 ']'
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75835
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75835
00:14:04.565 killing process with pid 75835
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75835'
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75835
00:14:04.565 [2024-07-24 23:58:00.374471] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:04.565 23:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75835
00:14:04.565 [2024-07-24 23:58:00.374585] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:05.941 ************************************
00:14:05.941 END TEST raid_state_function_test
00:14:05.941 ************************************
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0
00:14:05.941
00:14:05.941 real 0m9.546s
00:14:05.941 user 0m15.691s
00:14:05.941 sys 0m1.501s
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
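(Reference note, not part of the captured output: the pass that just finished drives a standalone bdev_svc purely over rpc.py, and every RPC below appears verbatim in the trace above. This is a minimal Bash sketch of replaying that flow by hand, in simplified happy-path order; the test itself deliberately calls bdev_raid_create before the base bdevs exist so it can observe the "configuring" state first. The checkout path and a sleep in place of the harness's waitforlisten polling are assumptions.)

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk          # assumed checkout path, taken from the trace
    SOCK=/var/tmp/spdk-raid.sock

    # Start the bare bdev service with raid debug logging, as the harness does.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    svc_pid=$!
    sleep 1   # the harness polls the socket via waitforlisten; a short sleep suffices manually

    rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }

    # Two 32 MB malloc bdevs with 512-byte blocks (65536 blocks each, matching the dumps above).
    rpc bdev_malloc_create 32 512 -b BaseBdev1
    rpc bdev_malloc_create 32 512 -b BaseBdev2

    # Assemble the raid0 volume with a 64 KB strip, no superblock.
    rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # The verify_raid_bdev_state checks boil down to this query plus jq filtering.
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # expect "online"

    # Tear down in the same order the test does.
    rpc bdev_raid_delete Existed_Raid
    kill "$svc_pid" && wait "$svc_pid"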
00:14:05.941 23:58:01 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true
00:14:05.941 23:58:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:14:05.941 23:58:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:05.941 23:58:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:05.941 ************************************
00:14:05.941 START TEST raid_state_function_test_sb
00:14:05.941 ************************************
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 ))
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:14:05.941 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']'
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64'
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']'
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=76176
00:14:05.942 Process raid pid: 76176
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 76176'
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 76176 /var/tmp/spdk-raid.sock
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 76176 ']'
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:05.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:05.942 23:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.942 [2024-07-24 23:58:01.550339] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
00:14:05.942 [2024-07-24 23:58:01.550516] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:05.942 [2024-07-24 23:58:01.709732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:06.201 [2024-07-24 23:58:01.897331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:06.201 [2024-07-24 23:58:02.069104] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:06.768 23:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:06.768 23:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:14:06.768 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:07.027 [2024-07-24 23:58:02.734613] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:07.027 [2024-07-24 23:58:02.734738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:07.027 [2024-07-24 23:58:02.734755] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:07.027 [2024-07-24 23:58:02.734772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:07.027 23:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:07.286 23:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:07.286 "name": "Existed_Raid",
00:14:07.286 "uuid": "119b8fea-37d9-4814-b34d-c7d2671b0324",
00:14:07.286 "strip_size_kb": 64,
00:14:07.286 "state": "configuring",
00:14:07.286 "raid_level": "raid0",
00:14:07.286 "superblock": true,
00:14:07.286 "num_base_bdevs": 2,
00:14:07.286 "num_base_bdevs_discovered": 0,
00:14:07.286 "num_base_bdevs_operational": 2,
00:14:07.286 "base_bdevs_list": [
00:14:07.286 {
00:14:07.286 "name": "BaseBdev1",
00:14:07.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:07.286 "is_configured": false,
00:14:07.286 "data_offset": 0,
00:14:07.286 "data_size": 0
00:14:07.286 },
00:14:07.286 {
00:14:07.286 "name": "BaseBdev2",
00:14:07.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:07.286 "is_configured": false,
00:14:07.286 "data_offset": 0,
00:14:07.286 "data_size": 0
00:14:07.286 }
00:14:07.286 ]
00:14:07.286 }'
00:14:07.286 23:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:07.286 23:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:07.545 23:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:07.833 [2024-07-24 23:58:03.526740] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:07.833 [2024-07-24 23:58:03.526810] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring
00:14:07.834 23:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:08.097 [2024-07-24 23:58:03.738859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:08.097 [2024-07-24 23:58:03.738923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:08.097 [2024-07-24 23:58:03.738939] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:08.097 [2024-07-24 23:58:03.738955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:08.097 23:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:08.356 [2024-07-24 23:58:03.985376] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:08.356 BaseBdev1
00:14:08.356 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:14:08.356 23:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:14:08.356 23:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:08.356 23:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:14:08.356 23:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:08.356 23:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:08.356 23:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:08.616 [
00:14:08.616 {
00:14:08.616 "name": "BaseBdev1",
00:14:08.616 "aliases": [
00:14:08.616 "47f0b951-35fa-4ce8-80cf-352e854126b7"
00:14:08.616 ],
00:14:08.616 "product_name": "Malloc disk",
00:14:08.616 "block_size": 512,
00:14:08.616 "num_blocks": 65536,
00:14:08.616 "uuid": "47f0b951-35fa-4ce8-80cf-352e854126b7",
00:14:08.616 "assigned_rate_limits": {
00:14:08.616 "rw_ios_per_sec": 0,
00:14:08.616 "rw_mbytes_per_sec": 0,
00:14:08.616 "r_mbytes_per_sec": 0,
00:14:08.616 "w_mbytes_per_sec": 0
00:14:08.616 },
00:14:08.616 "claimed": true,
00:14:08.616 "claim_type": "exclusive_write",
00:14:08.616 "zoned": false,
00:14:08.616 "supported_io_types": {
00:14:08.616 "read": true,
00:14:08.616 "write": true,
00:14:08.616 "unmap": true,
00:14:08.616 "flush": true,
00:14:08.616 "reset": true,
00:14:08.616 "nvme_admin": false,
00:14:08.616 "nvme_io": false,
00:14:08.616 "nvme_io_md": false,
00:14:08.616 "write_zeroes": true,
00:14:08.616 "zcopy": true,
00:14:08.616 "get_zone_info": false,
00:14:08.616 "zone_management": false,
00:14:08.616 "zone_append": false,
00:14:08.616 "compare": false,
00:14:08.616 "compare_and_write": false,
00:14:08.616 "abort": true,
00:14:08.616 "seek_hole": false,
00:14:08.616 "seek_data": false,
00:14:08.616 "copy": true,
00:14:08.616 "nvme_iov_md": false
00:14:08.616 },
00:14:08.616 "memory_domains": [
00:14:08.616 {
00:14:08.616 "dma_device_id": "system",
00:14:08.616 "dma_device_type": 1
00:14:08.616 },
00:14:08.616 {
00:14:08.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:08.616 "dma_device_type": 2
00:14:08.616 }
00:14:08.616 ],
00:14:08.616 "driver_specific": {}
00:14:08.616 }
00:14:08.616 ]
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:08.616 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:08.875 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:08.875 "name": "Existed_Raid",
00:14:08.875 "uuid": "77a7c741-2cac-433d-bf32-68f3cf32b222",
00:14:08.875 "strip_size_kb": 64,
00:14:08.875 "state": "configuring",
00:14:08.875 "raid_level": "raid0",
00:14:08.875 "superblock": true,
00:14:08.875 "num_base_bdevs": 2,
00:14:08.875 "num_base_bdevs_discovered": 1,
00:14:08.875 "num_base_bdevs_operational": 2,
00:14:08.875 "base_bdevs_list": [
00:14:08.875 {
00:14:08.875 "name": "BaseBdev1",
00:14:08.875 "uuid": "47f0b951-35fa-4ce8-80cf-352e854126b7",
00:14:08.875 "is_configured": true,
00:14:08.875 "data_offset": 2048,
00:14:08.875 "data_size": 63488
00:14:08.875 },
00:14:08.875 {
00:14:08.875 "name": "BaseBdev2",
00:14:08.875 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:08.875 "is_configured": false,
00:14:08.875 "data_offset": 0,
00:14:08.875 "data_size": 0
00:14:08.875 }
00:14:08.875 ]
00:14:08.875 }'
00:14:08.875 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:08.875 23:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:09.133 23:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:09.391 [2024-07-24 23:58:05.177737] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:09.391 [2024-07-24 23:58:05.177824] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring
00:14:09.391 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:09.650 [2024-07-24 23:58:05.445873] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:09.650 [2024-07-24 23:58:05.447987] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:09.650 [2024-07-24 23:58:05.448070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:09.650 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:09.917 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:09.917 "name": "Existed_Raid",
00:14:09.917 "uuid": "0bd0ea86-e311-48ff-99cc-2adce9aea52f",
00:14:09.917 "strip_size_kb": 64,
00:14:09.917 "state": "configuring",
00:14:09.917 "raid_level": "raid0",
00:14:09.917 "superblock": true,
00:14:09.917 "num_base_bdevs": 2,
00:14:09.917 "num_base_bdevs_discovered": 1,
00:14:09.917 "num_base_bdevs_operational": 2,
00:14:09.917 "base_bdevs_list": [
00:14:09.917 {
00:14:09.917 "name": "BaseBdev1",
00:14:09.917 "uuid": "47f0b951-35fa-4ce8-80cf-352e854126b7",
00:14:09.917 "is_configured": true,
00:14:09.917 "data_offset": 2048,
00:14:09.917 "data_size": 63488
00:14:09.917 },
00:14:09.917 {
00:14:09.917 "name": "BaseBdev2",
00:14:09.917 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:09.917 "is_configured": false,
00:14:09.917 "data_offset": 0,
00:14:09.917 "data_size": 0
00:14:09.917 }
00:14:09.917 ]
00:14:09.917 }'
00:14:09.917 23:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:09.917 23:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:10.180 23:58:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:14:10.439 [2024-07-24 23:58:06.303526] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:10.439 [2024-07-24 23:58:06.303950] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280
00:14:10.439 [2024-07-24 23:58:06.303972] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:10.439 [2024-07-24 23:58:06.304092] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860
00:14:10.439 [2024-07-24 23:58:06.304479] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280
00:14:10.439 [2024-07-24 23:58:06.304514] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280
00:14:10.439 [2024-07-24 23:58:06.304680] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:10.439 BaseBdev2
00:14:10.697 23:58:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:14:10.697 23:58:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:14:10.697 23:58:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:10.697 23:58:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:14:10.697 23:58:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:10.697 23:58:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:10.697 23:58:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:10.956 23:58:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:10.956 [
00:14:10.956 {
00:14:10.956 "name": "BaseBdev2",
00:14:10.956 "aliases": [
00:14:10.956 "2372d3c7-1e35-496c-bc77-afc9c1f1d47d"
00:14:10.956 ],
00:14:10.956 "product_name": "Malloc disk",
00:14:10.956 "block_size": 512,
00:14:10.956 "num_blocks": 65536,
00:14:10.956 "uuid": "2372d3c7-1e35-496c-bc77-afc9c1f1d47d",
00:14:10.956 "assigned_rate_limits": {
00:14:10.956 "rw_ios_per_sec": 0,
00:14:10.956 "rw_mbytes_per_sec": 0,
00:14:10.956 "r_mbytes_per_sec": 0,
00:14:10.956 "w_mbytes_per_sec": 0
00:14:10.956 },
00:14:10.956 "claimed": true,
00:14:10.956 "claim_type": "exclusive_write",
00:14:10.956 "zoned": false,
00:14:10.956 "supported_io_types": {
00:14:10.956 "read": true,
00:14:10.956 "write": true,
00:14:10.956 "unmap": true,
00:14:10.956 "flush": true,
00:14:10.956 "reset": true,
00:14:10.956 "nvme_admin": false,
00:14:10.956 "nvme_io": false,
00:14:10.956 "nvme_io_md": false,
00:14:10.956 "write_zeroes": true,
00:14:10.956 "zcopy": true,
00:14:10.956 "get_zone_info": false,
00:14:10.956 "zone_management": false,
00:14:10.956 "zone_append": false,
00:14:10.956 "compare": false,
00:14:10.956 "compare_and_write": false,
00:14:10.956 "abort": true,
00:14:10.956 "seek_hole": false,
00:14:10.956 "seek_data": false,
00:14:10.956 "copy": true,
00:14:10.956 "nvme_iov_md": false
00:14:10.956 },
00:14:10.956 "memory_domains": [
00:14:10.956 {
00:14:10.956 "dma_device_id": "system",
00:14:10.956 "dma_device_type": 1
00:14:10.956 },
00:14:10.956 {
00:14:10.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:10.956 "dma_device_type": 2
00:14:10.956 }
00:14:10.956 ],
00:14:10.956 "driver_specific": {}
00:14:10.956 }
00:14:10.956 ]
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:10.956 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:11.215 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:11.215 "name": "Existed_Raid",
00:14:11.215 "uuid": "0bd0ea86-e311-48ff-99cc-2adce9aea52f", 00:14:11.215 "strip_size_kb": 64, 00:14:11.215 "state": "online", 00:14:11.215 "raid_level": "raid0", 00:14:11.215 "superblock": true, 00:14:11.215 "num_base_bdevs": 2, 00:14:11.215 "num_base_bdevs_discovered": 2, 00:14:11.215 "num_base_bdevs_operational": 2, 00:14:11.215 "base_bdevs_list": [ 00:14:11.215 { 00:14:11.215 "name": "BaseBdev1", 00:14:11.215 "uuid": "47f0b951-35fa-4ce8-80cf-352e854126b7", 00:14:11.215 "is_configured": true, 00:14:11.215 "data_offset": 2048, 00:14:11.215 "data_size": 63488 00:14:11.215 }, 00:14:11.215 { 00:14:11.215 "name": "BaseBdev2", 00:14:11.215 "uuid": "2372d3c7-1e35-496c-bc77-afc9c1f1d47d", 00:14:11.215 "is_configured": true, 00:14:11.215 "data_offset": 2048, 00:14:11.215 "data_size": 63488 00:14:11.215 } 00:14:11.215 ] 00:14:11.215 }' 00:14:11.215 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:11.215 23:58:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.782 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.782 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:11.782 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:11.782 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:11.782 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:11.783 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:11.783 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:11.783 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:11.783 [2024-07-24 23:58:07.628341] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.041 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:12.041 "name": "Existed_Raid", 00:14:12.041 "aliases": [ 00:14:12.041 "0bd0ea86-e311-48ff-99cc-2adce9aea52f" 00:14:12.041 ], 00:14:12.041 "product_name": "Raid Volume", 00:14:12.041 "block_size": 512, 00:14:12.041 "num_blocks": 126976, 00:14:12.041 "uuid": "0bd0ea86-e311-48ff-99cc-2adce9aea52f", 00:14:12.041 "assigned_rate_limits": { 00:14:12.041 "rw_ios_per_sec": 0, 00:14:12.041 "rw_mbytes_per_sec": 0, 00:14:12.041 "r_mbytes_per_sec": 0, 00:14:12.041 "w_mbytes_per_sec": 0 00:14:12.041 }, 00:14:12.041 "claimed": false, 00:14:12.041 "zoned": false, 00:14:12.041 "supported_io_types": { 00:14:12.041 "read": true, 00:14:12.041 "write": true, 00:14:12.041 "unmap": true, 00:14:12.041 "flush": true, 00:14:12.041 "reset": true, 00:14:12.041 "nvme_admin": false, 00:14:12.041 "nvme_io": false, 00:14:12.041 "nvme_io_md": false, 00:14:12.041 "write_zeroes": true, 00:14:12.041 "zcopy": false, 00:14:12.041 "get_zone_info": false, 00:14:12.041 "zone_management": false, 00:14:12.041 "zone_append": false, 00:14:12.041 "compare": false, 00:14:12.041 "compare_and_write": false, 00:14:12.041 "abort": false, 00:14:12.041 "seek_hole": false, 00:14:12.041 "seek_data": false, 00:14:12.041 "copy": false, 00:14:12.041 "nvme_iov_md": false 00:14:12.041 }, 00:14:12.041 "memory_domains": [ 
00:14:12.041 { 00:14:12.041 "dma_device_id": "system", 00:14:12.041 "dma_device_type": 1 00:14:12.041 }, 00:14:12.041 { 00:14:12.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.041 "dma_device_type": 2 00:14:12.041 }, 00:14:12.041 { 00:14:12.041 "dma_device_id": "system", 00:14:12.041 "dma_device_type": 1 00:14:12.041 }, 00:14:12.041 { 00:14:12.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.041 "dma_device_type": 2 00:14:12.041 } 00:14:12.041 ], 00:14:12.041 "driver_specific": { 00:14:12.041 "raid": { 00:14:12.041 "uuid": "0bd0ea86-e311-48ff-99cc-2adce9aea52f", 00:14:12.041 "strip_size_kb": 64, 00:14:12.041 "state": "online", 00:14:12.041 "raid_level": "raid0", 00:14:12.041 "superblock": true, 00:14:12.041 "num_base_bdevs": 2, 00:14:12.042 "num_base_bdevs_discovered": 2, 00:14:12.042 "num_base_bdevs_operational": 2, 00:14:12.042 "base_bdevs_list": [ 00:14:12.042 { 00:14:12.042 "name": "BaseBdev1", 00:14:12.042 "uuid": "47f0b951-35fa-4ce8-80cf-352e854126b7", 00:14:12.042 "is_configured": true, 00:14:12.042 "data_offset": 2048, 00:14:12.042 "data_size": 63488 00:14:12.042 }, 00:14:12.042 { 00:14:12.042 "name": "BaseBdev2", 00:14:12.042 "uuid": "2372d3c7-1e35-496c-bc77-afc9c1f1d47d", 00:14:12.042 "is_configured": true, 00:14:12.042 "data_offset": 2048, 00:14:12.042 "data_size": 63488 00:14:12.042 } 00:14:12.042 ] 00:14:12.042 } 00:14:12.042 } 00:14:12.042 }' 00:14:12.042 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:12.042 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:12.042 BaseBdev2' 00:14:12.042 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:12.042 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:12.042 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:12.300 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:12.300 "name": "BaseBdev1", 00:14:12.300 "aliases": [ 00:14:12.300 "47f0b951-35fa-4ce8-80cf-352e854126b7" 00:14:12.300 ], 00:14:12.300 "product_name": "Malloc disk", 00:14:12.300 "block_size": 512, 00:14:12.300 "num_blocks": 65536, 00:14:12.300 "uuid": "47f0b951-35fa-4ce8-80cf-352e854126b7", 00:14:12.300 "assigned_rate_limits": { 00:14:12.300 "rw_ios_per_sec": 0, 00:14:12.300 "rw_mbytes_per_sec": 0, 00:14:12.300 "r_mbytes_per_sec": 0, 00:14:12.300 "w_mbytes_per_sec": 0 00:14:12.300 }, 00:14:12.300 "claimed": true, 00:14:12.300 "claim_type": "exclusive_write", 00:14:12.300 "zoned": false, 00:14:12.300 "supported_io_types": { 00:14:12.300 "read": true, 00:14:12.300 "write": true, 00:14:12.300 "unmap": true, 00:14:12.300 "flush": true, 00:14:12.300 "reset": true, 00:14:12.300 "nvme_admin": false, 00:14:12.300 "nvme_io": false, 00:14:12.300 "nvme_io_md": false, 00:14:12.300 "write_zeroes": true, 00:14:12.300 "zcopy": true, 00:14:12.300 "get_zone_info": false, 00:14:12.300 "zone_management": false, 00:14:12.300 "zone_append": false, 00:14:12.300 "compare": false, 00:14:12.300 "compare_and_write": false, 00:14:12.300 "abort": true, 00:14:12.300 "seek_hole": false, 00:14:12.300 "seek_data": false, 00:14:12.300 "copy": true, 00:14:12.300 "nvme_iov_md": false 00:14:12.300 }, 00:14:12.300 "memory_domains": [ 
00:14:12.300 { 00:14:12.300 "dma_device_id": "system", 00:14:12.300 "dma_device_type": 1 00:14:12.300 }, 00:14:12.300 { 00:14:12.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.300 "dma_device_type": 2 00:14:12.300 } 00:14:12.300 ], 00:14:12.300 "driver_specific": {} 00:14:12.300 }' 00:14:12.301 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:12.301 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:12.301 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:12.301 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.301 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.301 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:12.301 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.301 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.301 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:12.301 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.301 23:58:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.301 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:12.301 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:12.301 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:12.301 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:12.559 "name": "BaseBdev2", 00:14:12.559 "aliases": [ 00:14:12.559 "2372d3c7-1e35-496c-bc77-afc9c1f1d47d" 00:14:12.559 ], 00:14:12.559 "product_name": "Malloc disk", 00:14:12.559 "block_size": 512, 00:14:12.559 "num_blocks": 65536, 00:14:12.559 "uuid": "2372d3c7-1e35-496c-bc77-afc9c1f1d47d", 00:14:12.559 "assigned_rate_limits": { 00:14:12.559 "rw_ios_per_sec": 0, 00:14:12.559 "rw_mbytes_per_sec": 0, 00:14:12.559 "r_mbytes_per_sec": 0, 00:14:12.559 "w_mbytes_per_sec": 0 00:14:12.559 }, 00:14:12.559 "claimed": true, 00:14:12.559 "claim_type": "exclusive_write", 00:14:12.559 "zoned": false, 00:14:12.559 "supported_io_types": { 00:14:12.559 "read": true, 00:14:12.559 "write": true, 00:14:12.559 "unmap": true, 00:14:12.559 "flush": true, 00:14:12.559 "reset": true, 00:14:12.559 "nvme_admin": false, 00:14:12.559 "nvme_io": false, 00:14:12.559 "nvme_io_md": false, 00:14:12.559 "write_zeroes": true, 00:14:12.559 "zcopy": true, 00:14:12.559 "get_zone_info": false, 00:14:12.559 "zone_management": false, 00:14:12.559 "zone_append": false, 00:14:12.559 "compare": false, 00:14:12.559 "compare_and_write": false, 00:14:12.559 "abort": true, 00:14:12.559 "seek_hole": false, 00:14:12.559 "seek_data": false, 00:14:12.559 "copy": true, 00:14:12.559 "nvme_iov_md": false 00:14:12.559 }, 00:14:12.559 "memory_domains": [ 00:14:12.559 { 00:14:12.559 "dma_device_id": "system", 00:14:12.559 "dma_device_type": 1 00:14:12.559 }, 00:14:12.559 { 00:14:12.559 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:12.559 "dma_device_type": 2 00:14:12.559 } 00:14:12.559 ], 00:14:12.559 "driver_specific": {} 00:14:12.559 }' 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.559 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.560 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:12.560 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:12.817 [2024-07-24 23:58:08.540264] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.817 [2024-07-24 23:58:08.540320] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.817 [2024-07-24 23:58:08.540390] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.817 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:12.817 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:12.817 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:12.817 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:12.817 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:12.817 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:12.817 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.817 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:12.817 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:12.817 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:12.817 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:12.818 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.818 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.818 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:14:12.818 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.818 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.818 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.075 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.075 "name": "Existed_Raid", 00:14:13.075 "uuid": "0bd0ea86-e311-48ff-99cc-2adce9aea52f", 00:14:13.075 "strip_size_kb": 64, 00:14:13.075 "state": "offline", 00:14:13.075 "raid_level": "raid0", 00:14:13.075 "superblock": true, 00:14:13.075 "num_base_bdevs": 2, 00:14:13.075 "num_base_bdevs_discovered": 1, 00:14:13.075 "num_base_bdevs_operational": 1, 00:14:13.075 "base_bdevs_list": [ 00:14:13.075 { 00:14:13.075 "name": null, 00:14:13.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.075 "is_configured": false, 00:14:13.075 "data_offset": 2048, 00:14:13.075 "data_size": 63488 00:14:13.075 }, 00:14:13.075 { 00:14:13.075 "name": "BaseBdev2", 00:14:13.075 "uuid": "2372d3c7-1e35-496c-bc77-afc9c1f1d47d", 00:14:13.075 "is_configured": true, 00:14:13.075 "data_offset": 2048, 00:14:13.075 "data_size": 63488 00:14:13.075 } 00:14:13.075 ] 00:14:13.075 }' 00:14:13.075 23:58:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.075 23:58:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.332 23:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:13.332 23:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:13.332 23:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.332 23:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:13.590 23:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:13.590 23:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.590 23:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:13.849 [2024-07-24 23:58:09.696257] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:13.849 [2024-07-24 23:58:09.696337] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:14:14.107 23:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:14.107 23:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:14.107 23:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:14.107 23:58:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- 
# '[' -n '' ']' 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 76176 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 76176 ']' 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 76176 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76176 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.366 killing process with pid 76176 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76176' 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 76176 00:14:14.366 [2024-07-24 23:58:10.050871] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.366 23:58:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 76176 00:14:14.366 [2024-07-24 23:58:10.050990] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.333 23:58:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:15.333 00:14:15.333 real 0m9.617s 00:14:15.333 user 0m15.913s 00:14:15.333 sys 0m1.451s 00:14:15.333 23:58:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.333 23:58:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.333 ************************************ 00:14:15.333 END TEST raid_state_function_test_sb 00:14:15.333 ************************************ 00:14:15.333 23:58:11 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:15.333 23:58:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:15.333 23:58:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.333 23:58:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:15.333 ************************************ 00:14:15.333 START TEST raid_superblock_test 00:14:15.333 ************************************ 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=76515 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 76515 /var/tmp/spdk-raid.sock 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76515 ']' 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.333 23:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.592 [2024-07-24 23:58:11.231505] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
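[Editor's sketch, not captured output.] The raid_superblock_test run that starts here reduces to a short RPC sequence; the following minimal reconstruction uses only commands that appear verbatim in this log, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock and rpc.py sits at the path used throughout this run. The $rpc/$sock shorthands are ours.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Back each passthru bdev with a 32 MiB malloc bdev of 512-byte blocks
# (32 MiB / 512 B = 65536 blocks, matching num_blocks in the dumps below).
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc1
"$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc2
"$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# Assemble raid0 over pt1/pt2 with a 64 KiB strip; -s writes a superblock.
"$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s

# Inspect the array's state, as verify_raid_bdev_state does below.
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'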
00:14:15.592 [2024-07-24 23:58:11.231698] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76515 ] 00:14:15.592 [2024-07-24 23:58:11.404044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.850 [2024-07-24 23:58:11.579718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.109 [2024-07-24 23:58:11.745742] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.367 23:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.367 23:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:16.367 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:14:16.367 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:16.367 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:14:16.367 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:14:16.367 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:16.368 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:16.368 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:16.368 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:16.368 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:16.626 malloc1 00:14:16.626 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:16.885 [2024-07-24 23:58:12.606689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:16.885 [2024-07-24 23:58:12.606802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.885 [2024-07-24 23:58:12.606861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:14:16.885 [2024-07-24 23:58:12.606877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.885 [2024-07-24 23:58:12.609266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.885 [2024-07-24 23:58:12.609323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:16.885 pt1 00:14:16.885 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:16.885 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:16.885 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:14:16.885 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:14:16.885 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:16.885 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:14:16.885 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:16.885 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:16.885 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:17.144 malloc2 00:14:17.144 23:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:17.402 [2024-07-24 23:58:13.098533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:17.402 [2024-07-24 23:58:13.098672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.402 [2024-07-24 23:58:13.098709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:14:17.402 [2024-07-24 23:58:13.098739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.402 [2024-07-24 23:58:13.101354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.402 [2024-07-24 23:58:13.101412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:17.402 pt2 00:14:17.403 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:17.403 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:17.403 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:17.660 [2024-07-24 23:58:13.314726] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:17.660 [2024-07-24 23:58:13.316914] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:17.660 [2024-07-24 23:58:13.317167] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:14:17.660 [2024-07-24 23:58:13.317186] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:17.660 [2024-07-24 23:58:13.317368] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:14:17.660 [2024-07-24 23:58:13.317731] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:14:17.661 [2024-07-24 23:58:13.317767] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:14:17.661 [2024-07-24 23:58:13.318001] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.661 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.919 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:17.919 "name": "raid_bdev1", 00:14:17.919 "uuid": "5557d354-7601-46e5-aea7-c991d32ee72f", 00:14:17.919 "strip_size_kb": 64, 00:14:17.919 "state": "online", 00:14:17.919 "raid_level": "raid0", 00:14:17.919 "superblock": true, 00:14:17.919 "num_base_bdevs": 2, 00:14:17.919 "num_base_bdevs_discovered": 2, 00:14:17.919 "num_base_bdevs_operational": 2, 00:14:17.919 "base_bdevs_list": [ 00:14:17.919 { 00:14:17.919 "name": "pt1", 00:14:17.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:17.919 "is_configured": true, 00:14:17.919 "data_offset": 2048, 00:14:17.919 "data_size": 63488 00:14:17.919 }, 00:14:17.919 { 00:14:17.919 "name": "pt2", 00:14:17.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.919 "is_configured": true, 00:14:17.919 "data_offset": 2048, 00:14:17.919 "data_size": 63488 00:14:17.919 } 00:14:17.919 ] 00:14:17.919 }' 00:14:17.919 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:17.919 23:58:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.177 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:14:18.177 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:18.177 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:18.177 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:18.177 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:18.177 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:18.177 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:18.177 23:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:18.436 [2024-07-24 23:58:14.183173] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.436 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:18.436 "name": "raid_bdev1", 00:14:18.436 "aliases": [ 00:14:18.436 "5557d354-7601-46e5-aea7-c991d32ee72f" 00:14:18.436 ], 00:14:18.436 "product_name": "Raid Volume", 00:14:18.436 "block_size": 512, 00:14:18.436 "num_blocks": 126976, 00:14:18.436 "uuid": "5557d354-7601-46e5-aea7-c991d32ee72f", 00:14:18.436 "assigned_rate_limits": { 00:14:18.436 "rw_ios_per_sec": 0, 00:14:18.436 "rw_mbytes_per_sec": 0, 00:14:18.436 "r_mbytes_per_sec": 0, 00:14:18.436 "w_mbytes_per_sec": 0 00:14:18.436 }, 
00:14:18.436 "claimed": false, 00:14:18.436 "zoned": false, 00:14:18.436 "supported_io_types": { 00:14:18.436 "read": true, 00:14:18.436 "write": true, 00:14:18.436 "unmap": true, 00:14:18.436 "flush": true, 00:14:18.436 "reset": true, 00:14:18.436 "nvme_admin": false, 00:14:18.436 "nvme_io": false, 00:14:18.436 "nvme_io_md": false, 00:14:18.436 "write_zeroes": true, 00:14:18.436 "zcopy": false, 00:14:18.436 "get_zone_info": false, 00:14:18.436 "zone_management": false, 00:14:18.436 "zone_append": false, 00:14:18.436 "compare": false, 00:14:18.436 "compare_and_write": false, 00:14:18.436 "abort": false, 00:14:18.436 "seek_hole": false, 00:14:18.436 "seek_data": false, 00:14:18.436 "copy": false, 00:14:18.436 "nvme_iov_md": false 00:14:18.436 }, 00:14:18.436 "memory_domains": [ 00:14:18.436 { 00:14:18.436 "dma_device_id": "system", 00:14:18.436 "dma_device_type": 1 00:14:18.436 }, 00:14:18.436 { 00:14:18.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.436 "dma_device_type": 2 00:14:18.436 }, 00:14:18.436 { 00:14:18.436 "dma_device_id": "system", 00:14:18.436 "dma_device_type": 1 00:14:18.436 }, 00:14:18.436 { 00:14:18.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.436 "dma_device_type": 2 00:14:18.436 } 00:14:18.436 ], 00:14:18.436 "driver_specific": { 00:14:18.436 "raid": { 00:14:18.436 "uuid": "5557d354-7601-46e5-aea7-c991d32ee72f", 00:14:18.436 "strip_size_kb": 64, 00:14:18.436 "state": "online", 00:14:18.436 "raid_level": "raid0", 00:14:18.436 "superblock": true, 00:14:18.436 "num_base_bdevs": 2, 00:14:18.436 "num_base_bdevs_discovered": 2, 00:14:18.436 "num_base_bdevs_operational": 2, 00:14:18.436 "base_bdevs_list": [ 00:14:18.436 { 00:14:18.436 "name": "pt1", 00:14:18.436 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:18.436 "is_configured": true, 00:14:18.436 "data_offset": 2048, 00:14:18.436 "data_size": 63488 00:14:18.436 }, 00:14:18.436 { 00:14:18.436 "name": "pt2", 00:14:18.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.436 "is_configured": true, 00:14:18.436 "data_offset": 2048, 00:14:18.436 "data_size": 63488 00:14:18.436 } 00:14:18.436 ] 00:14:18.436 } 00:14:18.436 } 00:14:18.436 }' 00:14:18.436 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.436 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:18.436 pt2' 00:14:18.436 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:18.436 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:18.436 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:18.695 "name": "pt1", 00:14:18.695 "aliases": [ 00:14:18.695 "00000000-0000-0000-0000-000000000001" 00:14:18.695 ], 00:14:18.695 "product_name": "passthru", 00:14:18.695 "block_size": 512, 00:14:18.695 "num_blocks": 65536, 00:14:18.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:18.695 "assigned_rate_limits": { 00:14:18.695 "rw_ios_per_sec": 0, 00:14:18.695 "rw_mbytes_per_sec": 0, 00:14:18.695 "r_mbytes_per_sec": 0, 00:14:18.695 "w_mbytes_per_sec": 0 00:14:18.695 }, 00:14:18.695 "claimed": true, 00:14:18.695 "claim_type": "exclusive_write", 00:14:18.695 "zoned": false, 00:14:18.695 
"supported_io_types": { 00:14:18.695 "read": true, 00:14:18.695 "write": true, 00:14:18.695 "unmap": true, 00:14:18.695 "flush": true, 00:14:18.695 "reset": true, 00:14:18.695 "nvme_admin": false, 00:14:18.695 "nvme_io": false, 00:14:18.695 "nvme_io_md": false, 00:14:18.695 "write_zeroes": true, 00:14:18.695 "zcopy": true, 00:14:18.695 "get_zone_info": false, 00:14:18.695 "zone_management": false, 00:14:18.695 "zone_append": false, 00:14:18.695 "compare": false, 00:14:18.695 "compare_and_write": false, 00:14:18.695 "abort": true, 00:14:18.695 "seek_hole": false, 00:14:18.695 "seek_data": false, 00:14:18.695 "copy": true, 00:14:18.695 "nvme_iov_md": false 00:14:18.695 }, 00:14:18.695 "memory_domains": [ 00:14:18.695 { 00:14:18.695 "dma_device_id": "system", 00:14:18.695 "dma_device_type": 1 00:14:18.695 }, 00:14:18.695 { 00:14:18.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.695 "dma_device_type": 2 00:14:18.695 } 00:14:18.695 ], 00:14:18.695 "driver_specific": { 00:14:18.695 "passthru": { 00:14:18.695 "name": "pt1", 00:14:18.695 "base_bdev_name": "malloc1" 00:14:18.695 } 00:14:18.695 } 00:14:18.695 }' 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:18.695 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:18.954 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:18.954 "name": "pt2", 00:14:18.954 "aliases": [ 00:14:18.954 "00000000-0000-0000-0000-000000000002" 00:14:18.954 ], 00:14:18.954 "product_name": "passthru", 00:14:18.954 "block_size": 512, 00:14:18.954 "num_blocks": 65536, 00:14:18.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.954 "assigned_rate_limits": { 00:14:18.954 "rw_ios_per_sec": 0, 00:14:18.954 "rw_mbytes_per_sec": 0, 00:14:18.954 "r_mbytes_per_sec": 0, 00:14:18.954 "w_mbytes_per_sec": 0 00:14:18.954 }, 00:14:18.954 "claimed": true, 00:14:18.954 "claim_type": "exclusive_write", 00:14:18.954 "zoned": false, 00:14:18.954 "supported_io_types": { 00:14:18.954 "read": true, 00:14:18.954 "write": true, 00:14:18.954 "unmap": true, 00:14:18.954 "flush": true, 00:14:18.954 
"reset": true, 00:14:18.954 "nvme_admin": false, 00:14:18.954 "nvme_io": false, 00:14:18.954 "nvme_io_md": false, 00:14:18.954 "write_zeroes": true, 00:14:18.954 "zcopy": true, 00:14:18.954 "get_zone_info": false, 00:14:18.954 "zone_management": false, 00:14:18.954 "zone_append": false, 00:14:18.954 "compare": false, 00:14:18.954 "compare_and_write": false, 00:14:18.954 "abort": true, 00:14:18.954 "seek_hole": false, 00:14:18.954 "seek_data": false, 00:14:18.954 "copy": true, 00:14:18.954 "nvme_iov_md": false 00:14:18.954 }, 00:14:18.954 "memory_domains": [ 00:14:18.954 { 00:14:18.954 "dma_device_id": "system", 00:14:18.954 "dma_device_type": 1 00:14:18.954 }, 00:14:18.954 { 00:14:18.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.954 "dma_device_type": 2 00:14:18.954 } 00:14:18.954 ], 00:14:18.954 "driver_specific": { 00:14:18.954 "passthru": { 00:14:18.954 "name": "pt2", 00:14:18.954 "base_bdev_name": "malloc2" 00:14:18.954 } 00:14:18.954 } 00:14:18.954 }' 00:14:18.954 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.954 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.954 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:18.954 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.954 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.954 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:18.954 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.955 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:19.213 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:19.213 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.213 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.213 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:19.213 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:19.213 23:58:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:14:19.472 [2024-07-24 23:58:15.099610] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.472 23:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=5557d354-7601-46e5-aea7-c991d32ee72f 00:14:19.472 23:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 5557d354-7601-46e5-aea7-c991d32ee72f ']' 00:14:19.472 23:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:19.731 [2024-07-24 23:58:15.359292] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.731 [2024-07-24 23:58:15.359329] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.731 [2024-07-24 23:58:15.359428] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.731 [2024-07-24 23:58:15.359484] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:14:19.731 [2024-07-24 23:58:15.359505] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:14:19.731 23:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.731 23:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:14:19.989 23:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:14:19.989 23:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:14:19.989 23:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:19.989 23:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:20.247 23:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:20.247 23:58:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:20.506 23:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:20.506 23:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:20.765 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:20.765 [2024-07-24 23:58:16.603624] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:20.765 [2024-07-24 23:58:16.605875] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:20.765 [2024-07-24 23:58:16.606027] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:20.765 [2024-07-24 23:58:16.606101] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:20.765 [2024-07-24 23:58:16.606127] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:20.765 [2024-07-24 23:58:16.606146] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state configuring 00:14:20.765 request: 00:14:20.765 { 00:14:20.765 "name": "raid_bdev1", 00:14:20.765 "raid_level": "raid0", 00:14:20.765 "base_bdevs": [ 00:14:20.765 "malloc1", 00:14:20.765 "malloc2" 00:14:20.765 ], 00:14:20.765 "strip_size_kb": 64, 00:14:20.765 "superblock": false, 00:14:20.765 "method": "bdev_raid_create", 00:14:20.765 "req_id": 1 00:14:20.765 } 00:14:20.765 Got JSON-RPC error response 00:14:20.765 response: 00:14:20.765 { 00:14:20.765 "code": -17, 00:14:20.765 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:20.766 } 00:14:20.766 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:20.766 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:20.766 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:20.766 23:58:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:20.766 23:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.766 23:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:14:21.024 23:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:14:21.024 23:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:14:21.024 23:58:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:21.284 [2024-07-24 23:58:17.083641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:21.284 [2024-07-24 23:58:17.083756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.284 [2024-07-24 23:58:17.083786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:14:21.284 [2024-07-24 23:58:17.083803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.284 [2024-07-24 23:58:17.086241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.284 [2024-07-24 23:58:17.086302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:21.284 [2024-07-24 23:58:17.086451] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:21.284 [2024-07-24 23:58:17.086524] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:21.284 pt1 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.284 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.543 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.543 "name": "raid_bdev1", 00:14:21.543 "uuid": "5557d354-7601-46e5-aea7-c991d32ee72f", 00:14:21.543 "strip_size_kb": 64, 00:14:21.543 "state": "configuring", 00:14:21.543 "raid_level": "raid0", 00:14:21.543 "superblock": true, 00:14:21.543 "num_base_bdevs": 2, 00:14:21.543 "num_base_bdevs_discovered": 1, 00:14:21.543 "num_base_bdevs_operational": 2, 00:14:21.543 "base_bdevs_list": [ 00:14:21.543 { 00:14:21.543 "name": "pt1", 00:14:21.543 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:21.543 "is_configured": true, 00:14:21.543 "data_offset": 2048, 00:14:21.543 "data_size": 63488 00:14:21.543 }, 00:14:21.543 { 00:14:21.543 "name": null, 00:14:21.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.543 "is_configured": false, 00:14:21.543 "data_offset": 2048, 00:14:21.543 "data_size": 63488 00:14:21.543 } 00:14:21.543 ] 00:14:21.543 }' 00:14:21.543 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.543 23:58:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.802 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:14:21.802 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:14:21.802 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:21.802 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:22.075 [2024-07-24 23:58:17.835905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:22.075 [2024-07-24 23:58:17.835997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.075 [2024-07-24 23:58:17.836025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:14:22.075 [2024-07-24 23:58:17.836054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.075 [2024-07-24 
23:58:17.836596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.076 [2024-07-24 23:58:17.836668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:22.076 [2024-07-24 23:58:17.836772] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:22.076 [2024-07-24 23:58:17.836805] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.076 [2024-07-24 23:58:17.837011] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:14:22.076 [2024-07-24 23:58:17.837034] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:22.076 [2024-07-24 23:58:17.837153] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:14:22.076 [2024-07-24 23:58:17.837553] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:14:22.076 [2024-07-24 23:58:17.837597] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:14:22.076 [2024-07-24 23:58:17.837815] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.076 pt2 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.076 23:58:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.344 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.344 "name": "raid_bdev1", 00:14:22.344 "uuid": "5557d354-7601-46e5-aea7-c991d32ee72f", 00:14:22.344 "strip_size_kb": 64, 00:14:22.344 "state": "online", 00:14:22.344 "raid_level": "raid0", 00:14:22.344 "superblock": true, 00:14:22.344 "num_base_bdevs": 2, 00:14:22.344 "num_base_bdevs_discovered": 2, 00:14:22.344 "num_base_bdevs_operational": 2, 00:14:22.344 "base_bdevs_list": [ 00:14:22.344 { 00:14:22.344 "name": "pt1", 00:14:22.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.344 "is_configured": true, 00:14:22.344 "data_offset": 2048, 00:14:22.344 
"data_size": 63488 00:14:22.344 }, 00:14:22.344 { 00:14:22.344 "name": "pt2", 00:14:22.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.344 "is_configured": true, 00:14:22.344 "data_offset": 2048, 00:14:22.344 "data_size": 63488 00:14:22.344 } 00:14:22.344 ] 00:14:22.344 }' 00:14:22.344 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.344 23:58:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.603 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:14:22.603 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:22.603 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:22.603 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:22.603 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:22.603 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:22.603 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:22.603 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:22.862 [2024-07-24 23:58:18.620416] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.862 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:22.862 "name": "raid_bdev1", 00:14:22.862 "aliases": [ 00:14:22.862 "5557d354-7601-46e5-aea7-c991d32ee72f" 00:14:22.862 ], 00:14:22.862 "product_name": "Raid Volume", 00:14:22.862 "block_size": 512, 00:14:22.862 "num_blocks": 126976, 00:14:22.862 "uuid": "5557d354-7601-46e5-aea7-c991d32ee72f", 00:14:22.862 "assigned_rate_limits": { 00:14:22.862 "rw_ios_per_sec": 0, 00:14:22.862 "rw_mbytes_per_sec": 0, 00:14:22.862 "r_mbytes_per_sec": 0, 00:14:22.862 "w_mbytes_per_sec": 0 00:14:22.862 }, 00:14:22.862 "claimed": false, 00:14:22.862 "zoned": false, 00:14:22.862 "supported_io_types": { 00:14:22.862 "read": true, 00:14:22.862 "write": true, 00:14:22.862 "unmap": true, 00:14:22.862 "flush": true, 00:14:22.862 "reset": true, 00:14:22.862 "nvme_admin": false, 00:14:22.862 "nvme_io": false, 00:14:22.862 "nvme_io_md": false, 00:14:22.862 "write_zeroes": true, 00:14:22.862 "zcopy": false, 00:14:22.862 "get_zone_info": false, 00:14:22.862 "zone_management": false, 00:14:22.862 "zone_append": false, 00:14:22.862 "compare": false, 00:14:22.862 "compare_and_write": false, 00:14:22.862 "abort": false, 00:14:22.862 "seek_hole": false, 00:14:22.862 "seek_data": false, 00:14:22.862 "copy": false, 00:14:22.862 "nvme_iov_md": false 00:14:22.862 }, 00:14:22.862 "memory_domains": [ 00:14:22.862 { 00:14:22.862 "dma_device_id": "system", 00:14:22.862 "dma_device_type": 1 00:14:22.862 }, 00:14:22.862 { 00:14:22.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.862 "dma_device_type": 2 00:14:22.862 }, 00:14:22.862 { 00:14:22.862 "dma_device_id": "system", 00:14:22.862 "dma_device_type": 1 00:14:22.862 }, 00:14:22.862 { 00:14:22.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.862 "dma_device_type": 2 00:14:22.862 } 00:14:22.862 ], 00:14:22.862 "driver_specific": { 00:14:22.862 "raid": { 00:14:22.862 "uuid": "5557d354-7601-46e5-aea7-c991d32ee72f", 00:14:22.862 "strip_size_kb": 64, 00:14:22.862 "state": 
"online", 00:14:22.862 "raid_level": "raid0", 00:14:22.862 "superblock": true, 00:14:22.862 "num_base_bdevs": 2, 00:14:22.862 "num_base_bdevs_discovered": 2, 00:14:22.862 "num_base_bdevs_operational": 2, 00:14:22.862 "base_bdevs_list": [ 00:14:22.862 { 00:14:22.862 "name": "pt1", 00:14:22.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.862 "is_configured": true, 00:14:22.862 "data_offset": 2048, 00:14:22.862 "data_size": 63488 00:14:22.862 }, 00:14:22.862 { 00:14:22.862 "name": "pt2", 00:14:22.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.862 "is_configured": true, 00:14:22.862 "data_offset": 2048, 00:14:22.862 "data_size": 63488 00:14:22.862 } 00:14:22.862 ] 00:14:22.862 } 00:14:22.862 } 00:14:22.862 }' 00:14:22.862 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:22.862 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:22.863 pt2' 00:14:22.863 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:22.863 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:22.863 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.122 "name": "pt1", 00:14:23.122 "aliases": [ 00:14:23.122 "00000000-0000-0000-0000-000000000001" 00:14:23.122 ], 00:14:23.122 "product_name": "passthru", 00:14:23.122 "block_size": 512, 00:14:23.122 "num_blocks": 65536, 00:14:23.122 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.122 "assigned_rate_limits": { 00:14:23.122 "rw_ios_per_sec": 0, 00:14:23.122 "rw_mbytes_per_sec": 0, 00:14:23.122 "r_mbytes_per_sec": 0, 00:14:23.122 "w_mbytes_per_sec": 0 00:14:23.122 }, 00:14:23.122 "claimed": true, 00:14:23.122 "claim_type": "exclusive_write", 00:14:23.122 "zoned": false, 00:14:23.122 "supported_io_types": { 00:14:23.122 "read": true, 00:14:23.122 "write": true, 00:14:23.122 "unmap": true, 00:14:23.122 "flush": true, 00:14:23.122 "reset": true, 00:14:23.122 "nvme_admin": false, 00:14:23.122 "nvme_io": false, 00:14:23.122 "nvme_io_md": false, 00:14:23.122 "write_zeroes": true, 00:14:23.122 "zcopy": true, 00:14:23.122 "get_zone_info": false, 00:14:23.122 "zone_management": false, 00:14:23.122 "zone_append": false, 00:14:23.122 "compare": false, 00:14:23.122 "compare_and_write": false, 00:14:23.122 "abort": true, 00:14:23.122 "seek_hole": false, 00:14:23.122 "seek_data": false, 00:14:23.122 "copy": true, 00:14:23.122 "nvme_iov_md": false 00:14:23.122 }, 00:14:23.122 "memory_domains": [ 00:14:23.122 { 00:14:23.122 "dma_device_id": "system", 00:14:23.122 "dma_device_type": 1 00:14:23.122 }, 00:14:23.122 { 00:14:23.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.122 "dma_device_type": 2 00:14:23.122 } 00:14:23.122 ], 00:14:23.122 "driver_specific": { 00:14:23.122 "passthru": { 00:14:23.122 "name": "pt1", 00:14:23.122 "base_bdev_name": "malloc1" 00:14:23.122 } 00:14:23.122 } 00:14:23.122 }' 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:23.122 23:58:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.381 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.381 "name": "pt2", 00:14:23.381 "aliases": [ 00:14:23.381 "00000000-0000-0000-0000-000000000002" 00:14:23.381 ], 00:14:23.381 "product_name": "passthru", 00:14:23.381 "block_size": 512, 00:14:23.381 "num_blocks": 65536, 00:14:23.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.381 "assigned_rate_limits": { 00:14:23.381 "rw_ios_per_sec": 0, 00:14:23.381 "rw_mbytes_per_sec": 0, 00:14:23.381 "r_mbytes_per_sec": 0, 00:14:23.381 "w_mbytes_per_sec": 0 00:14:23.381 }, 00:14:23.381 "claimed": true, 00:14:23.381 "claim_type": "exclusive_write", 00:14:23.381 "zoned": false, 00:14:23.381 "supported_io_types": { 00:14:23.381 "read": true, 00:14:23.381 "write": true, 00:14:23.381 "unmap": true, 00:14:23.381 "flush": true, 00:14:23.381 "reset": true, 00:14:23.381 "nvme_admin": false, 00:14:23.381 "nvme_io": false, 00:14:23.381 "nvme_io_md": false, 00:14:23.381 "write_zeroes": true, 00:14:23.381 "zcopy": true, 00:14:23.382 "get_zone_info": false, 00:14:23.382 "zone_management": false, 00:14:23.382 "zone_append": false, 00:14:23.382 "compare": false, 00:14:23.382 "compare_and_write": false, 00:14:23.382 "abort": true, 00:14:23.382 "seek_hole": false, 00:14:23.382 "seek_data": false, 00:14:23.382 "copy": true, 00:14:23.382 "nvme_iov_md": false 00:14:23.382 }, 00:14:23.382 "memory_domains": [ 00:14:23.382 { 00:14:23.382 "dma_device_id": "system", 00:14:23.382 "dma_device_type": 1 00:14:23.382 }, 00:14:23.382 { 00:14:23.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.382 "dma_device_type": 2 00:14:23.382 } 00:14:23.382 ], 00:14:23.382 "driver_specific": { 00:14:23.382 "passthru": { 00:14:23.382 "name": "pt2", 00:14:23.382 "base_bdev_name": "malloc2" 00:14:23.382 } 00:14:23.382 } 00:14:23.382 }' 00:14:23.382 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.382 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.382 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.382 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.382 23:58:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.382 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:23.382 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.382 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.382 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.382 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.382 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.641 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.641 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:23.641 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:14:23.641 [2024-07-24 23:58:19.508721] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 5557d354-7601-46e5-aea7-c991d32ee72f '!=' 5557d354-7601-46e5-aea7-c991d32ee72f ']' 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 76515 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76515 ']' 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76515 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76515 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76515' 00:14:23.900 killing process with pid 76515 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76515 00:14:23.900 [2024-07-24 23:58:19.574036] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.900 23:58:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76515 00:14:23.900 [2024-07-24 23:58:19.574146] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.900 [2024-07-24 23:58:19.574204] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.900 [2024-07-24 23:58:19.574225] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:14:23.900 [2024-07-24 23:58:19.719102] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.283 23:58:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@580 -- # return 0 00:14:25.283 00:14:25.283 real 0m9.620s 00:14:25.283 user 0m15.890s 00:14:25.283 sys 0m1.472s 00:14:25.283 23:58:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.283 23:58:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.283 ************************************ 00:14:25.283 END TEST raid_superblock_test 00:14:25.283 ************************************ 00:14:25.283 23:58:20 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:14:25.283 23:58:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:25.283 23:58:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:25.283 23:58:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.283 ************************************ 00:14:25.283 START TEST raid_read_error_test 00:14:25.283 ************************************ 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.tzz5IWLh0L 00:14:25.283 23:58:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=76843 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 76843 /var/tmp/spdk-raid.sock 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76843 ']' 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:25.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.283 23:58:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.283 [2024-07-24 23:58:20.919461] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:14:25.283 [2024-07-24 23:58:20.919708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76843 ] 00:14:25.283 [2024-07-24 23:58:21.093847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.541 [2024-07-24 23:58:21.274591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.800 [2024-07-24 23:58:21.442334] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.058 23:58:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.058 23:58:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:26.058 23:58:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:26.058 23:58:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:26.316 BaseBdev1_malloc 00:14:26.316 23:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:26.575 true 00:14:26.575 23:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:26.832 [2024-07-24 23:58:22.589875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:26.832 [2024-07-24 23:58:22.589982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.833 [2024-07-24 23:58:22.590016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:14:26.833 [2024-07-24 23:58:22.590034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.833 [2024-07-24 23:58:22.592515] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.833 [2024-07-24 23:58:22.592595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:26.833 BaseBdev1 00:14:26.833 23:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:26.833 23:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:27.091 BaseBdev2_malloc 00:14:27.091 23:58:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:27.350 true 00:14:27.350 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:27.609 [2024-07-24 23:58:23.272325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:27.609 [2024-07-24 23:58:23.272473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.609 [2024-07-24 23:58:23.272507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:14:27.609 [2024-07-24 23:58:23.272534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.609 [2024-07-24 23:58:23.275300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.609 [2024-07-24 23:58:23.275397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:27.609 BaseBdev2 00:14:27.609 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:27.867 [2024-07-24 23:58:23.480480] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.867 [2024-07-24 23:58:23.482686] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.867 [2024-07-24 23:58:23.483064] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008480 00:14:27.867 [2024-07-24 23:58:23.483104] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:27.867 [2024-07-24 23:58:23.483234] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:14:27.867 [2024-07-24 23:58:23.483615] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008480 00:14:27.867 [2024-07-24 23:58:23.483656] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008480 00:14:27.867 [2024-07-24 23:58:23.483879] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.867 "name": "raid_bdev1", 00:14:27.867 "uuid": "5019fe9c-0494-45f1-ad11-d5dd55e86e76", 00:14:27.867 "strip_size_kb": 64, 00:14:27.867 "state": "online", 00:14:27.867 "raid_level": "raid0", 00:14:27.867 "superblock": true, 00:14:27.867 "num_base_bdevs": 2, 00:14:27.867 "num_base_bdevs_discovered": 2, 00:14:27.867 "num_base_bdevs_operational": 2, 00:14:27.867 "base_bdevs_list": [ 00:14:27.867 { 00:14:27.867 "name": "BaseBdev1", 00:14:27.867 "uuid": "569775f8-3f5b-50fa-9b64-6fa48824f925", 00:14:27.867 "is_configured": true, 00:14:27.867 "data_offset": 2048, 00:14:27.867 "data_size": 63488 00:14:27.867 }, 00:14:27.867 { 00:14:27.867 "name": "BaseBdev2", 00:14:27.867 "uuid": "0f24ad23-0bbb-51a1-b471-b37cb2916dbf", 00:14:27.867 "is_configured": true, 00:14:27.867 "data_offset": 2048, 00:14:27.867 "data_size": 63488 00:14:27.867 } 00:14:27.867 ] 00:14:27.867 }' 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.867 23:58:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.434 23:58:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:28.434 23:58:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:28.434 [2024-07-24 23:58:24.197715] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:14:29.369 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.628 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.887 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:29.887 "name": "raid_bdev1", 00:14:29.887 "uuid": "5019fe9c-0494-45f1-ad11-d5dd55e86e76", 00:14:29.887 "strip_size_kb": 64, 00:14:29.887 "state": "online", 00:14:29.887 "raid_level": "raid0", 00:14:29.887 "superblock": true, 00:14:29.887 "num_base_bdevs": 2, 00:14:29.887 "num_base_bdevs_discovered": 2, 00:14:29.887 "num_base_bdevs_operational": 2, 00:14:29.887 "base_bdevs_list": [ 00:14:29.887 { 00:14:29.887 "name": "BaseBdev1", 00:14:29.887 "uuid": "569775f8-3f5b-50fa-9b64-6fa48824f925", 00:14:29.887 "is_configured": true, 00:14:29.887 "data_offset": 2048, 00:14:29.887 "data_size": 63488 00:14:29.887 }, 00:14:29.887 { 00:14:29.887 "name": "BaseBdev2", 00:14:29.887 "uuid": "0f24ad23-0bbb-51a1-b471-b37cb2916dbf", 00:14:29.887 "is_configured": true, 00:14:29.887 "data_offset": 2048, 00:14:29.887 "data_size": 63488 00:14:29.887 } 00:14:29.887 ] 00:14:29.887 }' 00:14:29.887 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:29.887 23:58:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.145 23:58:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:30.404 [2024-07-24 23:58:26.120380] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.404 [2024-07-24 23:58:26.120446] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.404 [2024-07-24 23:58:26.123698] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.404 [2024-07-24 23:58:26.123787] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.404 [2024-07-24 23:58:26.123860] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.404 [2024-07-24 23:58:26.123881] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state offline 00:14:30.404 0 00:14:30.404 23:58:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 76843 00:14:30.404 23:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76843 ']' 00:14:30.404 23:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76843 00:14:30.404 23:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:14:30.404 23:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.404 
23:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76843 00:14:30.404 23:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:30.404 killing process with pid 76843 00:14:30.404 23:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:30.404 23:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76843' 00:14:30.404 23:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76843 00:14:30.404 23:58:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76843 00:14:30.404 [2024-07-24 23:58:26.178698] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.664 [2024-07-24 23:58:26.279880] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.599 23:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:31.599 23:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.tzz5IWLh0L 00:14:31.599 23:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:31.599 23:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.52 00:14:31.600 23:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:14:31.600 23:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:31.600 23:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:31.600 23:58:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.52 != \0\.\0\0 ]] 00:14:31.600 00:14:31.600 real 0m6.544s 00:14:31.600 user 0m9.471s 00:14:31.600 sys 0m0.829s 00:14:31.600 23:58:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.600 23:58:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.600 ************************************ 00:14:31.600 END TEST raid_read_error_test 00:14:31.600 ************************************ 00:14:31.600 23:58:27 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:14:31.600 23:58:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:31.600 23:58:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.600 23:58:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.600 ************************************ 00:14:31.600 START TEST raid_write_error_test 00:14:31.600 ************************************ 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 
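That closes raid_read_error_test: the pattern was to inject a read failure on the ErrorInjection wrapper of BaseBdev1, drive I/O through bdevperf, then scrape the failures-per-second column out of the bdevperf log. Condensed into a standalone sketch from the commands traced above (the read-test log path is reused; ordering is simplified):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
  fail_per_s=$(grep raid_bdev1 /raidtest/tmp.tzz5IWLh0L | grep -v Job | awk '{print $6}')
  [[ $fail_per_s != \0\.\0\0 ]]   # raid0 has no redundancy, so injected failures must surface

raid_write_error_test, now setting up, repeats the same flow with a write-failure injection.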
00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.hamg2zKZRY 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=77020 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 77020 /var/tmp/spdk-raid.sock 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 77020 ']' 00:14:31.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.600 23:58:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.860 [2024-07-24 23:58:27.507663] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
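bdevperf has just been launched in -z (wait-for-RPC) mode; the raid0 bdev it will exercise is assembled next over RPC. The harness sequence traced above amounts to roughly the following; the command line is copied from the trace, while the redirection into the mktemp'd log and the $! capture are assumed glue:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
  raid_pid=$!
  # after waitforlisten and the error injection, I/O is started with:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests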
00:14:31.860 [2024-07-24 23:58:27.508099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77020 ] 00:14:31.860 [2024-07-24 23:58:27.668331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.120 [2024-07-24 23:58:27.837376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.379 [2024-07-24 23:58:28.001592] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.638 23:58:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:32.638 23:58:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:32.638 23:58:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:32.638 23:58:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:33.205 BaseBdev1_malloc 00:14:33.205 23:58:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:33.205 true 00:14:33.206 23:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:33.464 [2024-07-24 23:58:29.221214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:33.464 [2024-07-24 23:58:29.221300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.464 [2024-07-24 23:58:29.221330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:14:33.464 [2024-07-24 23:58:29.221362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.464 [2024-07-24 23:58:29.223826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.464 [2024-07-24 23:58:29.223883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:33.464 BaseBdev1 00:14:33.464 23:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:33.464 23:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:33.723 BaseBdev2_malloc 00:14:33.723 23:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:33.982 true 00:14:33.982 23:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:34.241 [2024-07-24 23:58:29.919287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:34.241 [2024-07-24 23:58:29.919569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.241 [2024-07-24 23:58:29.919612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:14:34.241 [2024-07-24 
23:58:29.919635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.241 [2024-07-24 23:58:29.922090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.241 [2024-07-24 23:58:29.922169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.241 BaseBdev2 00:14:34.241 23:58:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:34.500 [2024-07-24 23:58:30.127405] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.500 [2024-07-24 23:58:30.130081] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.500 [2024-07-24 23:58:30.130391] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008480 00:14:34.500 [2024-07-24 23:58:30.130413] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:34.500 [2024-07-24 23:58:30.130538] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:14:34.500 [2024-07-24 23:58:30.131161] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008480 00:14:34.500 [2024-07-24 23:58:30.131380] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008480 00:14:34.500 [2024-07-24 23:58:30.131768] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:34.500 "name": "raid_bdev1", 00:14:34.500 "uuid": "155706ea-3c5e-4d2a-bf3d-ea3d4745145d", 00:14:34.500 "strip_size_kb": 64, 00:14:34.500 "state": "online", 00:14:34.500 "raid_level": "raid0", 00:14:34.500 "superblock": true, 00:14:34.500 "num_base_bdevs": 2, 00:14:34.500 "num_base_bdevs_discovered": 2, 00:14:34.500 "num_base_bdevs_operational": 2, 00:14:34.500 "base_bdevs_list": [ 00:14:34.500 { 00:14:34.500 
"name": "BaseBdev1", 00:14:34.500 "uuid": "f28370f1-ee6d-5ae1-b64a-2b30100eb034", 00:14:34.500 "is_configured": true, 00:14:34.500 "data_offset": 2048, 00:14:34.500 "data_size": 63488 00:14:34.500 }, 00:14:34.500 { 00:14:34.500 "name": "BaseBdev2", 00:14:34.500 "uuid": "8b026f75-42eb-5edf-a4da-9016fcc5dd08", 00:14:34.500 "is_configured": true, 00:14:34.500 "data_offset": 2048, 00:14:34.500 "data_size": 63488 00:14:34.500 } 00:14:34.500 ] 00:14:34.500 }' 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:34.500 23:58:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.067 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:35.067 23:58:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:35.067 [2024-07-24 23:58:30.785171] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:14:36.004 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.263 23:58:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.521 23:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.521 "name": "raid_bdev1", 00:14:36.521 "uuid": "155706ea-3c5e-4d2a-bf3d-ea3d4745145d", 00:14:36.521 "strip_size_kb": 64, 00:14:36.521 "state": "online", 00:14:36.521 "raid_level": "raid0", 00:14:36.521 "superblock": true, 00:14:36.521 "num_base_bdevs": 2, 00:14:36.521 "num_base_bdevs_discovered": 2, 00:14:36.521 "num_base_bdevs_operational": 2, 00:14:36.521 "base_bdevs_list": [ 00:14:36.521 { 00:14:36.521 
"name": "BaseBdev1", 00:14:36.521 "uuid": "f28370f1-ee6d-5ae1-b64a-2b30100eb034", 00:14:36.521 "is_configured": true, 00:14:36.521 "data_offset": 2048, 00:14:36.521 "data_size": 63488 00:14:36.521 }, 00:14:36.521 { 00:14:36.521 "name": "BaseBdev2", 00:14:36.522 "uuid": "8b026f75-42eb-5edf-a4da-9016fcc5dd08", 00:14:36.522 "is_configured": true, 00:14:36.522 "data_offset": 2048, 00:14:36.522 "data_size": 63488 00:14:36.522 } 00:14:36.522 ] 00:14:36.522 }' 00:14:36.522 23:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.522 23:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.780 23:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:37.056 [2024-07-24 23:58:32.795140] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.056 [2024-07-24 23:58:32.795196] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.056 [2024-07-24 23:58:32.798163] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.056 [2024-07-24 23:58:32.798387] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.056 [2024-07-24 23:58:32.798442] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.056 [2024-07-24 23:58:32.798463] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state offline 00:14:37.056 0 00:14:37.056 23:58:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 77020 00:14:37.056 23:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 77020 ']' 00:14:37.056 23:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 77020 00:14:37.056 23:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:37.056 23:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.057 23:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77020 00:14:37.057 killing process with pid 77020 00:14:37.057 23:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:37.057 23:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:37.057 23:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77020' 00:14:37.057 23:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 77020 00:14:37.057 [2024-07-24 23:58:32.845363] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.057 23:58:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 77020 00:14:37.325 [2024-07-24 23:58:32.944687] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.261 23:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.hamg2zKZRY 00:14:38.261 23:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:38.261 23:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:38.261 ************************************ 00:14:38.261 END TEST raid_write_error_test 00:14:38.261 
************************************ 00:14:38.261 23:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.50 00:14:38.261 23:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:14:38.261 23:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:38.261 23:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:38.261 23:58:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.50 != \0\.\0\0 ]] 00:14:38.261 00:14:38.261 real 0m6.614s 00:14:38.261 user 0m9.653s 00:14:38.261 sys 0m0.784s 00:14:38.261 23:58:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:38.261 23:58:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.261 23:58:34 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:14:38.261 23:58:34 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:38.261 23:58:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:38.261 23:58:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:38.261 23:58:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.261 ************************************ 00:14:38.261 START TEST raid_state_function_test 00:14:38.261 ************************************ 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 
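raid_state_function_test has just assembled its create arguments: level concat, strip_size_create_arg='-z 64', and an empty superblock_create_arg because superblock=false. Put together, the create call this test drives looks like the following sketch (flags and names taken from the trace that follows):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid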
00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=77189 00:14:38.261 Process raid pid: 77189 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 77189' 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 77189 /var/tmp/spdk-raid.sock 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 77189 ']' 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.261 23:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.518 [2024-07-24 23:58:34.165249] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
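The harness startup traced just above can be reproduced standalone. A minimal sketch, assuming rpc.py's -t timeout flag and the rpc_get_methods RPC as the readiness poll (the waitforlisten helper in the trace does the equivalent):

  # start the bdev_svc app with bdev_raid debug logging and a dedicated RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # block until the UNIX-domain RPC socket answers before issuing any bdev_* calls
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock -t 30 rpc_get_methods > /dev/null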
00:14:38.518 [2024-07-24 23:58:34.165384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.519 [2024-07-24 23:58:34.325865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.776 [2024-07-24 23:58:34.499671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.035 [2024-07-24 23:58:34.666930] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.293 23:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.293 23:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:39.293 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:39.552 [2024-07-24 23:58:35.283543] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:39.552 [2024-07-24 23:58:35.283832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:39.552 [2024-07-24 23:58:35.283860] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.552 [2024-07-24 23:58:35.283878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.552 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.811 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:39.811 "name": "Existed_Raid", 00:14:39.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.811 "strip_size_kb": 64, 00:14:39.811 "state": "configuring", 00:14:39.811 "raid_level": "concat", 00:14:39.811 "superblock": false, 00:14:39.811 "num_base_bdevs": 2, 00:14:39.811 "num_base_bdevs_discovered": 0, 00:14:39.811 "num_base_bdevs_operational": 2, 00:14:39.811 
"base_bdevs_list": [ 00:14:39.811 { 00:14:39.811 "name": "BaseBdev1", 00:14:39.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.811 "is_configured": false, 00:14:39.811 "data_offset": 0, 00:14:39.811 "data_size": 0 00:14:39.811 }, 00:14:39.811 { 00:14:39.811 "name": "BaseBdev2", 00:14:39.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.811 "is_configured": false, 00:14:39.811 "data_offset": 0, 00:14:39.811 "data_size": 0 00:14:39.811 } 00:14:39.811 ] 00:14:39.811 }' 00:14:39.811 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:39.811 23:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.070 23:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:40.329 [2024-07-24 23:58:36.075639] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:40.329 [2024-07-24 23:58:36.075688] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:40.329 23:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:40.588 [2024-07-24 23:58:36.319702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.588 [2024-07-24 23:58:36.319965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.588 [2024-07-24 23:58:36.319992] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.588 [2024-07-24 23:58:36.320010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.588 23:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:40.847 [2024-07-24 23:58:36.582809] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.847 BaseBdev1 00:14:40.847 23:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:40.847 23:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:40.847 23:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:40.847 23:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:40.847 23:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:40.847 23:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:40.847 23:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:41.119 23:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:41.377 [ 00:14:41.377 { 00:14:41.377 "name": "BaseBdev1", 00:14:41.377 "aliases": [ 00:14:41.377 "3889d782-bc65-4633-beae-b4e2cc2b7156" 00:14:41.377 ], 00:14:41.377 "product_name": "Malloc disk", 00:14:41.377 "block_size": 512, 
00:14:41.377 "num_blocks": 65536, 00:14:41.377 "uuid": "3889d782-bc65-4633-beae-b4e2cc2b7156", 00:14:41.377 "assigned_rate_limits": { 00:14:41.377 "rw_ios_per_sec": 0, 00:14:41.377 "rw_mbytes_per_sec": 0, 00:14:41.377 "r_mbytes_per_sec": 0, 00:14:41.377 "w_mbytes_per_sec": 0 00:14:41.377 }, 00:14:41.378 "claimed": true, 00:14:41.378 "claim_type": "exclusive_write", 00:14:41.378 "zoned": false, 00:14:41.378 "supported_io_types": { 00:14:41.378 "read": true, 00:14:41.378 "write": true, 00:14:41.378 "unmap": true, 00:14:41.378 "flush": true, 00:14:41.378 "reset": true, 00:14:41.378 "nvme_admin": false, 00:14:41.378 "nvme_io": false, 00:14:41.378 "nvme_io_md": false, 00:14:41.378 "write_zeroes": true, 00:14:41.378 "zcopy": true, 00:14:41.378 "get_zone_info": false, 00:14:41.378 "zone_management": false, 00:14:41.378 "zone_append": false, 00:14:41.378 "compare": false, 00:14:41.378 "compare_and_write": false, 00:14:41.378 "abort": true, 00:14:41.378 "seek_hole": false, 00:14:41.378 "seek_data": false, 00:14:41.378 "copy": true, 00:14:41.378 "nvme_iov_md": false 00:14:41.378 }, 00:14:41.378 "memory_domains": [ 00:14:41.378 { 00:14:41.378 "dma_device_id": "system", 00:14:41.378 "dma_device_type": 1 00:14:41.378 }, 00:14:41.378 { 00:14:41.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.378 "dma_device_type": 2 00:14:41.378 } 00:14:41.378 ], 00:14:41.378 "driver_specific": {} 00:14:41.378 } 00:14:41.378 ] 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.378 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.636 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.636 "name": "Existed_Raid", 00:14:41.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.636 "strip_size_kb": 64, 00:14:41.636 "state": "configuring", 00:14:41.636 "raid_level": "concat", 00:14:41.636 "superblock": false, 00:14:41.636 "num_base_bdevs": 2, 00:14:41.636 "num_base_bdevs_discovered": 1, 00:14:41.636 "num_base_bdevs_operational": 2, 00:14:41.636 "base_bdevs_list": [ 00:14:41.636 { 00:14:41.636 "name": 
"BaseBdev1", 00:14:41.636 "uuid": "3889d782-bc65-4633-beae-b4e2cc2b7156", 00:14:41.636 "is_configured": true, 00:14:41.636 "data_offset": 0, 00:14:41.636 "data_size": 65536 00:14:41.636 }, 00:14:41.636 { 00:14:41.636 "name": "BaseBdev2", 00:14:41.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.636 "is_configured": false, 00:14:41.636 "data_offset": 0, 00:14:41.636 "data_size": 0 00:14:41.636 } 00:14:41.636 ] 00:14:41.636 }' 00:14:41.636 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.636 23:58:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.894 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:42.153 [2024-07-24 23:58:37.827296] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.153 [2024-07-24 23:58:37.827571] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:14:42.153 23:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:42.412 [2024-07-24 23:58:38.039394] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.412 [2024-07-24 23:58:38.041909] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.412 [2024-07-24 23:58:38.042120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.412 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.671 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.671 "name": "Existed_Raid", 
00:14:42.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.671 "strip_size_kb": 64, 00:14:42.671 "state": "configuring", 00:14:42.671 "raid_level": "concat", 00:14:42.671 "superblock": false, 00:14:42.671 "num_base_bdevs": 2, 00:14:42.671 "num_base_bdevs_discovered": 1, 00:14:42.671 "num_base_bdevs_operational": 2, 00:14:42.671 "base_bdevs_list": [ 00:14:42.671 { 00:14:42.671 "name": "BaseBdev1", 00:14:42.671 "uuid": "3889d782-bc65-4633-beae-b4e2cc2b7156", 00:14:42.671 "is_configured": true, 00:14:42.671 "data_offset": 0, 00:14:42.671 "data_size": 65536 00:14:42.671 }, 00:14:42.671 { 00:14:42.671 "name": "BaseBdev2", 00:14:42.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.671 "is_configured": false, 00:14:42.671 "data_offset": 0, 00:14:42.671 "data_size": 0 00:14:42.671 } 00:14:42.671 ] 00:14:42.671 }' 00:14:42.671 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.671 23:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.929 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:43.188 [2024-07-24 23:58:38.928546] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.188 [2024-07-24 23:58:38.928599] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:14:43.188 [2024-07-24 23:58:38.928617] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:43.188 [2024-07-24 23:58:38.928729] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:14:43.188 [2024-07-24 23:58:38.929151] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:14:43.188 [2024-07-24 23:58:38.929172] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:14:43.188 [2024-07-24 23:58:38.929545] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.188 BaseBdev2 00:14:43.188 23:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:43.188 23:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:43.188 23:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:43.188 23:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:43.188 23:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:43.188 23:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:43.188 23:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.447 23:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.706 [ 00:14:43.706 { 00:14:43.706 "name": "BaseBdev2", 00:14:43.706 "aliases": [ 00:14:43.706 "ad5de259-423c-4c11-b250-7e2a7c81c5c2" 00:14:43.706 ], 00:14:43.706 "product_name": "Malloc disk", 00:14:43.706 "block_size": 512, 00:14:43.706 "num_blocks": 65536, 00:14:43.706 "uuid": "ad5de259-423c-4c11-b250-7e2a7c81c5c2", 
00:14:43.706 "assigned_rate_limits": { 00:14:43.706 "rw_ios_per_sec": 0, 00:14:43.706 "rw_mbytes_per_sec": 0, 00:14:43.706 "r_mbytes_per_sec": 0, 00:14:43.706 "w_mbytes_per_sec": 0 00:14:43.706 }, 00:14:43.706 "claimed": true, 00:14:43.706 "claim_type": "exclusive_write", 00:14:43.706 "zoned": false, 00:14:43.706 "supported_io_types": { 00:14:43.706 "read": true, 00:14:43.706 "write": true, 00:14:43.706 "unmap": true, 00:14:43.706 "flush": true, 00:14:43.706 "reset": true, 00:14:43.706 "nvme_admin": false, 00:14:43.706 "nvme_io": false, 00:14:43.706 "nvme_io_md": false, 00:14:43.706 "write_zeroes": true, 00:14:43.706 "zcopy": true, 00:14:43.706 "get_zone_info": false, 00:14:43.706 "zone_management": false, 00:14:43.706 "zone_append": false, 00:14:43.706 "compare": false, 00:14:43.706 "compare_and_write": false, 00:14:43.706 "abort": true, 00:14:43.706 "seek_hole": false, 00:14:43.706 "seek_data": false, 00:14:43.706 "copy": true, 00:14:43.706 "nvme_iov_md": false 00:14:43.706 }, 00:14:43.706 "memory_domains": [ 00:14:43.706 { 00:14:43.706 "dma_device_id": "system", 00:14:43.706 "dma_device_type": 1 00:14:43.706 }, 00:14:43.706 { 00:14:43.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.706 "dma_device_type": 2 00:14:43.706 } 00:14:43.706 ], 00:14:43.706 "driver_specific": {} 00:14:43.706 } 00:14:43.706 ] 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.706 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.994 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:43.994 "name": "Existed_Raid", 00:14:43.994 "uuid": "f7b9693e-0c6a-4dc5-a347-3ebd8476b2fe", 00:14:43.994 "strip_size_kb": 64, 00:14:43.994 "state": "online", 00:14:43.994 "raid_level": "concat", 00:14:43.994 "superblock": false, 00:14:43.994 "num_base_bdevs": 2, 00:14:43.994 "num_base_bdevs_discovered": 2, 00:14:43.994 
"num_base_bdevs_operational": 2, 00:14:43.994 "base_bdevs_list": [ 00:14:43.994 { 00:14:43.994 "name": "BaseBdev1", 00:14:43.994 "uuid": "3889d782-bc65-4633-beae-b4e2cc2b7156", 00:14:43.994 "is_configured": true, 00:14:43.994 "data_offset": 0, 00:14:43.994 "data_size": 65536 00:14:43.994 }, 00:14:43.994 { 00:14:43.994 "name": "BaseBdev2", 00:14:43.994 "uuid": "ad5de259-423c-4c11-b250-7e2a7c81c5c2", 00:14:43.994 "is_configured": true, 00:14:43.994 "data_offset": 0, 00:14:43.994 "data_size": 65536 00:14:43.994 } 00:14:43.994 ] 00:14:43.994 }' 00:14:43.994 23:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:43.994 23:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.253 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:44.253 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:44.253 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:44.253 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:44.253 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:44.253 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:44.253 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:44.253 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:44.512 [2024-07-24 23:58:40.233208] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.512 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:44.512 "name": "Existed_Raid", 00:14:44.512 "aliases": [ 00:14:44.512 "f7b9693e-0c6a-4dc5-a347-3ebd8476b2fe" 00:14:44.512 ], 00:14:44.512 "product_name": "Raid Volume", 00:14:44.512 "block_size": 512, 00:14:44.512 "num_blocks": 131072, 00:14:44.512 "uuid": "f7b9693e-0c6a-4dc5-a347-3ebd8476b2fe", 00:14:44.512 "assigned_rate_limits": { 00:14:44.512 "rw_ios_per_sec": 0, 00:14:44.512 "rw_mbytes_per_sec": 0, 00:14:44.512 "r_mbytes_per_sec": 0, 00:14:44.512 "w_mbytes_per_sec": 0 00:14:44.512 }, 00:14:44.512 "claimed": false, 00:14:44.512 "zoned": false, 00:14:44.512 "supported_io_types": { 00:14:44.512 "read": true, 00:14:44.512 "write": true, 00:14:44.512 "unmap": true, 00:14:44.512 "flush": true, 00:14:44.512 "reset": true, 00:14:44.512 "nvme_admin": false, 00:14:44.512 "nvme_io": false, 00:14:44.512 "nvme_io_md": false, 00:14:44.512 "write_zeroes": true, 00:14:44.512 "zcopy": false, 00:14:44.512 "get_zone_info": false, 00:14:44.512 "zone_management": false, 00:14:44.512 "zone_append": false, 00:14:44.512 "compare": false, 00:14:44.512 "compare_and_write": false, 00:14:44.512 "abort": false, 00:14:44.512 "seek_hole": false, 00:14:44.512 "seek_data": false, 00:14:44.512 "copy": false, 00:14:44.512 "nvme_iov_md": false 00:14:44.512 }, 00:14:44.512 "memory_domains": [ 00:14:44.512 { 00:14:44.512 "dma_device_id": "system", 00:14:44.512 "dma_device_type": 1 00:14:44.512 }, 00:14:44.512 { 00:14:44.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.512 "dma_device_type": 2 00:14:44.512 }, 00:14:44.512 { 00:14:44.512 "dma_device_id": "system", 00:14:44.512 "dma_device_type": 1 00:14:44.512 }, 
00:14:44.512 { 00:14:44.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.512 "dma_device_type": 2 00:14:44.512 } 00:14:44.512 ], 00:14:44.512 "driver_specific": { 00:14:44.512 "raid": { 00:14:44.512 "uuid": "f7b9693e-0c6a-4dc5-a347-3ebd8476b2fe", 00:14:44.512 "strip_size_kb": 64, 00:14:44.512 "state": "online", 00:14:44.512 "raid_level": "concat", 00:14:44.512 "superblock": false, 00:14:44.512 "num_base_bdevs": 2, 00:14:44.512 "num_base_bdevs_discovered": 2, 00:14:44.512 "num_base_bdevs_operational": 2, 00:14:44.512 "base_bdevs_list": [ 00:14:44.512 { 00:14:44.512 "name": "BaseBdev1", 00:14:44.512 "uuid": "3889d782-bc65-4633-beae-b4e2cc2b7156", 00:14:44.512 "is_configured": true, 00:14:44.512 "data_offset": 0, 00:14:44.512 "data_size": 65536 00:14:44.512 }, 00:14:44.512 { 00:14:44.512 "name": "BaseBdev2", 00:14:44.512 "uuid": "ad5de259-423c-4c11-b250-7e2a7c81c5c2", 00:14:44.512 "is_configured": true, 00:14:44.512 "data_offset": 0, 00:14:44.512 "data_size": 65536 00:14:44.512 } 00:14:44.512 ] 00:14:44.512 } 00:14:44.512 } 00:14:44.512 }' 00:14:44.512 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.512 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:44.512 BaseBdev2' 00:14:44.512 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.512 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:44.512 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.772 "name": "BaseBdev1", 00:14:44.772 "aliases": [ 00:14:44.772 "3889d782-bc65-4633-beae-b4e2cc2b7156" 00:14:44.772 ], 00:14:44.772 "product_name": "Malloc disk", 00:14:44.772 "block_size": 512, 00:14:44.772 "num_blocks": 65536, 00:14:44.772 "uuid": "3889d782-bc65-4633-beae-b4e2cc2b7156", 00:14:44.772 "assigned_rate_limits": { 00:14:44.772 "rw_ios_per_sec": 0, 00:14:44.772 "rw_mbytes_per_sec": 0, 00:14:44.772 "r_mbytes_per_sec": 0, 00:14:44.772 "w_mbytes_per_sec": 0 00:14:44.772 }, 00:14:44.772 "claimed": true, 00:14:44.772 "claim_type": "exclusive_write", 00:14:44.772 "zoned": false, 00:14:44.772 "supported_io_types": { 00:14:44.772 "read": true, 00:14:44.772 "write": true, 00:14:44.772 "unmap": true, 00:14:44.772 "flush": true, 00:14:44.772 "reset": true, 00:14:44.772 "nvme_admin": false, 00:14:44.772 "nvme_io": false, 00:14:44.772 "nvme_io_md": false, 00:14:44.772 "write_zeroes": true, 00:14:44.772 "zcopy": true, 00:14:44.772 "get_zone_info": false, 00:14:44.772 "zone_management": false, 00:14:44.772 "zone_append": false, 00:14:44.772 "compare": false, 00:14:44.772 "compare_and_write": false, 00:14:44.772 "abort": true, 00:14:44.772 "seek_hole": false, 00:14:44.772 "seek_data": false, 00:14:44.772 "copy": true, 00:14:44.772 "nvme_iov_md": false 00:14:44.772 }, 00:14:44.772 "memory_domains": [ 00:14:44.772 { 00:14:44.772 "dma_device_id": "system", 00:14:44.772 "dma_device_type": 1 00:14:44.772 }, 00:14:44.772 { 00:14:44.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.772 "dma_device_type": 2 00:14:44.772 } 00:14:44.772 ], 00:14:44.772 "driver_specific": {} 00:14:44.772 }' 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:44.772 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:45.031 "name": "BaseBdev2", 00:14:45.031 "aliases": [ 00:14:45.031 "ad5de259-423c-4c11-b250-7e2a7c81c5c2" 00:14:45.031 ], 00:14:45.031 "product_name": "Malloc disk", 00:14:45.031 "block_size": 512, 00:14:45.031 "num_blocks": 65536, 00:14:45.031 "uuid": "ad5de259-423c-4c11-b250-7e2a7c81c5c2", 00:14:45.031 "assigned_rate_limits": { 00:14:45.031 "rw_ios_per_sec": 0, 00:14:45.031 "rw_mbytes_per_sec": 0, 00:14:45.031 "r_mbytes_per_sec": 0, 00:14:45.031 "w_mbytes_per_sec": 0 00:14:45.031 }, 00:14:45.031 "claimed": true, 00:14:45.031 "claim_type": "exclusive_write", 00:14:45.031 "zoned": false, 00:14:45.031 "supported_io_types": { 00:14:45.031 "read": true, 00:14:45.031 "write": true, 00:14:45.031 "unmap": true, 00:14:45.031 "flush": true, 00:14:45.031 "reset": true, 00:14:45.031 "nvme_admin": false, 00:14:45.031 "nvme_io": false, 00:14:45.031 "nvme_io_md": false, 00:14:45.031 "write_zeroes": true, 00:14:45.031 "zcopy": true, 00:14:45.031 "get_zone_info": false, 00:14:45.031 "zone_management": false, 00:14:45.031 "zone_append": false, 00:14:45.031 "compare": false, 00:14:45.031 "compare_and_write": false, 00:14:45.031 "abort": true, 00:14:45.031 "seek_hole": false, 00:14:45.031 "seek_data": false, 00:14:45.031 "copy": true, 00:14:45.031 "nvme_iov_md": false 00:14:45.031 }, 00:14:45.031 "memory_domains": [ 00:14:45.031 { 00:14:45.031 "dma_device_id": "system", 00:14:45.031 "dma_device_type": 1 00:14:45.031 }, 00:14:45.031 { 00:14:45.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.031 "dma_device_type": 2 00:14:45.031 } 00:14:45.031 ], 00:14:45.031 "driver_specific": {} 00:14:45.031 }' 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:45.031 23:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:45.290 [2024-07-24 23:58:41.097275] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.290 [2024-07-24 23:58:41.097314] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.290 [2024-07-24 23:58:41.097382] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.549 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:45.549 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:45.549 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:45.549 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:45.549 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:45.549 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:45.550 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.550 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:45.550 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:45.550 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.550 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:45.550 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.550 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.550 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.550 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.550 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.550 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.808 23:58:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.808 "name": "Existed_Raid", 00:14:45.808 "uuid": "f7b9693e-0c6a-4dc5-a347-3ebd8476b2fe", 00:14:45.808 "strip_size_kb": 64, 00:14:45.808 "state": "offline", 00:14:45.808 "raid_level": "concat", 00:14:45.808 "superblock": false, 00:14:45.808 "num_base_bdevs": 2, 00:14:45.808 "num_base_bdevs_discovered": 1, 00:14:45.808 "num_base_bdevs_operational": 1, 00:14:45.808 "base_bdevs_list": [ 00:14:45.808 { 00:14:45.808 "name": null, 00:14:45.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.808 "is_configured": false, 00:14:45.808 "data_offset": 0, 00:14:45.809 "data_size": 65536 00:14:45.809 }, 00:14:45.809 { 00:14:45.809 "name": "BaseBdev2", 00:14:45.809 "uuid": "ad5de259-423c-4c11-b250-7e2a7c81c5c2", 00:14:45.809 "is_configured": true, 00:14:45.809 "data_offset": 0, 00:14:45.809 "data_size": 65536 00:14:45.809 } 00:14:45.809 ] 00:14:45.809 }' 00:14:45.809 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.809 23:58:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.067 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:46.067 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:46.067 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.067 23:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:46.326 23:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:46.326 23:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.326 23:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:46.585 [2024-07-24 23:58:42.233333] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.585 [2024-07-24 23:58:42.233397] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:14:46.585 23:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:46.585 23:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:46.585 23:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:46.585 23:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 77189 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 77189 ']' 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 77189 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@955 -- # uname 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77189 00:14:46.844 killing process with pid 77189 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77189' 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 77189 00:14:46.844 [2024-07-24 23:58:42.639640] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.844 23:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 77189 00:14:46.844 [2024-07-24 23:58:42.639745] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:48.221 ************************************ 00:14:48.221 END TEST raid_state_function_test 00:14:48.221 ************************************ 00:14:48.221 00:14:48.221 real 0m9.594s 00:14:48.221 user 0m15.832s 00:14:48.221 sys 0m1.454s 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.221 23:58:43 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:48.221 23:58:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:48.221 23:58:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.221 23:58:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:48.221 ************************************ 00:14:48.221 START TEST raid_state_function_test_sb 00:14:48.221 ************************************ 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:48.221 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:48.221 
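raid_state_function_test_sb, which starts here, repeats the same state-machine walk with on-disk superblocks enabled; the only change to the create call is the -s flag, shown verbatim in the trace below (the rpc variable is the same brevity alias as in the earlier sketch):

  $rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

With -s, space at the start of each 65536-block malloc base bdev is reserved for the superblock, which is why the JSON dumps below report "superblock": true and, for configured base bdevs, "data_offset": 2048 with "data_size": 63488 (raid blockcnt 126976 = 2 x 63488) instead of 0 and 65536.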
23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:48.222 Process raid pid: 77530 00:14:48.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=77530 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 77530' 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 77530 /var/tmp/spdk-raid.sock 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77530 ']' 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.222 23:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.222 [2024-07-24 23:58:43.811667] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:14:48.222 [2024-07-24 23:58:43.812086] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.222 [2024-07-24 23:58:43.977951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.480 [2024-07-24 23:58:44.153183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.480 [2024-07-24 23:58:44.317178] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.048 23:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.048 23:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:49.048 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:49.307 [2024-07-24 23:58:44.941284] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.307 [2024-07-24 23:58:44.941359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.307 [2024-07-24 23:58:44.941375] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.307 [2024-07-24 23:58:44.941389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.307 23:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.307 23:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.307 "name": "Existed_Raid", 00:14:49.307 "uuid": "08be6c57-be88-40b0-972b-d3926573492d", 00:14:49.307 "strip_size_kb": 64, 00:14:49.307 "state": "configuring", 00:14:49.307 "raid_level": "concat", 00:14:49.307 "superblock": true, 00:14:49.307 "num_base_bdevs": 2, 00:14:49.307 "num_base_bdevs_discovered": 0, 00:14:49.307 
"num_base_bdevs_operational": 2, 00:14:49.307 "base_bdevs_list": [ 00:14:49.307 { 00:14:49.307 "name": "BaseBdev1", 00:14:49.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.307 "is_configured": false, 00:14:49.307 "data_offset": 0, 00:14:49.307 "data_size": 0 00:14:49.307 }, 00:14:49.307 { 00:14:49.307 "name": "BaseBdev2", 00:14:49.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.307 "is_configured": false, 00:14:49.307 "data_offset": 0, 00:14:49.307 "data_size": 0 00:14:49.307 } 00:14:49.307 ] 00:14:49.307 }' 00:14:49.307 23:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.307 23:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.874 23:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:49.874 [2024-07-24 23:58:45.697437] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.874 [2024-07-24 23:58:45.697484] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:49.874 23:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:50.132 [2024-07-24 23:58:45.969533] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.132 [2024-07-24 23:58:45.969615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.132 [2024-07-24 23:58:45.969630] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.132 [2024-07-24 23:58:45.969645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.132 23:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:50.390 [2024-07-24 23:58:46.232938] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.390 BaseBdev1 00:14:50.390 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:50.390 23:58:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:50.390 23:58:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:50.390 23:58:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:50.390 23:58:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:50.390 23:58:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:50.390 23:58:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:50.649 23:58:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:50.907 [ 00:14:50.907 { 00:14:50.907 "name": "BaseBdev1", 00:14:50.907 "aliases": [ 00:14:50.907 "0254686a-ba13-44ca-8e52-a9b71fd40e3a" 
00:14:50.907 ], 00:14:50.907 "product_name": "Malloc disk", 00:14:50.907 "block_size": 512, 00:14:50.907 "num_blocks": 65536, 00:14:50.907 "uuid": "0254686a-ba13-44ca-8e52-a9b71fd40e3a", 00:14:50.907 "assigned_rate_limits": { 00:14:50.907 "rw_ios_per_sec": 0, 00:14:50.907 "rw_mbytes_per_sec": 0, 00:14:50.907 "r_mbytes_per_sec": 0, 00:14:50.907 "w_mbytes_per_sec": 0 00:14:50.907 }, 00:14:50.907 "claimed": true, 00:14:50.907 "claim_type": "exclusive_write", 00:14:50.907 "zoned": false, 00:14:50.907 "supported_io_types": { 00:14:50.907 "read": true, 00:14:50.907 "write": true, 00:14:50.907 "unmap": true, 00:14:50.907 "flush": true, 00:14:50.907 "reset": true, 00:14:50.907 "nvme_admin": false, 00:14:50.907 "nvme_io": false, 00:14:50.907 "nvme_io_md": false, 00:14:50.907 "write_zeroes": true, 00:14:50.907 "zcopy": true, 00:14:50.907 "get_zone_info": false, 00:14:50.907 "zone_management": false, 00:14:50.907 "zone_append": false, 00:14:50.907 "compare": false, 00:14:50.907 "compare_and_write": false, 00:14:50.907 "abort": true, 00:14:50.907 "seek_hole": false, 00:14:50.907 "seek_data": false, 00:14:50.907 "copy": true, 00:14:50.907 "nvme_iov_md": false 00:14:50.907 }, 00:14:50.907 "memory_domains": [ 00:14:50.907 { 00:14:50.907 "dma_device_id": "system", 00:14:50.907 "dma_device_type": 1 00:14:50.908 }, 00:14:50.908 { 00:14:50.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.908 "dma_device_type": 2 00:14:50.908 } 00:14:50.908 ], 00:14:50.908 "driver_specific": {} 00:14:50.908 } 00:14:50.908 ] 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.908 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.166 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.166 "name": "Existed_Raid", 00:14:51.166 "uuid": "53ef96af-5298-4292-baa6-779141363051", 00:14:51.166 "strip_size_kb": 64, 00:14:51.166 "state": "configuring", 00:14:51.166 "raid_level": "concat", 00:14:51.166 "superblock": true, 00:14:51.166 "num_base_bdevs": 2, 00:14:51.166 
"num_base_bdevs_discovered": 1, 00:14:51.166 "num_base_bdevs_operational": 2, 00:14:51.166 "base_bdevs_list": [ 00:14:51.166 { 00:14:51.166 "name": "BaseBdev1", 00:14:51.166 "uuid": "0254686a-ba13-44ca-8e52-a9b71fd40e3a", 00:14:51.166 "is_configured": true, 00:14:51.166 "data_offset": 2048, 00:14:51.166 "data_size": 63488 00:14:51.166 }, 00:14:51.166 { 00:14:51.166 "name": "BaseBdev2", 00:14:51.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.166 "is_configured": false, 00:14:51.166 "data_offset": 0, 00:14:51.166 "data_size": 0 00:14:51.166 } 00:14:51.166 ] 00:14:51.166 }' 00:14:51.166 23:58:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.166 23:58:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.427 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:51.734 [2024-07-24 23:58:47.477343] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.734 [2024-07-24 23:58:47.477405] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:14:51.734 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:51.992 [2024-07-24 23:58:47.693552] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.992 [2024-07-24 23:58:47.695788] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.992 [2024-07-24 23:58:47.695868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.992 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:51.992 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:51.992 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:51.992 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:51.992 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:51.992 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:51.993 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.993 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:51.993 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.993 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.993 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.993 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.993 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.993 23:58:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.251 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.251 "name": "Existed_Raid", 00:14:52.251 "uuid": "293a5028-9f6b-4a78-94ce-cd2c5e23ed3d", 00:14:52.251 "strip_size_kb": 64, 00:14:52.251 "state": "configuring", 00:14:52.251 "raid_level": "concat", 00:14:52.251 "superblock": true, 00:14:52.251 "num_base_bdevs": 2, 00:14:52.251 "num_base_bdevs_discovered": 1, 00:14:52.251 "num_base_bdevs_operational": 2, 00:14:52.251 "base_bdevs_list": [ 00:14:52.251 { 00:14:52.251 "name": "BaseBdev1", 00:14:52.251 "uuid": "0254686a-ba13-44ca-8e52-a9b71fd40e3a", 00:14:52.251 "is_configured": true, 00:14:52.251 "data_offset": 2048, 00:14:52.251 "data_size": 63488 00:14:52.251 }, 00:14:52.251 { 00:14:52.251 "name": "BaseBdev2", 00:14:52.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.251 "is_configured": false, 00:14:52.251 "data_offset": 0, 00:14:52.251 "data_size": 0 00:14:52.251 } 00:14:52.251 ] 00:14:52.251 }' 00:14:52.251 23:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.251 23:58:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.509 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:52.768 [2024-07-24 23:58:48.455260] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.768 [2024-07-24 23:58:48.455808] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:14:52.768 [2024-07-24 23:58:48.456005] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:52.768 [2024-07-24 23:58:48.456174] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:14:52.768 [2024-07-24 23:58:48.456576] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:14:52.768 BaseBdev2 00:14:52.768 [2024-07-24 23:58:48.456743] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:14:52.768 [2024-07-24 23:58:48.456970] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.768 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:52.768 23:58:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:52.768 23:58:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.768 23:58:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:52.768 23:58:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.768 23:58:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.768 23:58:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:53.026 23:58:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.285 [ 00:14:53.285 { 00:14:53.285 "name": "BaseBdev2", 00:14:53.285 
"aliases": [ 00:14:53.285 "9a90e24e-b501-42f3-842c-7e4ebbfedb4d" 00:14:53.285 ], 00:14:53.285 "product_name": "Malloc disk", 00:14:53.285 "block_size": 512, 00:14:53.285 "num_blocks": 65536, 00:14:53.285 "uuid": "9a90e24e-b501-42f3-842c-7e4ebbfedb4d", 00:14:53.285 "assigned_rate_limits": { 00:14:53.285 "rw_ios_per_sec": 0, 00:14:53.285 "rw_mbytes_per_sec": 0, 00:14:53.285 "r_mbytes_per_sec": 0, 00:14:53.285 "w_mbytes_per_sec": 0 00:14:53.285 }, 00:14:53.285 "claimed": true, 00:14:53.285 "claim_type": "exclusive_write", 00:14:53.285 "zoned": false, 00:14:53.285 "supported_io_types": { 00:14:53.285 "read": true, 00:14:53.285 "write": true, 00:14:53.285 "unmap": true, 00:14:53.285 "flush": true, 00:14:53.285 "reset": true, 00:14:53.285 "nvme_admin": false, 00:14:53.285 "nvme_io": false, 00:14:53.285 "nvme_io_md": false, 00:14:53.285 "write_zeroes": true, 00:14:53.285 "zcopy": true, 00:14:53.285 "get_zone_info": false, 00:14:53.285 "zone_management": false, 00:14:53.285 "zone_append": false, 00:14:53.285 "compare": false, 00:14:53.285 "compare_and_write": false, 00:14:53.285 "abort": true, 00:14:53.285 "seek_hole": false, 00:14:53.285 "seek_data": false, 00:14:53.285 "copy": true, 00:14:53.285 "nvme_iov_md": false 00:14:53.285 }, 00:14:53.285 "memory_domains": [ 00:14:53.285 { 00:14:53.285 "dma_device_id": "system", 00:14:53.285 "dma_device_type": 1 00:14:53.285 }, 00:14:53.285 { 00:14:53.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.285 "dma_device_type": 2 00:14:53.285 } 00:14:53.285 ], 00:14:53.285 "driver_specific": {} 00:14:53.285 } 00:14:53.285 ] 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.285 23:58:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.544 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:53.544 "name": "Existed_Raid", 
00:14:53.544 "uuid": "293a5028-9f6b-4a78-94ce-cd2c5e23ed3d", 00:14:53.544 "strip_size_kb": 64, 00:14:53.544 "state": "online", 00:14:53.544 "raid_level": "concat", 00:14:53.544 "superblock": true, 00:14:53.544 "num_base_bdevs": 2, 00:14:53.544 "num_base_bdevs_discovered": 2, 00:14:53.544 "num_base_bdevs_operational": 2, 00:14:53.544 "base_bdevs_list": [ 00:14:53.544 { 00:14:53.544 "name": "BaseBdev1", 00:14:53.544 "uuid": "0254686a-ba13-44ca-8e52-a9b71fd40e3a", 00:14:53.544 "is_configured": true, 00:14:53.544 "data_offset": 2048, 00:14:53.544 "data_size": 63488 00:14:53.544 }, 00:14:53.544 { 00:14:53.544 "name": "BaseBdev2", 00:14:53.544 "uuid": "9a90e24e-b501-42f3-842c-7e4ebbfedb4d", 00:14:53.544 "is_configured": true, 00:14:53.544 "data_offset": 2048, 00:14:53.544 "data_size": 63488 00:14:53.544 } 00:14:53.544 ] 00:14:53.544 }' 00:14:53.544 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:53.544 23:58:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.803 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.803 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:53.803 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:53.803 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:53.803 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:53.803 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:53.803 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:53.803 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:54.061 [2024-07-24 23:58:49.719999] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.061 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:54.061 "name": "Existed_Raid", 00:14:54.062 "aliases": [ 00:14:54.062 "293a5028-9f6b-4a78-94ce-cd2c5e23ed3d" 00:14:54.062 ], 00:14:54.062 "product_name": "Raid Volume", 00:14:54.062 "block_size": 512, 00:14:54.062 "num_blocks": 126976, 00:14:54.062 "uuid": "293a5028-9f6b-4a78-94ce-cd2c5e23ed3d", 00:14:54.062 "assigned_rate_limits": { 00:14:54.062 "rw_ios_per_sec": 0, 00:14:54.062 "rw_mbytes_per_sec": 0, 00:14:54.062 "r_mbytes_per_sec": 0, 00:14:54.062 "w_mbytes_per_sec": 0 00:14:54.062 }, 00:14:54.062 "claimed": false, 00:14:54.062 "zoned": false, 00:14:54.062 "supported_io_types": { 00:14:54.062 "read": true, 00:14:54.062 "write": true, 00:14:54.062 "unmap": true, 00:14:54.062 "flush": true, 00:14:54.062 "reset": true, 00:14:54.062 "nvme_admin": false, 00:14:54.062 "nvme_io": false, 00:14:54.062 "nvme_io_md": false, 00:14:54.062 "write_zeroes": true, 00:14:54.062 "zcopy": false, 00:14:54.062 "get_zone_info": false, 00:14:54.062 "zone_management": false, 00:14:54.062 "zone_append": false, 00:14:54.062 "compare": false, 00:14:54.062 "compare_and_write": false, 00:14:54.062 "abort": false, 00:14:54.062 "seek_hole": false, 00:14:54.062 "seek_data": false, 00:14:54.062 "copy": false, 00:14:54.062 "nvme_iov_md": false 00:14:54.062 }, 00:14:54.062 "memory_domains": [ 
00:14:54.062 { 00:14:54.062 "dma_device_id": "system", 00:14:54.062 "dma_device_type": 1 00:14:54.062 }, 00:14:54.062 { 00:14:54.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.062 "dma_device_type": 2 00:14:54.062 }, 00:14:54.062 { 00:14:54.062 "dma_device_id": "system", 00:14:54.062 "dma_device_type": 1 00:14:54.062 }, 00:14:54.062 { 00:14:54.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.062 "dma_device_type": 2 00:14:54.062 } 00:14:54.062 ], 00:14:54.062 "driver_specific": { 00:14:54.062 "raid": { 00:14:54.062 "uuid": "293a5028-9f6b-4a78-94ce-cd2c5e23ed3d", 00:14:54.062 "strip_size_kb": 64, 00:14:54.062 "state": "online", 00:14:54.062 "raid_level": "concat", 00:14:54.062 "superblock": true, 00:14:54.062 "num_base_bdevs": 2, 00:14:54.062 "num_base_bdevs_discovered": 2, 00:14:54.062 "num_base_bdevs_operational": 2, 00:14:54.062 "base_bdevs_list": [ 00:14:54.062 { 00:14:54.062 "name": "BaseBdev1", 00:14:54.062 "uuid": "0254686a-ba13-44ca-8e52-a9b71fd40e3a", 00:14:54.062 "is_configured": true, 00:14:54.062 "data_offset": 2048, 00:14:54.062 "data_size": 63488 00:14:54.062 }, 00:14:54.062 { 00:14:54.062 "name": "BaseBdev2", 00:14:54.062 "uuid": "9a90e24e-b501-42f3-842c-7e4ebbfedb4d", 00:14:54.062 "is_configured": true, 00:14:54.062 "data_offset": 2048, 00:14:54.062 "data_size": 63488 00:14:54.062 } 00:14:54.062 ] 00:14:54.062 } 00:14:54.062 } 00:14:54.062 }' 00:14:54.062 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:54.062 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:54.062 BaseBdev2' 00:14:54.062 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:54.062 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:54.062 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:54.321 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:54.321 "name": "BaseBdev1", 00:14:54.321 "aliases": [ 00:14:54.321 "0254686a-ba13-44ca-8e52-a9b71fd40e3a" 00:14:54.321 ], 00:14:54.321 "product_name": "Malloc disk", 00:14:54.321 "block_size": 512, 00:14:54.321 "num_blocks": 65536, 00:14:54.321 "uuid": "0254686a-ba13-44ca-8e52-a9b71fd40e3a", 00:14:54.321 "assigned_rate_limits": { 00:14:54.321 "rw_ios_per_sec": 0, 00:14:54.321 "rw_mbytes_per_sec": 0, 00:14:54.321 "r_mbytes_per_sec": 0, 00:14:54.321 "w_mbytes_per_sec": 0 00:14:54.321 }, 00:14:54.321 "claimed": true, 00:14:54.321 "claim_type": "exclusive_write", 00:14:54.321 "zoned": false, 00:14:54.321 "supported_io_types": { 00:14:54.321 "read": true, 00:14:54.321 "write": true, 00:14:54.321 "unmap": true, 00:14:54.321 "flush": true, 00:14:54.321 "reset": true, 00:14:54.321 "nvme_admin": false, 00:14:54.321 "nvme_io": false, 00:14:54.321 "nvme_io_md": false, 00:14:54.321 "write_zeroes": true, 00:14:54.321 "zcopy": true, 00:14:54.321 "get_zone_info": false, 00:14:54.321 "zone_management": false, 00:14:54.321 "zone_append": false, 00:14:54.321 "compare": false, 00:14:54.321 "compare_and_write": false, 00:14:54.321 "abort": true, 00:14:54.321 "seek_hole": false, 00:14:54.321 "seek_data": false, 00:14:54.321 "copy": true, 00:14:54.321 "nvme_iov_md": false 00:14:54.321 }, 00:14:54.321 "memory_domains": [ 
00:14:54.321 { 00:14:54.321 "dma_device_id": "system", 00:14:54.321 "dma_device_type": 1 00:14:54.321 }, 00:14:54.321 { 00:14:54.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.321 "dma_device_type": 2 00:14:54.321 } 00:14:54.321 ], 00:14:54.321 "driver_specific": {} 00:14:54.321 }' 00:14:54.321 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:54.321 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:54.321 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:54.321 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:54.321 23:58:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:54.321 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:54.321 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.321 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.321 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:54.321 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.321 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.321 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:54.321 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:54.322 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:54.322 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:54.581 "name": "BaseBdev2", 00:14:54.581 "aliases": [ 00:14:54.581 "9a90e24e-b501-42f3-842c-7e4ebbfedb4d" 00:14:54.581 ], 00:14:54.581 "product_name": "Malloc disk", 00:14:54.581 "block_size": 512, 00:14:54.581 "num_blocks": 65536, 00:14:54.581 "uuid": "9a90e24e-b501-42f3-842c-7e4ebbfedb4d", 00:14:54.581 "assigned_rate_limits": { 00:14:54.581 "rw_ios_per_sec": 0, 00:14:54.581 "rw_mbytes_per_sec": 0, 00:14:54.581 "r_mbytes_per_sec": 0, 00:14:54.581 "w_mbytes_per_sec": 0 00:14:54.581 }, 00:14:54.581 "claimed": true, 00:14:54.581 "claim_type": "exclusive_write", 00:14:54.581 "zoned": false, 00:14:54.581 "supported_io_types": { 00:14:54.581 "read": true, 00:14:54.581 "write": true, 00:14:54.581 "unmap": true, 00:14:54.581 "flush": true, 00:14:54.581 "reset": true, 00:14:54.581 "nvme_admin": false, 00:14:54.581 "nvme_io": false, 00:14:54.581 "nvme_io_md": false, 00:14:54.581 "write_zeroes": true, 00:14:54.581 "zcopy": true, 00:14:54.581 "get_zone_info": false, 00:14:54.581 "zone_management": false, 00:14:54.581 "zone_append": false, 00:14:54.581 "compare": false, 00:14:54.581 "compare_and_write": false, 00:14:54.581 "abort": true, 00:14:54.581 "seek_hole": false, 00:14:54.581 "seek_data": false, 00:14:54.581 "copy": true, 00:14:54.581 "nvme_iov_md": false 00:14:54.581 }, 00:14:54.581 "memory_domains": [ 00:14:54.581 { 00:14:54.581 "dma_device_id": "system", 00:14:54.581 "dma_device_type": 1 00:14:54.581 }, 00:14:54.581 { 00:14:54.581 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:54.581 "dma_device_type": 2 00:14:54.581 } 00:14:54.581 ], 00:14:54.581 "driver_specific": {} 00:14:54.581 }' 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:54.581 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:54.840 [2024-07-24 23:58:50.592104] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:54.840 [2024-07-24 23:58:50.592144] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.840 [2024-07-24 23:58:50.592201] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.840 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.099 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.099 "name": "Existed_Raid", 00:14:55.099 "uuid": "293a5028-9f6b-4a78-94ce-cd2c5e23ed3d", 00:14:55.099 "strip_size_kb": 64, 00:14:55.099 "state": "offline", 00:14:55.099 "raid_level": "concat", 00:14:55.099 "superblock": true, 00:14:55.099 "num_base_bdevs": 2, 00:14:55.099 "num_base_bdevs_discovered": 1, 00:14:55.099 "num_base_bdevs_operational": 1, 00:14:55.099 "base_bdevs_list": [ 00:14:55.099 { 00:14:55.099 "name": null, 00:14:55.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.099 "is_configured": false, 00:14:55.099 "data_offset": 2048, 00:14:55.099 "data_size": 63488 00:14:55.099 }, 00:14:55.099 { 00:14:55.099 "name": "BaseBdev2", 00:14:55.099 "uuid": "9a90e24e-b501-42f3-842c-7e4ebbfedb4d", 00:14:55.099 "is_configured": true, 00:14:55.099 "data_offset": 2048, 00:14:55.099 "data_size": 63488 00:14:55.099 } 00:14:55.099 ] 00:14:55.099 }' 00:14:55.099 23:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.099 23:58:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.667 23:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:55.667 23:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:55.667 23:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.667 23:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:55.926 23:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:55.926 23:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:55.926 23:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:55.926 [2024-07-24 23:58:51.786607] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.926 [2024-07-24 23:58:51.786676] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:14:56.185 23:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:56.185 23:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:56.185 23:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.185 23:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 77530 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77530 ']' 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77530 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77530 00:14:56.444 killing process with pid 77530 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77530' 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77530 00:14:56.444 [2024-07-24 23:58:52.140771] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.444 23:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77530 00:14:56.444 [2024-07-24 23:58:52.140963] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.382 23:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:57.382 00:14:57.382 real 0m9.445s 00:14:57.382 user 0m15.608s 00:14:57.382 sys 0m1.415s 00:14:57.382 23:58:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:57.382 ************************************ 00:14:57.382 END TEST raid_state_function_test_sb 00:14:57.382 ************************************ 00:14:57.382 23:58:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.382 23:58:53 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:57.382 23:58:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:57.382 23:58:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.382 23:58:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.641 ************************************ 00:14:57.641 START TEST raid_superblock_test 00:14:57.641 ************************************ 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=77864 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 77864 /var/tmp/spdk-raid.sock 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77864 ']' 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:57.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.641 23:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.641 [2024-07-24 23:58:53.322049] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:14:57.641 [2024-07-24 23:58:53.322232] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77864 ] 00:14:57.641 [2024-07-24 23:58:53.494491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.900 [2024-07-24 23:58:53.681905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.158 [2024-07-24 23:58:53.851420] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.417 23:58:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.417 23:58:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:58.417 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:14:58.417 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:58.417 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:14:58.417 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:14:58.417 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:58.417 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:58.417 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:58.417 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:58.417 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:58.676 malloc1 00:14:58.935 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:58.935 [2024-07-24 23:58:54.742504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:58.935 [2024-07-24 23:58:54.742618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.935 [2024-07-24 23:58:54.742660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:14:58.935 [2024-07-24 23:58:54.742677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.935 [2024-07-24 23:58:54.745902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.935 [2024-07-24 23:58:54.745948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:58.935 pt1 00:14:58.935 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:58.935 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:58.935 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:14:58.935 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:14:58.935 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:58.935 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:14:58.935 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:58.935 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:58.935 23:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:59.194 malloc2 00:14:59.194 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:59.453 [2024-07-24 23:58:55.274437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:59.453 [2024-07-24 23:58:55.274527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.453 [2024-07-24 23:58:55.274558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:14:59.453 [2024-07-24 23:58:55.274573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.453 [2024-07-24 23:58:55.277015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.453 [2024-07-24 23:58:55.277054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:59.453 pt2 00:14:59.453 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:59.454 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:59.454 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:14:59.712 [2024-07-24 23:58:55.486554] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:59.712 [2024-07-24 23:58:55.488666] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:59.712 [2024-07-24 23:58:55.488924] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:14:59.712 [2024-07-24 23:58:55.488943] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:59.712 [2024-07-24 23:58:55.489112] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:14:59.713 [2024-07-24 23:58:55.489499] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:14:59.713 [2024-07-24 23:58:55.489521] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:14:59.713 [2024-07-24 23:58:55.489716] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.713 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.972 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.972 "name": "raid_bdev1", 00:14:59.972 "uuid": "b8d60711-c021-4e06-877c-0199b34ce832", 00:14:59.972 "strip_size_kb": 64, 00:14:59.972 "state": "online", 00:14:59.972 "raid_level": "concat", 00:14:59.972 "superblock": true, 00:14:59.972 "num_base_bdevs": 2, 00:14:59.972 "num_base_bdevs_discovered": 2, 00:14:59.972 "num_base_bdevs_operational": 2, 00:14:59.972 "base_bdevs_list": [ 00:14:59.972 { 00:14:59.972 "name": "pt1", 00:14:59.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.972 "is_configured": true, 00:14:59.972 "data_offset": 2048, 00:14:59.972 "data_size": 63488 00:14:59.972 }, 00:14:59.972 { 00:14:59.972 "name": "pt2", 00:14:59.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.972 "is_configured": true, 00:14:59.972 "data_offset": 2048, 00:14:59.972 "data_size": 63488 00:14:59.972 } 00:14:59.972 ] 00:14:59.972 }' 00:14:59.972 23:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.972 23:58:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.539 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:15:00.539 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:00.539 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:00.539 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:00.539 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:00.539 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:00.539 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:00.539 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:00.539 [2024-07-24 23:58:56.303078] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.539 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:00.539 "name": "raid_bdev1", 00:15:00.539 "aliases": [ 00:15:00.539 "b8d60711-c021-4e06-877c-0199b34ce832" 00:15:00.539 ], 00:15:00.539 "product_name": "Raid Volume", 00:15:00.539 "block_size": 512, 00:15:00.539 "num_blocks": 126976, 00:15:00.539 "uuid": "b8d60711-c021-4e06-877c-0199b34ce832", 00:15:00.539 "assigned_rate_limits": { 00:15:00.539 "rw_ios_per_sec": 0, 00:15:00.539 "rw_mbytes_per_sec": 0, 00:15:00.539 "r_mbytes_per_sec": 0, 00:15:00.539 "w_mbytes_per_sec": 0 00:15:00.539 }, 
00:15:00.539 "claimed": false, 00:15:00.539 "zoned": false, 00:15:00.539 "supported_io_types": { 00:15:00.539 "read": true, 00:15:00.539 "write": true, 00:15:00.539 "unmap": true, 00:15:00.539 "flush": true, 00:15:00.539 "reset": true, 00:15:00.539 "nvme_admin": false, 00:15:00.539 "nvme_io": false, 00:15:00.539 "nvme_io_md": false, 00:15:00.539 "write_zeroes": true, 00:15:00.539 "zcopy": false, 00:15:00.539 "get_zone_info": false, 00:15:00.539 "zone_management": false, 00:15:00.539 "zone_append": false, 00:15:00.539 "compare": false, 00:15:00.539 "compare_and_write": false, 00:15:00.539 "abort": false, 00:15:00.539 "seek_hole": false, 00:15:00.539 "seek_data": false, 00:15:00.539 "copy": false, 00:15:00.539 "nvme_iov_md": false 00:15:00.539 }, 00:15:00.539 "memory_domains": [ 00:15:00.539 { 00:15:00.539 "dma_device_id": "system", 00:15:00.539 "dma_device_type": 1 00:15:00.539 }, 00:15:00.539 { 00:15:00.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.539 "dma_device_type": 2 00:15:00.539 }, 00:15:00.539 { 00:15:00.539 "dma_device_id": "system", 00:15:00.539 "dma_device_type": 1 00:15:00.539 }, 00:15:00.539 { 00:15:00.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.539 "dma_device_type": 2 00:15:00.539 } 00:15:00.539 ], 00:15:00.539 "driver_specific": { 00:15:00.539 "raid": { 00:15:00.539 "uuid": "b8d60711-c021-4e06-877c-0199b34ce832", 00:15:00.539 "strip_size_kb": 64, 00:15:00.539 "state": "online", 00:15:00.539 "raid_level": "concat", 00:15:00.539 "superblock": true, 00:15:00.539 "num_base_bdevs": 2, 00:15:00.539 "num_base_bdevs_discovered": 2, 00:15:00.539 "num_base_bdevs_operational": 2, 00:15:00.539 "base_bdevs_list": [ 00:15:00.539 { 00:15:00.539 "name": "pt1", 00:15:00.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.539 "is_configured": true, 00:15:00.539 "data_offset": 2048, 00:15:00.539 "data_size": 63488 00:15:00.539 }, 00:15:00.539 { 00:15:00.539 "name": "pt2", 00:15:00.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.539 "is_configured": true, 00:15:00.539 "data_offset": 2048, 00:15:00.539 "data_size": 63488 00:15:00.539 } 00:15:00.539 ] 00:15:00.539 } 00:15:00.539 } 00:15:00.539 }' 00:15:00.540 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.540 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:00.540 pt2' 00:15:00.540 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:00.540 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:00.540 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:00.799 "name": "pt1", 00:15:00.799 "aliases": [ 00:15:00.799 "00000000-0000-0000-0000-000000000001" 00:15:00.799 ], 00:15:00.799 "product_name": "passthru", 00:15:00.799 "block_size": 512, 00:15:00.799 "num_blocks": 65536, 00:15:00.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.799 "assigned_rate_limits": { 00:15:00.799 "rw_ios_per_sec": 0, 00:15:00.799 "rw_mbytes_per_sec": 0, 00:15:00.799 "r_mbytes_per_sec": 0, 00:15:00.799 "w_mbytes_per_sec": 0 00:15:00.799 }, 00:15:00.799 "claimed": true, 00:15:00.799 "claim_type": "exclusive_write", 00:15:00.799 "zoned": false, 00:15:00.799 
"supported_io_types": { 00:15:00.799 "read": true, 00:15:00.799 "write": true, 00:15:00.799 "unmap": true, 00:15:00.799 "flush": true, 00:15:00.799 "reset": true, 00:15:00.799 "nvme_admin": false, 00:15:00.799 "nvme_io": false, 00:15:00.799 "nvme_io_md": false, 00:15:00.799 "write_zeroes": true, 00:15:00.799 "zcopy": true, 00:15:00.799 "get_zone_info": false, 00:15:00.799 "zone_management": false, 00:15:00.799 "zone_append": false, 00:15:00.799 "compare": false, 00:15:00.799 "compare_and_write": false, 00:15:00.799 "abort": true, 00:15:00.799 "seek_hole": false, 00:15:00.799 "seek_data": false, 00:15:00.799 "copy": true, 00:15:00.799 "nvme_iov_md": false 00:15:00.799 }, 00:15:00.799 "memory_domains": [ 00:15:00.799 { 00:15:00.799 "dma_device_id": "system", 00:15:00.799 "dma_device_type": 1 00:15:00.799 }, 00:15:00.799 { 00:15:00.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.799 "dma_device_type": 2 00:15:00.799 } 00:15:00.799 ], 00:15:00.799 "driver_specific": { 00:15:00.799 "passthru": { 00:15:00.799 "name": "pt1", 00:15:00.799 "base_bdev_name": "malloc1" 00:15:00.799 } 00:15:00.799 } 00:15:00.799 }' 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:00.799 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:01.058 "name": "pt2", 00:15:01.058 "aliases": [ 00:15:01.058 "00000000-0000-0000-0000-000000000002" 00:15:01.058 ], 00:15:01.058 "product_name": "passthru", 00:15:01.058 "block_size": 512, 00:15:01.058 "num_blocks": 65536, 00:15:01.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.058 "assigned_rate_limits": { 00:15:01.058 "rw_ios_per_sec": 0, 00:15:01.058 "rw_mbytes_per_sec": 0, 00:15:01.058 "r_mbytes_per_sec": 0, 00:15:01.058 "w_mbytes_per_sec": 0 00:15:01.058 }, 00:15:01.058 "claimed": true, 00:15:01.058 "claim_type": "exclusive_write", 00:15:01.058 "zoned": false, 00:15:01.058 "supported_io_types": { 00:15:01.058 "read": true, 00:15:01.058 "write": true, 00:15:01.058 "unmap": true, 00:15:01.058 "flush": true, 00:15:01.058 
"reset": true, 00:15:01.058 "nvme_admin": false, 00:15:01.058 "nvme_io": false, 00:15:01.058 "nvme_io_md": false, 00:15:01.058 "write_zeroes": true, 00:15:01.058 "zcopy": true, 00:15:01.058 "get_zone_info": false, 00:15:01.058 "zone_management": false, 00:15:01.058 "zone_append": false, 00:15:01.058 "compare": false, 00:15:01.058 "compare_and_write": false, 00:15:01.058 "abort": true, 00:15:01.058 "seek_hole": false, 00:15:01.058 "seek_data": false, 00:15:01.058 "copy": true, 00:15:01.058 "nvme_iov_md": false 00:15:01.058 }, 00:15:01.058 "memory_domains": [ 00:15:01.058 { 00:15:01.058 "dma_device_id": "system", 00:15:01.058 "dma_device_type": 1 00:15:01.058 }, 00:15:01.058 { 00:15:01.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.058 "dma_device_type": 2 00:15:01.058 } 00:15:01.058 ], 00:15:01.058 "driver_specific": { 00:15:01.058 "passthru": { 00:15:01.058 "name": "pt2", 00:15:01.058 "base_bdev_name": "malloc2" 00:15:01.058 } 00:15:01.058 } 00:15:01.058 }' 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:01.058 23:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:15:01.625 [2024-07-24 23:58:57.191340] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.625 23:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=b8d60711-c021-4e06-877c-0199b34ce832 00:15:01.625 23:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z b8d60711-c021-4e06-877c-0199b34ce832 ']' 00:15:01.625 23:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:01.625 [2024-07-24 23:58:57.415084] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.625 [2024-07-24 23:58:57.415133] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.625 [2024-07-24 23:58:57.415249] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.625 [2024-07-24 23:58:57.415311] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:15:01.625 [2024-07-24 23:58:57.415335] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:15:01.625 23:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.625 23:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:15:01.884 23:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:15:01.884 23:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:15:01.884 23:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:01.884 23:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:02.142 23:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:02.142 23:58:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:02.400 23:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:02.400 23:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:02.658 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:02.658 [2024-07-24 23:58:58.515453] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:02.658 [2024-07-24 23:58:58.517424] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:02.658 [2024-07-24 23:58:58.517518] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:02.658 [2024-07-24 23:58:58.517583] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:02.658 [2024-07-24 23:58:58.517606] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.658 [2024-07-24 23:58:58.517621] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state configuring 00:15:02.658 request: 00:15:02.658 { 00:15:02.658 "name": "raid_bdev1", 00:15:02.658 "raid_level": "concat", 00:15:02.658 "base_bdevs": [ 00:15:02.658 "malloc1", 00:15:02.658 "malloc2" 00:15:02.658 ], 00:15:02.658 "strip_size_kb": 64, 00:15:02.658 "superblock": false, 00:15:02.658 "method": "bdev_raid_create", 00:15:02.658 "req_id": 1 00:15:02.658 } 00:15:02.658 Got JSON-RPC error response 00:15:02.658 response: 00:15:02.658 { 00:15:02.658 "code": -17, 00:15:02.658 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:02.658 } 00:15:02.916 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:02.916 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:02.916 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:02.916 23:58:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:02.916 23:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:15:02.916 23:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.174 23:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:15:03.174 23:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:15:03.174 23:58:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:03.174 [2024-07-24 23:58:58.987562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:03.174 [2024-07-24 23:58:58.987967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.174 [2024-07-24 23:58:58.988011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:15:03.174 [2024-07-24 23:58:58.988030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.174 [2024-07-24 23:58:58.990702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.174 [2024-07-24 23:58:58.990781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:03.174 [2024-07-24 23:58:58.990915] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:03.174 [2024-07-24 23:58:58.991000] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:03.174 pt1 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.174 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.432 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.432 "name": "raid_bdev1", 00:15:03.432 "uuid": "b8d60711-c021-4e06-877c-0199b34ce832", 00:15:03.432 "strip_size_kb": 64, 00:15:03.432 "state": "configuring", 00:15:03.432 "raid_level": "concat", 00:15:03.432 "superblock": true, 00:15:03.432 "num_base_bdevs": 2, 00:15:03.432 "num_base_bdevs_discovered": 1, 00:15:03.432 "num_base_bdevs_operational": 2, 00:15:03.432 "base_bdevs_list": [ 00:15:03.432 { 00:15:03.432 "name": "pt1", 00:15:03.432 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.432 "is_configured": true, 00:15:03.432 "data_offset": 2048, 00:15:03.432 "data_size": 63488 00:15:03.432 }, 00:15:03.432 { 00:15:03.432 "name": null, 00:15:03.432 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.432 "is_configured": false, 00:15:03.432 "data_offset": 2048, 00:15:03.432 "data_size": 63488 00:15:03.432 } 00:15:03.432 ] 00:15:03.432 }' 00:15:03.432 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.432 23:58:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.691 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:15:03.691 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:15:03.691 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:03.691 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.950 [2024-07-24 23:58:59.731727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.950 [2024-07-24 23:58:59.732047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.950 [2024-07-24 23:58:59.732088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:15:03.950 [2024-07-24 23:58:59.732119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.950 [2024-07-24 
23:58:59.732664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.950 [2024-07-24 23:58:59.732693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.950 [2024-07-24 23:58:59.732783] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.950 [2024-07-24 23:58:59.732830] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.950 [2024-07-24 23:58:59.732975] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:15:03.950 [2024-07-24 23:58:59.732996] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:03.950 [2024-07-24 23:58:59.733118] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:15:03.950 [2024-07-24 23:58:59.733462] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:15:03.950 [2024-07-24 23:58:59.733485] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:15:03.950 [2024-07-24 23:58:59.733656] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.950 pt2 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.950 23:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.209 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.209 "name": "raid_bdev1", 00:15:04.209 "uuid": "b8d60711-c021-4e06-877c-0199b34ce832", 00:15:04.209 "strip_size_kb": 64, 00:15:04.209 "state": "online", 00:15:04.209 "raid_level": "concat", 00:15:04.209 "superblock": true, 00:15:04.209 "num_base_bdevs": 2, 00:15:04.209 "num_base_bdevs_discovered": 2, 00:15:04.209 "num_base_bdevs_operational": 2, 00:15:04.209 "base_bdevs_list": [ 00:15:04.209 { 00:15:04.209 "name": "pt1", 00:15:04.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.209 "is_configured": true, 00:15:04.209 "data_offset": 2048, 00:15:04.209 
"data_size": 63488 00:15:04.209 }, 00:15:04.209 { 00:15:04.209 "name": "pt2", 00:15:04.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.209 "is_configured": true, 00:15:04.209 "data_offset": 2048, 00:15:04.209 "data_size": 63488 00:15:04.209 } 00:15:04.209 ] 00:15:04.209 }' 00:15:04.209 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.209 23:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:04.775 [2024-07-24 23:59:00.536254] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:04.775 "name": "raid_bdev1", 00:15:04.775 "aliases": [ 00:15:04.775 "b8d60711-c021-4e06-877c-0199b34ce832" 00:15:04.775 ], 00:15:04.775 "product_name": "Raid Volume", 00:15:04.775 "block_size": 512, 00:15:04.775 "num_blocks": 126976, 00:15:04.775 "uuid": "b8d60711-c021-4e06-877c-0199b34ce832", 00:15:04.775 "assigned_rate_limits": { 00:15:04.775 "rw_ios_per_sec": 0, 00:15:04.775 "rw_mbytes_per_sec": 0, 00:15:04.775 "r_mbytes_per_sec": 0, 00:15:04.775 "w_mbytes_per_sec": 0 00:15:04.775 }, 00:15:04.775 "claimed": false, 00:15:04.775 "zoned": false, 00:15:04.775 "supported_io_types": { 00:15:04.775 "read": true, 00:15:04.775 "write": true, 00:15:04.775 "unmap": true, 00:15:04.775 "flush": true, 00:15:04.775 "reset": true, 00:15:04.775 "nvme_admin": false, 00:15:04.775 "nvme_io": false, 00:15:04.775 "nvme_io_md": false, 00:15:04.775 "write_zeroes": true, 00:15:04.775 "zcopy": false, 00:15:04.775 "get_zone_info": false, 00:15:04.775 "zone_management": false, 00:15:04.775 "zone_append": false, 00:15:04.775 "compare": false, 00:15:04.775 "compare_and_write": false, 00:15:04.775 "abort": false, 00:15:04.775 "seek_hole": false, 00:15:04.775 "seek_data": false, 00:15:04.775 "copy": false, 00:15:04.775 "nvme_iov_md": false 00:15:04.775 }, 00:15:04.775 "memory_domains": [ 00:15:04.775 { 00:15:04.775 "dma_device_id": "system", 00:15:04.775 "dma_device_type": 1 00:15:04.775 }, 00:15:04.775 { 00:15:04.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.775 "dma_device_type": 2 00:15:04.775 }, 00:15:04.775 { 00:15:04.775 "dma_device_id": "system", 00:15:04.775 "dma_device_type": 1 00:15:04.775 }, 00:15:04.775 { 00:15:04.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.775 "dma_device_type": 2 00:15:04.775 } 00:15:04.775 ], 00:15:04.775 "driver_specific": { 00:15:04.775 "raid": { 00:15:04.775 "uuid": "b8d60711-c021-4e06-877c-0199b34ce832", 00:15:04.775 "strip_size_kb": 64, 00:15:04.775 "state": 
"online", 00:15:04.775 "raid_level": "concat", 00:15:04.775 "superblock": true, 00:15:04.775 "num_base_bdevs": 2, 00:15:04.775 "num_base_bdevs_discovered": 2, 00:15:04.775 "num_base_bdevs_operational": 2, 00:15:04.775 "base_bdevs_list": [ 00:15:04.775 { 00:15:04.775 "name": "pt1", 00:15:04.775 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.775 "is_configured": true, 00:15:04.775 "data_offset": 2048, 00:15:04.775 "data_size": 63488 00:15:04.775 }, 00:15:04.775 { 00:15:04.775 "name": "pt2", 00:15:04.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.775 "is_configured": true, 00:15:04.775 "data_offset": 2048, 00:15:04.775 "data_size": 63488 00:15:04.775 } 00:15:04.775 ] 00:15:04.775 } 00:15:04.775 } 00:15:04.775 }' 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:04.775 pt2' 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:04.775 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:05.042 "name": "pt1", 00:15:05.042 "aliases": [ 00:15:05.042 "00000000-0000-0000-0000-000000000001" 00:15:05.042 ], 00:15:05.042 "product_name": "passthru", 00:15:05.042 "block_size": 512, 00:15:05.042 "num_blocks": 65536, 00:15:05.042 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.042 "assigned_rate_limits": { 00:15:05.042 "rw_ios_per_sec": 0, 00:15:05.042 "rw_mbytes_per_sec": 0, 00:15:05.042 "r_mbytes_per_sec": 0, 00:15:05.042 "w_mbytes_per_sec": 0 00:15:05.042 }, 00:15:05.042 "claimed": true, 00:15:05.042 "claim_type": "exclusive_write", 00:15:05.042 "zoned": false, 00:15:05.042 "supported_io_types": { 00:15:05.042 "read": true, 00:15:05.042 "write": true, 00:15:05.042 "unmap": true, 00:15:05.042 "flush": true, 00:15:05.042 "reset": true, 00:15:05.042 "nvme_admin": false, 00:15:05.042 "nvme_io": false, 00:15:05.042 "nvme_io_md": false, 00:15:05.042 "write_zeroes": true, 00:15:05.042 "zcopy": true, 00:15:05.042 "get_zone_info": false, 00:15:05.042 "zone_management": false, 00:15:05.042 "zone_append": false, 00:15:05.042 "compare": false, 00:15:05.042 "compare_and_write": false, 00:15:05.042 "abort": true, 00:15:05.042 "seek_hole": false, 00:15:05.042 "seek_data": false, 00:15:05.042 "copy": true, 00:15:05.042 "nvme_iov_md": false 00:15:05.042 }, 00:15:05.042 "memory_domains": [ 00:15:05.042 { 00:15:05.042 "dma_device_id": "system", 00:15:05.042 "dma_device_type": 1 00:15:05.042 }, 00:15:05.042 { 00:15:05.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.042 "dma_device_type": 2 00:15:05.042 } 00:15:05.042 ], 00:15:05.042 "driver_specific": { 00:15:05.042 "passthru": { 00:15:05.042 "name": "pt1", 00:15:05.042 "base_bdev_name": "malloc1" 00:15:05.042 } 00:15:05.042 } 00:15:05.042 }' 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:05.042 23:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:05.316 "name": "pt2", 00:15:05.316 "aliases": [ 00:15:05.316 "00000000-0000-0000-0000-000000000002" 00:15:05.316 ], 00:15:05.316 "product_name": "passthru", 00:15:05.316 "block_size": 512, 00:15:05.316 "num_blocks": 65536, 00:15:05.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.316 "assigned_rate_limits": { 00:15:05.316 "rw_ios_per_sec": 0, 00:15:05.316 "rw_mbytes_per_sec": 0, 00:15:05.316 "r_mbytes_per_sec": 0, 00:15:05.316 "w_mbytes_per_sec": 0 00:15:05.316 }, 00:15:05.316 "claimed": true, 00:15:05.316 "claim_type": "exclusive_write", 00:15:05.316 "zoned": false, 00:15:05.316 "supported_io_types": { 00:15:05.316 "read": true, 00:15:05.316 "write": true, 00:15:05.316 "unmap": true, 00:15:05.316 "flush": true, 00:15:05.316 "reset": true, 00:15:05.316 "nvme_admin": false, 00:15:05.316 "nvme_io": false, 00:15:05.316 "nvme_io_md": false, 00:15:05.316 "write_zeroes": true, 00:15:05.316 "zcopy": true, 00:15:05.316 "get_zone_info": false, 00:15:05.316 "zone_management": false, 00:15:05.316 "zone_append": false, 00:15:05.316 "compare": false, 00:15:05.316 "compare_and_write": false, 00:15:05.316 "abort": true, 00:15:05.316 "seek_hole": false, 00:15:05.316 "seek_data": false, 00:15:05.316 "copy": true, 00:15:05.316 "nvme_iov_md": false 00:15:05.316 }, 00:15:05.316 "memory_domains": [ 00:15:05.316 { 00:15:05.316 "dma_device_id": "system", 00:15:05.316 "dma_device_type": 1 00:15:05.316 }, 00:15:05.316 { 00:15:05.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.316 "dma_device_type": 2 00:15:05.316 } 00:15:05.316 ], 00:15:05.316 "driver_specific": { 00:15:05.316 "passthru": { 00:15:05.316 "name": "pt2", 00:15:05.316 "base_bdev_name": "malloc2" 00:15:05.316 } 00:15:05.316 } 00:15:05.316 }' 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:15:05.316 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:05.574 [2024-07-24 23:59:01.424472] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.831 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' b8d60711-c021-4e06-877c-0199b34ce832 '!=' b8d60711-c021-4e06-877c-0199b34ce832 ']' 00:15:05.831 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:15:05.831 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:05.831 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:05.831 23:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 77864 00:15:05.831 23:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77864 ']' 00:15:05.831 23:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77864 00:15:05.832 23:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:05.832 23:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:05.832 23:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77864 00:15:05.832 killing process with pid 77864 00:15:05.832 23:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:05.832 23:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:05.832 23:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77864' 00:15:05.832 23:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77864 00:15:05.832 23:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 77864 00:15:05.832 [2024-07-24 23:59:01.478022] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.832 [2024-07-24 23:59:01.478172] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.832 [2024-07-24 23:59:01.478275] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.832 [2024-07-24 23:59:01.478305] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:15:05.832 [2024-07-24 23:59:01.675666] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.206 23:59:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@580 -- # return 0 00:15:07.206 00:15:07.206 real 0m9.486s 00:15:07.206 user 0m15.506s 00:15:07.206 sys 0m1.501s 00:15:07.206 23:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.206 23:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.206 ************************************ 00:15:07.206 END TEST raid_superblock_test 00:15:07.206 ************************************ 00:15:07.206 23:59:02 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:15:07.206 23:59:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:07.206 23:59:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.206 23:59:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.206 ************************************ 00:15:07.206 START TEST raid_read_error_test 00:15:07.206 ************************************ 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.DVY4gBN0a5 00:15:07.206 23:59:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=78201 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 78201 /var/tmp/spdk-raid.sock 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78201 ']' 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.206 23:59:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.206 [2024-07-24 23:59:02.877758] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:15:07.206 [2024-07-24 23:59:02.878000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78201 ] 00:15:07.206 [2024-07-24 23:59:03.050638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.464 [2024-07-24 23:59:03.278232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.722 [2024-07-24 23:59:03.442930] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.980 23:59:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:07.980 23:59:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:07.980 23:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:07.980 23:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:08.238 BaseBdev1_malloc 00:15:08.238 23:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:08.497 true 00:15:08.497 23:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:08.756 [2024-07-24 23:59:04.529558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:08.756 [2024-07-24 23:59:04.529643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.756 [2024-07-24 23:59:04.529674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:15:08.756 [2024-07-24 23:59:04.529690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.756 [2024-07-24 23:59:04.532696] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.756 [2024-07-24 23:59:04.532758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.756 BaseBdev1 00:15:08.756 23:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:08.756 23:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:09.014 BaseBdev2_malloc 00:15:09.014 23:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:09.273 true 00:15:09.273 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:09.531 [2024-07-24 23:59:05.195260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:09.531 [2024-07-24 23:59:05.195352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.531 [2024-07-24 23:59:05.195382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:15:09.531 [2024-07-24 23:59:05.195400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.531 [2024-07-24 23:59:05.198057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.531 [2024-07-24 23:59:05.198121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:09.531 BaseBdev2 00:15:09.531 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:09.790 [2024-07-24 23:59:05.467460] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.790 [2024-07-24 23:59:05.469574] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.790 [2024-07-24 23:59:05.469880] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008480 00:15:09.790 [2024-07-24 23:59:05.469904] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:09.790 [2024-07-24 23:59:05.470041] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:15:09.790 [2024-07-24 23:59:05.470476] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008480 00:15:09.790 [2024-07-24 23:59:05.470495] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008480 00:15:09.790 [2024-07-24 23:59:05.470684] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.790 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.049 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.049 "name": "raid_bdev1", 00:15:10.049 "uuid": "925e2c1c-9c52-4132-b7f6-76ee5d04dd47", 00:15:10.049 "strip_size_kb": 64, 00:15:10.049 "state": "online", 00:15:10.049 "raid_level": "concat", 00:15:10.049 "superblock": true, 00:15:10.049 "num_base_bdevs": 2, 00:15:10.049 "num_base_bdevs_discovered": 2, 00:15:10.049 "num_base_bdevs_operational": 2, 00:15:10.049 "base_bdevs_list": [ 00:15:10.049 { 00:15:10.049 "name": "BaseBdev1", 00:15:10.049 "uuid": "446d48f9-6477-5f19-9c9a-046809d67fb8", 00:15:10.049 "is_configured": true, 00:15:10.049 "data_offset": 2048, 00:15:10.049 "data_size": 63488 00:15:10.049 }, 00:15:10.049 { 00:15:10.049 "name": "BaseBdev2", 00:15:10.049 "uuid": "1fad6ec8-8287-524b-8ed4-349a49f48e12", 00:15:10.049 "is_configured": true, 00:15:10.049 "data_offset": 2048, 00:15:10.049 "data_size": 63488 00:15:10.049 } 00:15:10.049 ] 00:15:10.049 }' 00:15:10.049 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.049 23:59:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.308 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:10.308 23:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:10.308 [2024-07-24 23:59:06.084618] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:15:11.244 23:59:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.510 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.813 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.813 "name": "raid_bdev1", 00:15:11.813 "uuid": "925e2c1c-9c52-4132-b7f6-76ee5d04dd47", 00:15:11.813 "strip_size_kb": 64, 00:15:11.813 "state": "online", 00:15:11.813 "raid_level": "concat", 00:15:11.813 "superblock": true, 00:15:11.813 "num_base_bdevs": 2, 00:15:11.813 "num_base_bdevs_discovered": 2, 00:15:11.813 "num_base_bdevs_operational": 2, 00:15:11.813 "base_bdevs_list": [ 00:15:11.813 { 00:15:11.813 "name": "BaseBdev1", 00:15:11.813 "uuid": "446d48f9-6477-5f19-9c9a-046809d67fb8", 00:15:11.813 "is_configured": true, 00:15:11.814 "data_offset": 2048, 00:15:11.814 "data_size": 63488 00:15:11.814 }, 00:15:11.814 { 00:15:11.814 "name": "BaseBdev2", 00:15:11.814 "uuid": "1fad6ec8-8287-524b-8ed4-349a49f48e12", 00:15:11.814 "is_configured": true, 00:15:11.814 "data_offset": 2048, 00:15:11.814 "data_size": 63488 00:15:11.814 } 00:15:11.814 ] 00:15:11.814 }' 00:15:11.814 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.814 23:59:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.073 23:59:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:12.332 [2024-07-24 23:59:08.046874] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.332 [2024-07-24 23:59:08.047232] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.332 [2024-07-24 23:59:08.050284] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.332 [2024-07-24 23:59:08.050362] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.332 [2024-07-24 23:59:08.050404] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.332 [2024-07-24 23:59:08.050422] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state offline 00:15:12.332 0 00:15:12.332 23:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 78201 00:15:12.332 23:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78201 ']' 00:15:12.332 23:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78201 00:15:12.332 23:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:15:12.332 23:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.332 
23:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78201 00:15:12.332 killing process with pid 78201 00:15:12.332 23:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.332 23:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.332 23:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78201' 00:15:12.332 23:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78201 00:15:12.332 [2024-07-24 23:59:08.100555] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.332 23:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78201 00:15:12.590 [2024-07-24 23:59:08.201744] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.527 23:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.DVY4gBN0a5 00:15:13.527 23:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:13.527 23:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:13.527 23:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.51 00:15:13.527 23:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:15:13.527 23:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:13.527 23:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:13.527 23:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.51 != \0\.\0\0 ]] 00:15:13.527 00:15:13.527 real 0m6.499s 00:15:13.527 user 0m9.355s 00:15:13.527 sys 0m0.836s 00:15:13.527 23:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:13.527 ************************************ 00:15:13.527 END TEST raid_read_error_test 00:15:13.527 ************************************ 00:15:13.527 23:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.527 23:59:09 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:15:13.527 23:59:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:13.527 23:59:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:13.527 23:59:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.527 ************************************ 00:15:13.527 START TEST raid_write_error_test 00:15:13.527 ************************************ 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 
00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:13.527 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:13.528 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:13.528 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.CBPVCFxK5E 00:15:13.528 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=78372 00:15:13.528 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 78372 /var/tmp/spdk-raid.sock 00:15:13.528 23:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78372 ']' 00:15:13.528 23:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:13.528 23:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:13.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:13.528 23:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.528 23:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:13.528 23:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.528 23:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.786 [2024-07-24 23:59:09.431636] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:15:13.786 [2024-07-24 23:59:09.431846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78372 ] 00:15:13.786 [2024-07-24 23:59:09.601733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.044 [2024-07-24 23:59:09.774728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.302 [2024-07-24 23:59:09.946498] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.560 23:59:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.560 23:59:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:14.560 23:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:14.560 23:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.819 BaseBdev1_malloc 00:15:14.819 23:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:15.078 true 00:15:15.078 23:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:15.336 [2024-07-24 23:59:11.041212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:15.336 [2024-07-24 23:59:11.041301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.336 [2024-07-24 23:59:11.041335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:15:15.336 [2024-07-24 23:59:11.041353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.336 [2024-07-24 23:59:11.043758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.336 [2024-07-24 23:59:11.043830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:15.336 BaseBdev1 00:15:15.336 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:15.336 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:15.594 BaseBdev2_malloc 00:15:15.594 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:15.851 true 00:15:15.851 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:15.851 [2024-07-24 23:59:11.714052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:15.851 [2024-07-24 23:59:11.714144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.851 [2024-07-24 23:59:11.714174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:15:15.851 [2024-07-24 
23:59:11.714194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.851 [2024-07-24 23:59:11.716753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.851 [2024-07-24 23:59:11.716830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:15.851 BaseBdev2 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:16.110 [2024-07-24 23:59:11.926143] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.110 [2024-07-24 23:59:11.928318] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.110 [2024-07-24 23:59:11.928616] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008480 00:15:16.110 [2024-07-24 23:59:11.928640] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:16.110 [2024-07-24 23:59:11.928769] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:15:16.110 [2024-07-24 23:59:11.929252] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008480 00:15:16.110 [2024-07-24 23:59:11.929281] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008480 00:15:16.110 [2024-07-24 23:59:11.929504] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.110 23:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.369 23:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.369 "name": "raid_bdev1", 00:15:16.369 "uuid": "a44ad001-6652-4799-af60-59ba60cd5870", 00:15:16.369 "strip_size_kb": 64, 00:15:16.369 "state": "online", 00:15:16.369 "raid_level": "concat", 00:15:16.369 "superblock": true, 00:15:16.369 "num_base_bdevs": 2, 00:15:16.369 "num_base_bdevs_discovered": 2, 00:15:16.369 "num_base_bdevs_operational": 2, 00:15:16.369 "base_bdevs_list": [ 00:15:16.369 { 
00:15:16.369 "name": "BaseBdev1", 00:15:16.369 "uuid": "d0dc5cad-805b-510c-81f7-6ceaa11e0c81", 00:15:16.369 "is_configured": true, 00:15:16.369 "data_offset": 2048, 00:15:16.369 "data_size": 63488 00:15:16.369 }, 00:15:16.369 { 00:15:16.369 "name": "BaseBdev2", 00:15:16.369 "uuid": "eb91840d-f2af-58a4-bad2-6f0ef0cc8508", 00:15:16.369 "is_configured": true, 00:15:16.369 "data_offset": 2048, 00:15:16.369 "data_size": 63488 00:15:16.369 } 00:15:16.369 ] 00:15:16.369 }' 00:15:16.369 23:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.369 23:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.628 23:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:16.628 23:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:16.887 [2024-07-24 23:59:12.579407] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:15:17.822 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.081 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.339 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.339 "name": "raid_bdev1", 00:15:18.339 "uuid": "a44ad001-6652-4799-af60-59ba60cd5870", 00:15:18.339 "strip_size_kb": 64, 00:15:18.339 "state": "online", 00:15:18.339 "raid_level": "concat", 00:15:18.339 "superblock": true, 00:15:18.339 "num_base_bdevs": 2, 00:15:18.339 "num_base_bdevs_discovered": 2, 00:15:18.339 "num_base_bdevs_operational": 2, 00:15:18.339 "base_bdevs_list": [ 00:15:18.339 { 
00:15:18.339 "name": "BaseBdev1", 00:15:18.339 "uuid": "d0dc5cad-805b-510c-81f7-6ceaa11e0c81", 00:15:18.339 "is_configured": true, 00:15:18.339 "data_offset": 2048, 00:15:18.339 "data_size": 63488 00:15:18.339 }, 00:15:18.339 { 00:15:18.339 "name": "BaseBdev2", 00:15:18.339 "uuid": "eb91840d-f2af-58a4-bad2-6f0ef0cc8508", 00:15:18.339 "is_configured": true, 00:15:18.339 "data_offset": 2048, 00:15:18.339 "data_size": 63488 00:15:18.339 } 00:15:18.339 ] 00:15:18.339 }' 00:15:18.339 23:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.339 23:59:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.597 23:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:18.856 [2024-07-24 23:59:14.537210] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.856 [2024-07-24 23:59:14.537267] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.856 [2024-07-24 23:59:14.540331] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.856 [2024-07-24 23:59:14.540396] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.856 [2024-07-24 23:59:14.540436] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.856 [2024-07-24 23:59:14.540453] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state offline 00:15:18.856 0 00:15:18.856 23:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 78372 00:15:18.856 23:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78372 ']' 00:15:18.856 23:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78372 00:15:18.856 23:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:15:18.856 23:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.856 23:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78372 00:15:18.856 23:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:18.856 killing process with pid 78372 00:15:18.856 23:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:18.856 23:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78372' 00:15:18.856 23:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78372 00:15:18.856 [2024-07-24 23:59:14.590862] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.856 23:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78372 00:15:18.856 [2024-07-24 23:59:14.690125] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.234 23:59:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:20.234 23:59:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:20.234 23:59:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.CBPVCFxK5E 00:15:20.234 23:59:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.51 
00:15:20.234 23:59:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:15:20.234 23:59:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:20.234 23:59:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:20.234 23:59:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.51 != \0\.\0\0 ]] 00:15:20.234 00:15:20.234 real 0m6.436s 00:15:20.234 user 0m9.263s 00:15:20.234 sys 0m0.837s 00:15:20.234 23:59:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.234 ************************************ 00:15:20.234 END TEST raid_write_error_test 00:15:20.234 ************************************ 00:15:20.234 23:59:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.234 23:59:15 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:15:20.234 23:59:15 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:20.234 23:59:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:20.234 23:59:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.234 23:59:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.234 ************************************ 00:15:20.234 START TEST raid_state_function_test 00:15:20.234 ************************************ 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local 
superblock_create_arg 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=78540 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 78540' 00:15:20.234 Process raid pid: 78540 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 78540 /var/tmp/spdk-raid.sock 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78540 ']' 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.234 23:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.234 [2024-07-24 23:59:15.913195] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
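The daemon lifecycle around raid_pid 78540 above follows the usual autotest pattern; a condensed sketch, assuming $rootdir stands for /home/vagrant/spdk_repo/spdk and that waitforlisten/killprocess behave as in test/common/autotest_common.sh:

# Start a bare bdev service with raid debug logging on a private RPC socket:
"$rootdir"/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock  # poll until RPCs are accepted
# ... bdev_raid_create / verify_raid_bdev_state / bdev_raid_delete RPCs ...
killprocess "$raid_pid"  # kill -0 probe, then terminate and wait, as traced later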
00:15:20.234 [2024-07-24 23:59:15.913363] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.234 [2024-07-24 23:59:16.090619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.493 [2024-07-24 23:59:16.260825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.752 [2024-07-24 23:59:16.429944] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.013 23:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.013 23:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:21.013 23:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:21.280 [2024-07-24 23:59:17.026598] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.280 [2024-07-24 23:59:17.026672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.280 [2024-07-24 23:59:17.026686] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.280 [2024-07-24 23:59:17.026700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.280 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.539 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.539 "name": "Existed_Raid", 00:15:21.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.539 "strip_size_kb": 0, 00:15:21.539 "state": "configuring", 00:15:21.539 "raid_level": "raid1", 00:15:21.539 "superblock": false, 00:15:21.539 "num_base_bdevs": 2, 00:15:21.539 "num_base_bdevs_discovered": 0, 00:15:21.539 "num_base_bdevs_operational": 2, 00:15:21.539 "base_bdevs_list": [ 
00:15:21.539 { 00:15:21.539 "name": "BaseBdev1", 00:15:21.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.539 "is_configured": false, 00:15:21.539 "data_offset": 0, 00:15:21.539 "data_size": 0 00:15:21.539 }, 00:15:21.539 { 00:15:21.539 "name": "BaseBdev2", 00:15:21.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.539 "is_configured": false, 00:15:21.539 "data_offset": 0, 00:15:21.539 "data_size": 0 00:15:21.539 } 00:15:21.539 ] 00:15:21.539 }' 00:15:21.539 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.539 23:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.797 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:22.056 [2024-07-24 23:59:17.822690] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.056 [2024-07-24 23:59:17.822755] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:22.056 23:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:22.315 [2024-07-24 23:59:18.042828] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.315 [2024-07-24 23:59:18.042897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.315 [2024-07-24 23:59:18.042911] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.315 [2024-07-24 23:59:18.042926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.315 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:22.573 [2024-07-24 23:59:18.344270] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.573 BaseBdev1 00:15:22.573 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:22.573 23:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:22.573 23:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:22.573 23:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:22.573 23:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:22.573 23:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:22.573 23:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.832 23:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.091 [ 00:15:23.091 { 00:15:23.091 "name": "BaseBdev1", 00:15:23.091 "aliases": [ 00:15:23.091 "8173247e-81b0-4ef7-8d60-2430c82de6b2" 00:15:23.091 ], 00:15:23.091 "product_name": "Malloc disk", 00:15:23.091 "block_size": 512, 00:15:23.091 "num_blocks": 
65536, 00:15:23.091 "uuid": "8173247e-81b0-4ef7-8d60-2430c82de6b2", 00:15:23.091 "assigned_rate_limits": { 00:15:23.091 "rw_ios_per_sec": 0, 00:15:23.091 "rw_mbytes_per_sec": 0, 00:15:23.091 "r_mbytes_per_sec": 0, 00:15:23.091 "w_mbytes_per_sec": 0 00:15:23.091 }, 00:15:23.091 "claimed": true, 00:15:23.091 "claim_type": "exclusive_write", 00:15:23.091 "zoned": false, 00:15:23.091 "supported_io_types": { 00:15:23.091 "read": true, 00:15:23.091 "write": true, 00:15:23.091 "unmap": true, 00:15:23.091 "flush": true, 00:15:23.091 "reset": true, 00:15:23.091 "nvme_admin": false, 00:15:23.091 "nvme_io": false, 00:15:23.091 "nvme_io_md": false, 00:15:23.091 "write_zeroes": true, 00:15:23.091 "zcopy": true, 00:15:23.091 "get_zone_info": false, 00:15:23.091 "zone_management": false, 00:15:23.091 "zone_append": false, 00:15:23.091 "compare": false, 00:15:23.091 "compare_and_write": false, 00:15:23.091 "abort": true, 00:15:23.091 "seek_hole": false, 00:15:23.091 "seek_data": false, 00:15:23.091 "copy": true, 00:15:23.091 "nvme_iov_md": false 00:15:23.091 }, 00:15:23.091 "memory_domains": [ 00:15:23.091 { 00:15:23.091 "dma_device_id": "system", 00:15:23.091 "dma_device_type": 1 00:15:23.091 }, 00:15:23.091 { 00:15:23.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.091 "dma_device_type": 2 00:15:23.091 } 00:15:23.091 ], 00:15:23.091 "driver_specific": {} 00:15:23.091 } 00:15:23.091 ] 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.091 23:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.350 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.350 "name": "Existed_Raid", 00:15:23.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.350 "strip_size_kb": 0, 00:15:23.350 "state": "configuring", 00:15:23.350 "raid_level": "raid1", 00:15:23.350 "superblock": false, 00:15:23.350 "num_base_bdevs": 2, 00:15:23.350 "num_base_bdevs_discovered": 1, 00:15:23.350 "num_base_bdevs_operational": 2, 00:15:23.350 "base_bdevs_list": [ 00:15:23.350 { 00:15:23.350 "name": "BaseBdev1", 00:15:23.350 "uuid": 
"8173247e-81b0-4ef7-8d60-2430c82de6b2", 00:15:23.350 "is_configured": true, 00:15:23.350 "data_offset": 0, 00:15:23.350 "data_size": 65536 00:15:23.350 }, 00:15:23.350 { 00:15:23.350 "name": "BaseBdev2", 00:15:23.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.350 "is_configured": false, 00:15:23.350 "data_offset": 0, 00:15:23.350 "data_size": 0 00:15:23.350 } 00:15:23.350 ] 00:15:23.350 }' 00:15:23.350 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.350 23:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.609 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:23.867 [2024-07-24 23:59:19.552697] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.868 [2024-07-24 23:59:19.552771] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:15:23.868 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:24.126 [2024-07-24 23:59:19.764827] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.126 [2024-07-24 23:59:19.766860] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.126 [2024-07-24 23:59:19.766911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.126 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.126 "name": "Existed_Raid", 00:15:24.126 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:24.126 "strip_size_kb": 0, 00:15:24.126 "state": "configuring", 00:15:24.126 "raid_level": "raid1", 00:15:24.126 "superblock": false, 00:15:24.126 "num_base_bdevs": 2, 00:15:24.126 "num_base_bdevs_discovered": 1, 00:15:24.126 "num_base_bdevs_operational": 2, 00:15:24.126 "base_bdevs_list": [ 00:15:24.126 { 00:15:24.127 "name": "BaseBdev1", 00:15:24.127 "uuid": "8173247e-81b0-4ef7-8d60-2430c82de6b2", 00:15:24.127 "is_configured": true, 00:15:24.127 "data_offset": 0, 00:15:24.127 "data_size": 65536 00:15:24.127 }, 00:15:24.127 { 00:15:24.127 "name": "BaseBdev2", 00:15:24.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.127 "is_configured": false, 00:15:24.127 "data_offset": 0, 00:15:24.127 "data_size": 0 00:15:24.127 } 00:15:24.127 ] 00:15:24.127 }' 00:15:24.127 23:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.127 23:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.695 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.695 [2024-07-24 23:59:20.556923] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.695 [2024-07-24 23:59:20.557002] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:15:24.695 [2024-07-24 23:59:20.557015] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:24.695 [2024-07-24 23:59:20.557132] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:15:24.695 [2024-07-24 23:59:20.557578] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:15:24.695 [2024-07-24 23:59:20.557633] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:15:24.695 [2024-07-24 23:59:20.557982] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.695 BaseBdev2 00:15:24.954 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:24.954 23:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:24.954 23:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:24.954 23:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:24.954 23:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:24.954 23:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:24.954 23:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:24.954 23:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:25.213 [ 00:15:25.213 { 00:15:25.213 "name": "BaseBdev2", 00:15:25.213 "aliases": [ 00:15:25.213 "67a7345d-4348-4e33-8c5e-058512daa642" 00:15:25.213 ], 00:15:25.213 "product_name": "Malloc disk", 00:15:25.213 "block_size": 512, 00:15:25.213 "num_blocks": 65536, 00:15:25.213 "uuid": "67a7345d-4348-4e33-8c5e-058512daa642", 00:15:25.213 
"assigned_rate_limits": { 00:15:25.213 "rw_ios_per_sec": 0, 00:15:25.213 "rw_mbytes_per_sec": 0, 00:15:25.213 "r_mbytes_per_sec": 0, 00:15:25.213 "w_mbytes_per_sec": 0 00:15:25.213 }, 00:15:25.213 "claimed": true, 00:15:25.213 "claim_type": "exclusive_write", 00:15:25.213 "zoned": false, 00:15:25.213 "supported_io_types": { 00:15:25.213 "read": true, 00:15:25.213 "write": true, 00:15:25.213 "unmap": true, 00:15:25.213 "flush": true, 00:15:25.213 "reset": true, 00:15:25.213 "nvme_admin": false, 00:15:25.213 "nvme_io": false, 00:15:25.213 "nvme_io_md": false, 00:15:25.213 "write_zeroes": true, 00:15:25.213 "zcopy": true, 00:15:25.213 "get_zone_info": false, 00:15:25.213 "zone_management": false, 00:15:25.213 "zone_append": false, 00:15:25.213 "compare": false, 00:15:25.213 "compare_and_write": false, 00:15:25.213 "abort": true, 00:15:25.213 "seek_hole": false, 00:15:25.213 "seek_data": false, 00:15:25.213 "copy": true, 00:15:25.213 "nvme_iov_md": false 00:15:25.213 }, 00:15:25.213 "memory_domains": [ 00:15:25.213 { 00:15:25.213 "dma_device_id": "system", 00:15:25.213 "dma_device_type": 1 00:15:25.213 }, 00:15:25.213 { 00:15:25.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.213 "dma_device_type": 2 00:15:25.213 } 00:15:25.213 ], 00:15:25.213 "driver_specific": {} 00:15:25.213 } 00:15:25.213 ] 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.213 23:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.472 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.472 "name": "Existed_Raid", 00:15:25.472 "uuid": "73bcd855-c8eb-4de5-8043-27ee6171a1a9", 00:15:25.472 "strip_size_kb": 0, 00:15:25.472 "state": "online", 00:15:25.472 "raid_level": "raid1", 00:15:25.472 "superblock": false, 00:15:25.472 "num_base_bdevs": 2, 00:15:25.472 "num_base_bdevs_discovered": 2, 00:15:25.472 "num_base_bdevs_operational": 
2, 00:15:25.472 "base_bdevs_list": [ 00:15:25.472 { 00:15:25.472 "name": "BaseBdev1", 00:15:25.472 "uuid": "8173247e-81b0-4ef7-8d60-2430c82de6b2", 00:15:25.472 "is_configured": true, 00:15:25.472 "data_offset": 0, 00:15:25.472 "data_size": 65536 00:15:25.472 }, 00:15:25.472 { 00:15:25.472 "name": "BaseBdev2", 00:15:25.472 "uuid": "67a7345d-4348-4e33-8c5e-058512daa642", 00:15:25.472 "is_configured": true, 00:15:25.472 "data_offset": 0, 00:15:25.472 "data_size": 65536 00:15:25.472 } 00:15:25.472 ] 00:15:25.472 }' 00:15:25.472 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.472 23:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.730 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:25.730 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:25.730 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:25.730 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:25.730 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:25.730 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:25.730 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:25.730 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:25.990 [2024-07-24 23:59:21.773565] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.990 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:25.990 "name": "Existed_Raid", 00:15:25.990 "aliases": [ 00:15:25.990 "73bcd855-c8eb-4de5-8043-27ee6171a1a9" 00:15:25.990 ], 00:15:25.990 "product_name": "Raid Volume", 00:15:25.990 "block_size": 512, 00:15:25.990 "num_blocks": 65536, 00:15:25.990 "uuid": "73bcd855-c8eb-4de5-8043-27ee6171a1a9", 00:15:25.990 "assigned_rate_limits": { 00:15:25.990 "rw_ios_per_sec": 0, 00:15:25.990 "rw_mbytes_per_sec": 0, 00:15:25.990 "r_mbytes_per_sec": 0, 00:15:25.990 "w_mbytes_per_sec": 0 00:15:25.990 }, 00:15:25.990 "claimed": false, 00:15:25.990 "zoned": false, 00:15:25.990 "supported_io_types": { 00:15:25.990 "read": true, 00:15:25.990 "write": true, 00:15:25.990 "unmap": false, 00:15:25.990 "flush": false, 00:15:25.990 "reset": true, 00:15:25.990 "nvme_admin": false, 00:15:25.990 "nvme_io": false, 00:15:25.990 "nvme_io_md": false, 00:15:25.990 "write_zeroes": true, 00:15:25.990 "zcopy": false, 00:15:25.990 "get_zone_info": false, 00:15:25.990 "zone_management": false, 00:15:25.990 "zone_append": false, 00:15:25.990 "compare": false, 00:15:25.990 "compare_and_write": false, 00:15:25.990 "abort": false, 00:15:25.990 "seek_hole": false, 00:15:25.990 "seek_data": false, 00:15:25.990 "copy": false, 00:15:25.990 "nvme_iov_md": false 00:15:25.990 }, 00:15:25.990 "memory_domains": [ 00:15:25.990 { 00:15:25.990 "dma_device_id": "system", 00:15:25.990 "dma_device_type": 1 00:15:25.990 }, 00:15:25.990 { 00:15:25.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.990 "dma_device_type": 2 00:15:25.990 }, 00:15:25.990 { 00:15:25.990 "dma_device_id": "system", 00:15:25.990 "dma_device_type": 1 00:15:25.990 }, 00:15:25.990 { 00:15:25.990 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.990 "dma_device_type": 2 00:15:25.990 } 00:15:25.990 ], 00:15:25.990 "driver_specific": { 00:15:25.990 "raid": { 00:15:25.990 "uuid": "73bcd855-c8eb-4de5-8043-27ee6171a1a9", 00:15:25.990 "strip_size_kb": 0, 00:15:25.990 "state": "online", 00:15:25.990 "raid_level": "raid1", 00:15:25.990 "superblock": false, 00:15:25.990 "num_base_bdevs": 2, 00:15:25.990 "num_base_bdevs_discovered": 2, 00:15:25.990 "num_base_bdevs_operational": 2, 00:15:25.990 "base_bdevs_list": [ 00:15:25.990 { 00:15:25.990 "name": "BaseBdev1", 00:15:25.990 "uuid": "8173247e-81b0-4ef7-8d60-2430c82de6b2", 00:15:25.990 "is_configured": true, 00:15:25.990 "data_offset": 0, 00:15:25.990 "data_size": 65536 00:15:25.990 }, 00:15:25.990 { 00:15:25.990 "name": "BaseBdev2", 00:15:25.990 "uuid": "67a7345d-4348-4e33-8c5e-058512daa642", 00:15:25.990 "is_configured": true, 00:15:25.990 "data_offset": 0, 00:15:25.990 "data_size": 65536 00:15:25.990 } 00:15:25.990 ] 00:15:25.990 } 00:15:25.990 } 00:15:25.990 }' 00:15:25.990 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.990 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:25.990 BaseBdev2' 00:15:25.990 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:25.990 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:25.990 23:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:26.249 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:26.249 "name": "BaseBdev1", 00:15:26.249 "aliases": [ 00:15:26.249 "8173247e-81b0-4ef7-8d60-2430c82de6b2" 00:15:26.249 ], 00:15:26.249 "product_name": "Malloc disk", 00:15:26.249 "block_size": 512, 00:15:26.249 "num_blocks": 65536, 00:15:26.249 "uuid": "8173247e-81b0-4ef7-8d60-2430c82de6b2", 00:15:26.249 "assigned_rate_limits": { 00:15:26.249 "rw_ios_per_sec": 0, 00:15:26.249 "rw_mbytes_per_sec": 0, 00:15:26.249 "r_mbytes_per_sec": 0, 00:15:26.249 "w_mbytes_per_sec": 0 00:15:26.249 }, 00:15:26.249 "claimed": true, 00:15:26.249 "claim_type": "exclusive_write", 00:15:26.249 "zoned": false, 00:15:26.249 "supported_io_types": { 00:15:26.249 "read": true, 00:15:26.249 "write": true, 00:15:26.249 "unmap": true, 00:15:26.249 "flush": true, 00:15:26.249 "reset": true, 00:15:26.249 "nvme_admin": false, 00:15:26.249 "nvme_io": false, 00:15:26.249 "nvme_io_md": false, 00:15:26.249 "write_zeroes": true, 00:15:26.249 "zcopy": true, 00:15:26.249 "get_zone_info": false, 00:15:26.249 "zone_management": false, 00:15:26.249 "zone_append": false, 00:15:26.249 "compare": false, 00:15:26.249 "compare_and_write": false, 00:15:26.249 "abort": true, 00:15:26.249 "seek_hole": false, 00:15:26.249 "seek_data": false, 00:15:26.249 "copy": true, 00:15:26.249 "nvme_iov_md": false 00:15:26.249 }, 00:15:26.249 "memory_domains": [ 00:15:26.249 { 00:15:26.249 "dma_device_id": "system", 00:15:26.249 "dma_device_type": 1 00:15:26.249 }, 00:15:26.249 { 00:15:26.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.249 "dma_device_type": 2 00:15:26.249 } 00:15:26.249 ], 00:15:26.249 "driver_specific": {} 00:15:26.249 }' 00:15:26.249 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:15:26.249 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.249 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:26.249 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.249 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.249 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:26.249 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.249 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.508 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.508 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.508 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.508 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.508 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:26.508 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:26.508 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:26.767 "name": "BaseBdev2", 00:15:26.767 "aliases": [ 00:15:26.767 "67a7345d-4348-4e33-8c5e-058512daa642" 00:15:26.767 ], 00:15:26.767 "product_name": "Malloc disk", 00:15:26.767 "block_size": 512, 00:15:26.767 "num_blocks": 65536, 00:15:26.767 "uuid": "67a7345d-4348-4e33-8c5e-058512daa642", 00:15:26.767 "assigned_rate_limits": { 00:15:26.767 "rw_ios_per_sec": 0, 00:15:26.767 "rw_mbytes_per_sec": 0, 00:15:26.767 "r_mbytes_per_sec": 0, 00:15:26.767 "w_mbytes_per_sec": 0 00:15:26.767 }, 00:15:26.767 "claimed": true, 00:15:26.767 "claim_type": "exclusive_write", 00:15:26.767 "zoned": false, 00:15:26.767 "supported_io_types": { 00:15:26.767 "read": true, 00:15:26.767 "write": true, 00:15:26.767 "unmap": true, 00:15:26.767 "flush": true, 00:15:26.767 "reset": true, 00:15:26.767 "nvme_admin": false, 00:15:26.767 "nvme_io": false, 00:15:26.767 "nvme_io_md": false, 00:15:26.767 "write_zeroes": true, 00:15:26.767 "zcopy": true, 00:15:26.767 "get_zone_info": false, 00:15:26.767 "zone_management": false, 00:15:26.767 "zone_append": false, 00:15:26.767 "compare": false, 00:15:26.767 "compare_and_write": false, 00:15:26.767 "abort": true, 00:15:26.767 "seek_hole": false, 00:15:26.767 "seek_data": false, 00:15:26.767 "copy": true, 00:15:26.767 "nvme_iov_md": false 00:15:26.767 }, 00:15:26.767 "memory_domains": [ 00:15:26.767 { 00:15:26.767 "dma_device_id": "system", 00:15:26.767 "dma_device_type": 1 00:15:26.767 }, 00:15:26.767 { 00:15:26.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.767 "dma_device_type": 2 00:15:26.767 } 00:15:26.767 ], 00:15:26.767 "driver_specific": {} 00:15:26.767 }' 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
[[ 512 == 512 ]] 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.767 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:27.026 [2024-07-24 23:59:22.701665] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.026 23:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.285 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.285 "name": "Existed_Raid", 00:15:27.285 "uuid": "73bcd855-c8eb-4de5-8043-27ee6171a1a9", 00:15:27.285 "strip_size_kb": 0, 00:15:27.285 "state": "online", 00:15:27.285 "raid_level": "raid1", 00:15:27.285 "superblock": false, 
00:15:27.285 "num_base_bdevs": 2, 00:15:27.285 "num_base_bdevs_discovered": 1, 00:15:27.285 "num_base_bdevs_operational": 1, 00:15:27.285 "base_bdevs_list": [ 00:15:27.285 { 00:15:27.285 "name": null, 00:15:27.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.285 "is_configured": false, 00:15:27.285 "data_offset": 0, 00:15:27.285 "data_size": 65536 00:15:27.285 }, 00:15:27.285 { 00:15:27.285 "name": "BaseBdev2", 00:15:27.285 "uuid": "67a7345d-4348-4e33-8c5e-058512daa642", 00:15:27.285 "is_configured": true, 00:15:27.285 "data_offset": 0, 00:15:27.285 "data_size": 65536 00:15:27.285 } 00:15:27.285 ] 00:15:27.285 }' 00:15:27.285 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.285 23:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.544 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:27.544 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:27.544 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.544 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:27.801 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:27.801 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.801 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:28.059 [2024-07-24 23:59:23.893468] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:28.059 [2024-07-24 23:59:23.893580] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.317 [2024-07-24 23:59:23.967587] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.317 [2024-07-24 23:59:23.967875] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.317 [2024-07-24 23:59:23.967911] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:15:28.317 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:28.317 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:28.317 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.317 23:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 78540 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78540 ']' 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # 
kill -0 78540 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78540 00:15:28.576 killing process with pid 78540 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78540' 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78540 00:15:28.576 23:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78540 00:15:28.576 [2024-07-24 23:59:24.273635] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.576 [2024-07-24 23:59:24.273785] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:29.538 23:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:29.538 00:15:29.538 real 0m9.515s 00:15:29.538 user 0m15.659s 00:15:29.538 sys 0m1.452s 00:15:29.538 23:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.538 23:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.538 ************************************ 00:15:29.538 END TEST raid_state_function_test 00:15:29.538 ************************************ 00:15:29.538 23:59:25 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:29.538 23:59:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:29.538 23:59:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:29.538 23:59:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:29.797 ************************************ 00:15:29.797 START TEST raid_state_function_test_sb 00:15:29.797 ************************************ 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:29.797 23:59:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:29.797 Process raid pid: 78882 00:15:29.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=78882 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 78882' 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 78882 /var/tmp/spdk-raid.sock 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78882 ']' 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.797 23:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.797 [2024-07-24 23:59:25.465674] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
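The verify_raid_bdev_state calls that recur throughout this log boil down to a jq comparison over bdev_raid_get_bdevs output. A simplified sketch (verify_state is a made-up name; the real helper in bdev_raid.sh also checks raid_level, strip_size and the base-bdev counters seen in the dumps above):

verify_state() {
    local name=$1 expected=$2 info
    info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r ".[] | select(.name == \"$name\")")
    [[ -n $info && $(jq -r .state <<< "$info") == "$expected" ]]
}
# With -s the array persists a superblock on each base bdev, but the state
# machine is unchanged: it stays "configuring" until all bases are attached.
verify_state Existed_Raid configuring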
00:15:29.797 [2024-07-24 23:59:25.466045] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.797 [2024-07-24 23:59:25.629881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.055 [2024-07-24 23:59:25.861006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.313 [2024-07-24 23:59:26.021694] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.572 23:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.572 23:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:30.572 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:30.830 [2024-07-24 23:59:26.614507] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.830 [2024-07-24 23:59:26.614589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.830 [2024-07-24 23:59:26.614605] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.830 [2024-07-24 23:59:26.614620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.830 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.088 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.088 "name": "Existed_Raid", 00:15:31.088 "uuid": "22b686b9-96e6-40cd-a532-bd5a63ed5795", 00:15:31.088 "strip_size_kb": 0, 00:15:31.088 "state": "configuring", 00:15:31.088 "raid_level": "raid1", 00:15:31.088 "superblock": true, 00:15:31.088 "num_base_bdevs": 2, 00:15:31.088 "num_base_bdevs_discovered": 0, 00:15:31.088 
"num_base_bdevs_operational": 2, 00:15:31.088 "base_bdevs_list": [ 00:15:31.088 { 00:15:31.088 "name": "BaseBdev1", 00:15:31.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.088 "is_configured": false, 00:15:31.088 "data_offset": 0, 00:15:31.088 "data_size": 0 00:15:31.088 }, 00:15:31.088 { 00:15:31.088 "name": "BaseBdev2", 00:15:31.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.088 "is_configured": false, 00:15:31.088 "data_offset": 0, 00:15:31.088 "data_size": 0 00:15:31.088 } 00:15:31.088 ] 00:15:31.088 }' 00:15:31.088 23:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.088 23:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.346 23:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:31.605 [2024-07-24 23:59:27.390585] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.605 [2024-07-24 23:59:27.390631] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:31.605 23:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:31.863 [2024-07-24 23:59:27.662672] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.863 [2024-07-24 23:59:27.662752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.863 [2024-07-24 23:59:27.662767] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.863 [2024-07-24 23:59:27.662782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.863 23:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:32.121 [2024-07-24 23:59:27.906018] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.121 BaseBdev1 00:15:32.121 23:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:32.121 23:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:32.121 23:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:32.121 23:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:32.121 23:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:32.121 23:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:32.121 23:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:32.379 23:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:32.638 [ 00:15:32.638 { 00:15:32.638 "name": "BaseBdev1", 00:15:32.638 "aliases": [ 00:15:32.638 "36c3170b-d289-49d6-a687-28f485cc5a58" 
00:15:32.638 ], 00:15:32.638 "product_name": "Malloc disk", 00:15:32.638 "block_size": 512, 00:15:32.638 "num_blocks": 65536, 00:15:32.638 "uuid": "36c3170b-d289-49d6-a687-28f485cc5a58", 00:15:32.638 "assigned_rate_limits": { 00:15:32.638 "rw_ios_per_sec": 0, 00:15:32.638 "rw_mbytes_per_sec": 0, 00:15:32.638 "r_mbytes_per_sec": 0, 00:15:32.638 "w_mbytes_per_sec": 0 00:15:32.638 }, 00:15:32.638 "claimed": true, 00:15:32.638 "claim_type": "exclusive_write", 00:15:32.638 "zoned": false, 00:15:32.638 "supported_io_types": { 00:15:32.638 "read": true, 00:15:32.638 "write": true, 00:15:32.638 "unmap": true, 00:15:32.638 "flush": true, 00:15:32.638 "reset": true, 00:15:32.638 "nvme_admin": false, 00:15:32.638 "nvme_io": false, 00:15:32.638 "nvme_io_md": false, 00:15:32.638 "write_zeroes": true, 00:15:32.638 "zcopy": true, 00:15:32.638 "get_zone_info": false, 00:15:32.638 "zone_management": false, 00:15:32.638 "zone_append": false, 00:15:32.638 "compare": false, 00:15:32.638 "compare_and_write": false, 00:15:32.638 "abort": true, 00:15:32.638 "seek_hole": false, 00:15:32.638 "seek_data": false, 00:15:32.638 "copy": true, 00:15:32.638 "nvme_iov_md": false 00:15:32.638 }, 00:15:32.638 "memory_domains": [ 00:15:32.638 { 00:15:32.638 "dma_device_id": "system", 00:15:32.638 "dma_device_type": 1 00:15:32.638 }, 00:15:32.638 { 00:15:32.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.639 "dma_device_type": 2 00:15:32.639 } 00:15:32.639 ], 00:15:32.639 "driver_specific": {} 00:15:32.639 } 00:15:32.639 ] 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.639 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.898 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.898 "name": "Existed_Raid", 00:15:32.898 "uuid": "a9e76334-8d71-4846-9a2a-80c0440e71ce", 00:15:32.898 "strip_size_kb": 0, 00:15:32.898 "state": "configuring", 00:15:32.898 "raid_level": "raid1", 00:15:32.898 "superblock": true, 00:15:32.898 "num_base_bdevs": 2, 00:15:32.898 "num_base_bdevs_discovered": 
1, 00:15:32.898 "num_base_bdevs_operational": 2, 00:15:32.898 "base_bdevs_list": [ 00:15:32.898 { 00:15:32.898 "name": "BaseBdev1", 00:15:32.898 "uuid": "36c3170b-d289-49d6-a687-28f485cc5a58", 00:15:32.898 "is_configured": true, 00:15:32.898 "data_offset": 2048, 00:15:32.898 "data_size": 63488 00:15:32.898 }, 00:15:32.898 { 00:15:32.898 "name": "BaseBdev2", 00:15:32.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.898 "is_configured": false, 00:15:32.898 "data_offset": 0, 00:15:32.898 "data_size": 0 00:15:32.898 } 00:15:32.898 ] 00:15:32.898 }' 00:15:32.898 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.898 23:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.156 23:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:33.415 [2024-07-24 23:59:29.078515] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.415 [2024-07-24 23:59:29.078573] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:15:33.415 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:33.673 [2024-07-24 23:59:29.334588] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.673 [2024-07-24 23:59:29.336682] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.673 [2024-07-24 23:59:29.336749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.673 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.674 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:33.932 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.932 "name": "Existed_Raid", 00:15:33.932 "uuid": "cf93dd36-1f03-4750-8b86-86b9d7de8eb2", 00:15:33.932 "strip_size_kb": 0, 00:15:33.932 "state": "configuring", 00:15:33.932 "raid_level": "raid1", 00:15:33.932 "superblock": true, 00:15:33.932 "num_base_bdevs": 2, 00:15:33.932 "num_base_bdevs_discovered": 1, 00:15:33.932 "num_base_bdevs_operational": 2, 00:15:33.932 "base_bdevs_list": [ 00:15:33.932 { 00:15:33.932 "name": "BaseBdev1", 00:15:33.932 "uuid": "36c3170b-d289-49d6-a687-28f485cc5a58", 00:15:33.932 "is_configured": true, 00:15:33.932 "data_offset": 2048, 00:15:33.932 "data_size": 63488 00:15:33.932 }, 00:15:33.932 { 00:15:33.932 "name": "BaseBdev2", 00:15:33.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.932 "is_configured": false, 00:15:33.932 "data_offset": 0, 00:15:33.932 "data_size": 0 00:15:33.932 } 00:15:33.932 ] 00:15:33.932 }' 00:15:33.932 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.932 23:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.191 23:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:34.450 [2024-07-24 23:59:30.187994] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.450 [2024-07-24 23:59:30.188296] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:15:34.450 [2024-07-24 23:59:30.188315] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:34.450 [2024-07-24 23:59:30.188422] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:15:34.450 [2024-07-24 23:59:30.188782] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:15:34.450 [2024-07-24 23:59:30.188803] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:15:34.450 [2024-07-24 23:59:30.189028] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.450 BaseBdev2 00:15:34.450 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:34.450 23:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:34.450 23:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:34.450 23:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:34.450 23:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:34.450 23:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:34.450 23:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.709 23:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.968 [ 00:15:34.968 { 00:15:34.968 "name": "BaseBdev2", 00:15:34.968 "aliases": [ 00:15:34.968 
"b3c20a6f-9baf-40dd-bcb1-49d6fda5fa51" 00:15:34.968 ], 00:15:34.968 "product_name": "Malloc disk", 00:15:34.968 "block_size": 512, 00:15:34.968 "num_blocks": 65536, 00:15:34.968 "uuid": "b3c20a6f-9baf-40dd-bcb1-49d6fda5fa51", 00:15:34.968 "assigned_rate_limits": { 00:15:34.968 "rw_ios_per_sec": 0, 00:15:34.968 "rw_mbytes_per_sec": 0, 00:15:34.968 "r_mbytes_per_sec": 0, 00:15:34.968 "w_mbytes_per_sec": 0 00:15:34.968 }, 00:15:34.968 "claimed": true, 00:15:34.968 "claim_type": "exclusive_write", 00:15:34.968 "zoned": false, 00:15:34.968 "supported_io_types": { 00:15:34.968 "read": true, 00:15:34.968 "write": true, 00:15:34.968 "unmap": true, 00:15:34.968 "flush": true, 00:15:34.968 "reset": true, 00:15:34.968 "nvme_admin": false, 00:15:34.968 "nvme_io": false, 00:15:34.968 "nvme_io_md": false, 00:15:34.968 "write_zeroes": true, 00:15:34.968 "zcopy": true, 00:15:34.968 "get_zone_info": false, 00:15:34.968 "zone_management": false, 00:15:34.968 "zone_append": false, 00:15:34.968 "compare": false, 00:15:34.968 "compare_and_write": false, 00:15:34.968 "abort": true, 00:15:34.968 "seek_hole": false, 00:15:34.968 "seek_data": false, 00:15:34.968 "copy": true, 00:15:34.968 "nvme_iov_md": false 00:15:34.968 }, 00:15:34.968 "memory_domains": [ 00:15:34.968 { 00:15:34.968 "dma_device_id": "system", 00:15:34.968 "dma_device_type": 1 00:15:34.968 }, 00:15:34.968 { 00:15:34.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.968 "dma_device_type": 2 00:15:34.968 } 00:15:34.968 ], 00:15:34.968 "driver_specific": {} 00:15:34.968 } 00:15:34.968 ] 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.968 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.227 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.227 "name": "Existed_Raid", 00:15:35.227 "uuid": 
"cf93dd36-1f03-4750-8b86-86b9d7de8eb2", 00:15:35.227 "strip_size_kb": 0, 00:15:35.227 "state": "online", 00:15:35.227 "raid_level": "raid1", 00:15:35.227 "superblock": true, 00:15:35.227 "num_base_bdevs": 2, 00:15:35.227 "num_base_bdevs_discovered": 2, 00:15:35.227 "num_base_bdevs_operational": 2, 00:15:35.227 "base_bdevs_list": [ 00:15:35.227 { 00:15:35.227 "name": "BaseBdev1", 00:15:35.227 "uuid": "36c3170b-d289-49d6-a687-28f485cc5a58", 00:15:35.227 "is_configured": true, 00:15:35.227 "data_offset": 2048, 00:15:35.227 "data_size": 63488 00:15:35.227 }, 00:15:35.227 { 00:15:35.227 "name": "BaseBdev2", 00:15:35.227 "uuid": "b3c20a6f-9baf-40dd-bcb1-49d6fda5fa51", 00:15:35.227 "is_configured": true, 00:15:35.227 "data_offset": 2048, 00:15:35.227 "data_size": 63488 00:15:35.227 } 00:15:35.227 ] 00:15:35.227 }' 00:15:35.227 23:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.227 23:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.486 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:35.486 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:35.486 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:35.486 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:35.486 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:35.486 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:35.486 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:35.486 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:35.746 [2024-07-24 23:59:31.532749] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.746 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:35.746 "name": "Existed_Raid", 00:15:35.746 "aliases": [ 00:15:35.746 "cf93dd36-1f03-4750-8b86-86b9d7de8eb2" 00:15:35.746 ], 00:15:35.746 "product_name": "Raid Volume", 00:15:35.746 "block_size": 512, 00:15:35.746 "num_blocks": 63488, 00:15:35.746 "uuid": "cf93dd36-1f03-4750-8b86-86b9d7de8eb2", 00:15:35.746 "assigned_rate_limits": { 00:15:35.746 "rw_ios_per_sec": 0, 00:15:35.746 "rw_mbytes_per_sec": 0, 00:15:35.746 "r_mbytes_per_sec": 0, 00:15:35.746 "w_mbytes_per_sec": 0 00:15:35.746 }, 00:15:35.746 "claimed": false, 00:15:35.746 "zoned": false, 00:15:35.746 "supported_io_types": { 00:15:35.746 "read": true, 00:15:35.746 "write": true, 00:15:35.746 "unmap": false, 00:15:35.746 "flush": false, 00:15:35.746 "reset": true, 00:15:35.746 "nvme_admin": false, 00:15:35.746 "nvme_io": false, 00:15:35.746 "nvme_io_md": false, 00:15:35.746 "write_zeroes": true, 00:15:35.746 "zcopy": false, 00:15:35.746 "get_zone_info": false, 00:15:35.746 "zone_management": false, 00:15:35.746 "zone_append": false, 00:15:35.746 "compare": false, 00:15:35.746 "compare_and_write": false, 00:15:35.746 "abort": false, 00:15:35.746 "seek_hole": false, 00:15:35.746 "seek_data": false, 00:15:35.746 "copy": false, 00:15:35.746 "nvme_iov_md": false 00:15:35.746 }, 00:15:35.746 "memory_domains": [ 00:15:35.746 { 00:15:35.746 
"dma_device_id": "system", 00:15:35.746 "dma_device_type": 1 00:15:35.746 }, 00:15:35.746 { 00:15:35.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.746 "dma_device_type": 2 00:15:35.746 }, 00:15:35.746 { 00:15:35.746 "dma_device_id": "system", 00:15:35.746 "dma_device_type": 1 00:15:35.746 }, 00:15:35.746 { 00:15:35.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.746 "dma_device_type": 2 00:15:35.746 } 00:15:35.746 ], 00:15:35.746 "driver_specific": { 00:15:35.746 "raid": { 00:15:35.746 "uuid": "cf93dd36-1f03-4750-8b86-86b9d7de8eb2", 00:15:35.746 "strip_size_kb": 0, 00:15:35.746 "state": "online", 00:15:35.746 "raid_level": "raid1", 00:15:35.746 "superblock": true, 00:15:35.746 "num_base_bdevs": 2, 00:15:35.746 "num_base_bdevs_discovered": 2, 00:15:35.746 "num_base_bdevs_operational": 2, 00:15:35.746 "base_bdevs_list": [ 00:15:35.746 { 00:15:35.746 "name": "BaseBdev1", 00:15:35.746 "uuid": "36c3170b-d289-49d6-a687-28f485cc5a58", 00:15:35.746 "is_configured": true, 00:15:35.746 "data_offset": 2048, 00:15:35.746 "data_size": 63488 00:15:35.746 }, 00:15:35.746 { 00:15:35.746 "name": "BaseBdev2", 00:15:35.746 "uuid": "b3c20a6f-9baf-40dd-bcb1-49d6fda5fa51", 00:15:35.746 "is_configured": true, 00:15:35.746 "data_offset": 2048, 00:15:35.746 "data_size": 63488 00:15:35.746 } 00:15:35.746 ] 00:15:35.746 } 00:15:35.746 } 00:15:35.746 }' 00:15:35.746 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:35.746 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:35.746 BaseBdev2' 00:15:35.746 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:35.746 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.746 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:36.005 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:36.005 "name": "BaseBdev1", 00:15:36.005 "aliases": [ 00:15:36.005 "36c3170b-d289-49d6-a687-28f485cc5a58" 00:15:36.005 ], 00:15:36.005 "product_name": "Malloc disk", 00:15:36.005 "block_size": 512, 00:15:36.005 "num_blocks": 65536, 00:15:36.005 "uuid": "36c3170b-d289-49d6-a687-28f485cc5a58", 00:15:36.005 "assigned_rate_limits": { 00:15:36.005 "rw_ios_per_sec": 0, 00:15:36.006 "rw_mbytes_per_sec": 0, 00:15:36.006 "r_mbytes_per_sec": 0, 00:15:36.006 "w_mbytes_per_sec": 0 00:15:36.006 }, 00:15:36.006 "claimed": true, 00:15:36.006 "claim_type": "exclusive_write", 00:15:36.006 "zoned": false, 00:15:36.006 "supported_io_types": { 00:15:36.006 "read": true, 00:15:36.006 "write": true, 00:15:36.006 "unmap": true, 00:15:36.006 "flush": true, 00:15:36.006 "reset": true, 00:15:36.006 "nvme_admin": false, 00:15:36.006 "nvme_io": false, 00:15:36.006 "nvme_io_md": false, 00:15:36.006 "write_zeroes": true, 00:15:36.006 "zcopy": true, 00:15:36.006 "get_zone_info": false, 00:15:36.006 "zone_management": false, 00:15:36.006 "zone_append": false, 00:15:36.006 "compare": false, 00:15:36.006 "compare_and_write": false, 00:15:36.006 "abort": true, 00:15:36.006 "seek_hole": false, 00:15:36.006 "seek_data": false, 00:15:36.006 "copy": true, 00:15:36.006 "nvme_iov_md": false 00:15:36.006 }, 00:15:36.006 "memory_domains": [ 00:15:36.006 { 00:15:36.006 
"dma_device_id": "system", 00:15:36.006 "dma_device_type": 1 00:15:36.006 }, 00:15:36.006 { 00:15:36.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.006 "dma_device_type": 2 00:15:36.006 } 00:15:36.006 ], 00:15:36.006 "driver_specific": {} 00:15:36.006 }' 00:15:36.006 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.006 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.006 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:36.006 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.264 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.264 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:36.265 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.265 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.265 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:36.265 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.265 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.265 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:36.265 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:36.265 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:36.265 23:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:36.523 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:36.523 "name": "BaseBdev2", 00:15:36.523 "aliases": [ 00:15:36.523 "b3c20a6f-9baf-40dd-bcb1-49d6fda5fa51" 00:15:36.523 ], 00:15:36.523 "product_name": "Malloc disk", 00:15:36.523 "block_size": 512, 00:15:36.523 "num_blocks": 65536, 00:15:36.523 "uuid": "b3c20a6f-9baf-40dd-bcb1-49d6fda5fa51", 00:15:36.523 "assigned_rate_limits": { 00:15:36.523 "rw_ios_per_sec": 0, 00:15:36.523 "rw_mbytes_per_sec": 0, 00:15:36.523 "r_mbytes_per_sec": 0, 00:15:36.523 "w_mbytes_per_sec": 0 00:15:36.523 }, 00:15:36.523 "claimed": true, 00:15:36.523 "claim_type": "exclusive_write", 00:15:36.523 "zoned": false, 00:15:36.523 "supported_io_types": { 00:15:36.523 "read": true, 00:15:36.523 "write": true, 00:15:36.523 "unmap": true, 00:15:36.523 "flush": true, 00:15:36.523 "reset": true, 00:15:36.523 "nvme_admin": false, 00:15:36.523 "nvme_io": false, 00:15:36.523 "nvme_io_md": false, 00:15:36.523 "write_zeroes": true, 00:15:36.523 "zcopy": true, 00:15:36.523 "get_zone_info": false, 00:15:36.523 "zone_management": false, 00:15:36.523 "zone_append": false, 00:15:36.523 "compare": false, 00:15:36.523 "compare_and_write": false, 00:15:36.523 "abort": true, 00:15:36.523 "seek_hole": false, 00:15:36.523 "seek_data": false, 00:15:36.523 "copy": true, 00:15:36.524 "nvme_iov_md": false 00:15:36.524 }, 00:15:36.524 "memory_domains": [ 00:15:36.524 { 00:15:36.524 "dma_device_id": "system", 00:15:36.524 "dma_device_type": 1 00:15:36.524 }, 00:15:36.524 { 00:15:36.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:36.524 "dma_device_type": 2 00:15:36.524 } 00:15:36.524 ], 00:15:36.524 "driver_specific": {} 00:15:36.524 }' 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:36.524 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:36.783 [2024-07-24 23:59:32.468765] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:36.783 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.042 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:37.042 "name": "Existed_Raid", 00:15:37.042 "uuid": "cf93dd36-1f03-4750-8b86-86b9d7de8eb2", 00:15:37.042 "strip_size_kb": 0, 00:15:37.042 "state": "online", 00:15:37.042 "raid_level": "raid1", 00:15:37.042 "superblock": true, 00:15:37.042 "num_base_bdevs": 2, 00:15:37.042 "num_base_bdevs_discovered": 1, 00:15:37.042 "num_base_bdevs_operational": 1, 00:15:37.042 "base_bdevs_list": [ 00:15:37.042 { 00:15:37.042 "name": null, 00:15:37.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.042 "is_configured": false, 00:15:37.042 "data_offset": 2048, 00:15:37.042 "data_size": 63488 00:15:37.042 }, 00:15:37.042 { 00:15:37.042 "name": "BaseBdev2", 00:15:37.042 "uuid": "b3c20a6f-9baf-40dd-bcb1-49d6fda5fa51", 00:15:37.042 "is_configured": true, 00:15:37.042 "data_offset": 2048, 00:15:37.042 "data_size": 63488 00:15:37.042 } 00:15:37.042 ] 00:15:37.042 }' 00:15:37.042 23:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:37.042 23:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.622 23:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:37.622 23:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:37.622 23:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.622 23:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:37.622 23:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:37.622 23:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.622 23:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:37.888 [2024-07-24 23:59:33.648621] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.888 [2024-07-24 23:59:33.648770] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.888 [2024-07-24 23:59:33.736225] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.888 [2024-07-24 23:59:33.736404] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.888 [2024-07-24 23:59:33.736498] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:15:37.888 23:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:37.888 23:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:37.888 23:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.888 23:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 78882 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78882 ']' 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 78882 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78882 00:15:38.457 killing process with pid 78882 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78882' 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 78882 00:15:38.457 23:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 78882 00:15:38.457 [2024-07-24 23:59:34.053470] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.457 [2024-07-24 23:59:34.053657] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:39.390 23:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:39.390 00:15:39.390 real 0m9.748s 00:15:39.390 user 0m16.149s 00:15:39.390 sys 0m1.415s 00:15:39.390 23:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:39.390 23:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.390 ************************************ 00:15:39.390 END TEST raid_state_function_test_sb 00:15:39.390 ************************************ 00:15:39.390 23:59:35 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:39.390 23:59:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:39.390 23:59:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:39.390 23:59:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:39.390 ************************************ 00:15:39.390 START TEST raid_superblock_test 00:15:39.390 ************************************ 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:15:39.390 23:59:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=79220 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 79220 /var/tmp/spdk-raid.sock 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79220 ']' 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:39.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:39.390 23:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.648 [2024-07-24 23:59:35.279295] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
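raid_superblock_test drives the same raid state machine through passthru bdevs: each malloc bdev gets wrapped in a passthru (pt1/pt2) and the pair is assembled into raid_bdev1 with -s so a superblock is written to the base bdevs. A condensed sketch of the stack the trace below builds, with $rpc as shorthand (not defined in the script itself) for the rpc.py invocation used throughout this log:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc bdev_malloc_create 32 512 -b malloc1    # 32 MiB backing store, 512-byte blocks
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_malloc_create 32 512 -b malloc2
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s    # -s requests an on-disk superblock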
00:15:39.648 [2024-07-24 23:59:35.279496] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79220 ] 00:15:39.648 [2024-07-24 23:59:35.455912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.907 [2024-07-24 23:59:35.694007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.165 [2024-07-24 23:59:35.866744] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.423 23:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.423 23:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:40.423 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:15:40.423 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:40.423 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:15:40.423 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:15:40.423 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:40.423 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:40.423 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:40.424 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:40.424 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:40.682 malloc1 00:15:40.682 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:40.940 [2024-07-24 23:59:36.700686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:40.940 [2024-07-24 23:59:36.700781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.940 [2024-07-24 23:59:36.700837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:15:40.940 [2024-07-24 23:59:36.700855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.940 [2024-07-24 23:59:36.703467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.940 [2024-07-24 23:59:36.703530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:40.940 pt1 00:15:40.940 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:40.940 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:40.940 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:15:40.940 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:15:40.940 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:40.940 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:15:40.940 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:40.940 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:40.940 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:41.198 malloc2 00:15:41.198 23:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:41.457 [2024-07-24 23:59:37.209267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:41.457 [2024-07-24 23:59:37.209384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.457 [2024-07-24 23:59:37.209418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:15:41.457 [2024-07-24 23:59:37.209433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.457 [2024-07-24 23:59:37.212082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.457 [2024-07-24 23:59:37.212143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:41.457 pt2 00:15:41.457 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:41.457 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:41.457 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:41.715 [2024-07-24 23:59:37.433392] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:41.715 [2024-07-24 23:59:37.435652] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:41.715 [2024-07-24 23:59:37.435930] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:15:41.715 [2024-07-24 23:59:37.435982] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:41.715 [2024-07-24 23:59:37.436151] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:15:41.715 [2024-07-24 23:59:37.436718] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:15:41.715 [2024-07-24 23:59:37.436751] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:15:41.715 [2024-07-24 23:59:37.436974] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.715 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.973 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.973 "name": "raid_bdev1", 00:15:41.973 "uuid": "1f033ffd-1133-4de0-aeab-89e4857f8414", 00:15:41.973 "strip_size_kb": 0, 00:15:41.973 "state": "online", 00:15:41.973 "raid_level": "raid1", 00:15:41.973 "superblock": true, 00:15:41.973 "num_base_bdevs": 2, 00:15:41.973 "num_base_bdevs_discovered": 2, 00:15:41.973 "num_base_bdevs_operational": 2, 00:15:41.973 "base_bdevs_list": [ 00:15:41.973 { 00:15:41.973 "name": "pt1", 00:15:41.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.973 "is_configured": true, 00:15:41.973 "data_offset": 2048, 00:15:41.973 "data_size": 63488 00:15:41.973 }, 00:15:41.973 { 00:15:41.973 "name": "pt2", 00:15:41.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.973 "is_configured": true, 00:15:41.973 "data_offset": 2048, 00:15:41.973 "data_size": 63488 00:15:41.973 } 00:15:41.973 ] 00:15:41.973 }' 00:15:41.973 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.973 23:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.232 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:15:42.232 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:42.232 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:42.232 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:42.232 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:42.232 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:42.232 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:42.232 23:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:42.491 [2024-07-24 23:59:38.222017] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.491 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:42.491 "name": "raid_bdev1", 00:15:42.491 "aliases": [ 00:15:42.491 "1f033ffd-1133-4de0-aeab-89e4857f8414" 00:15:42.491 ], 00:15:42.491 "product_name": "Raid Volume", 00:15:42.491 "block_size": 512, 00:15:42.491 "num_blocks": 63488, 00:15:42.491 "uuid": "1f033ffd-1133-4de0-aeab-89e4857f8414", 00:15:42.491 "assigned_rate_limits": { 00:15:42.491 "rw_ios_per_sec": 0, 00:15:42.491 "rw_mbytes_per_sec": 0, 00:15:42.491 "r_mbytes_per_sec": 0, 00:15:42.491 "w_mbytes_per_sec": 0 00:15:42.491 }, 
00:15:42.491 "claimed": false, 00:15:42.491 "zoned": false, 00:15:42.491 "supported_io_types": { 00:15:42.491 "read": true, 00:15:42.491 "write": true, 00:15:42.491 "unmap": false, 00:15:42.491 "flush": false, 00:15:42.491 "reset": true, 00:15:42.491 "nvme_admin": false, 00:15:42.491 "nvme_io": false, 00:15:42.491 "nvme_io_md": false, 00:15:42.491 "write_zeroes": true, 00:15:42.491 "zcopy": false, 00:15:42.491 "get_zone_info": false, 00:15:42.491 "zone_management": false, 00:15:42.491 "zone_append": false, 00:15:42.491 "compare": false, 00:15:42.491 "compare_and_write": false, 00:15:42.491 "abort": false, 00:15:42.491 "seek_hole": false, 00:15:42.491 "seek_data": false, 00:15:42.491 "copy": false, 00:15:42.491 "nvme_iov_md": false 00:15:42.491 }, 00:15:42.491 "memory_domains": [ 00:15:42.491 { 00:15:42.491 "dma_device_id": "system", 00:15:42.491 "dma_device_type": 1 00:15:42.491 }, 00:15:42.491 { 00:15:42.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.491 "dma_device_type": 2 00:15:42.491 }, 00:15:42.491 { 00:15:42.491 "dma_device_id": "system", 00:15:42.491 "dma_device_type": 1 00:15:42.491 }, 00:15:42.491 { 00:15:42.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.491 "dma_device_type": 2 00:15:42.491 } 00:15:42.491 ], 00:15:42.491 "driver_specific": { 00:15:42.491 "raid": { 00:15:42.491 "uuid": "1f033ffd-1133-4de0-aeab-89e4857f8414", 00:15:42.491 "strip_size_kb": 0, 00:15:42.491 "state": "online", 00:15:42.491 "raid_level": "raid1", 00:15:42.491 "superblock": true, 00:15:42.491 "num_base_bdevs": 2, 00:15:42.491 "num_base_bdevs_discovered": 2, 00:15:42.491 "num_base_bdevs_operational": 2, 00:15:42.491 "base_bdevs_list": [ 00:15:42.491 { 00:15:42.491 "name": "pt1", 00:15:42.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:42.491 "is_configured": true, 00:15:42.491 "data_offset": 2048, 00:15:42.491 "data_size": 63488 00:15:42.491 }, 00:15:42.491 { 00:15:42.491 "name": "pt2", 00:15:42.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.491 "is_configured": true, 00:15:42.491 "data_offset": 2048, 00:15:42.491 "data_size": 63488 00:15:42.491 } 00:15:42.491 ] 00:15:42.491 } 00:15:42.491 } 00:15:42.491 }' 00:15:42.491 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:42.491 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:42.491 pt2' 00:15:42.491 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:42.491 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:42.491 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:42.750 "name": "pt1", 00:15:42.750 "aliases": [ 00:15:42.750 "00000000-0000-0000-0000-000000000001" 00:15:42.750 ], 00:15:42.750 "product_name": "passthru", 00:15:42.750 "block_size": 512, 00:15:42.750 "num_blocks": 65536, 00:15:42.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:42.750 "assigned_rate_limits": { 00:15:42.750 "rw_ios_per_sec": 0, 00:15:42.750 "rw_mbytes_per_sec": 0, 00:15:42.750 "r_mbytes_per_sec": 0, 00:15:42.750 "w_mbytes_per_sec": 0 00:15:42.750 }, 00:15:42.750 "claimed": true, 00:15:42.750 "claim_type": "exclusive_write", 00:15:42.750 "zoned": false, 00:15:42.750 
"supported_io_types": { 00:15:42.750 "read": true, 00:15:42.750 "write": true, 00:15:42.750 "unmap": true, 00:15:42.750 "flush": true, 00:15:42.750 "reset": true, 00:15:42.750 "nvme_admin": false, 00:15:42.750 "nvme_io": false, 00:15:42.750 "nvme_io_md": false, 00:15:42.750 "write_zeroes": true, 00:15:42.750 "zcopy": true, 00:15:42.750 "get_zone_info": false, 00:15:42.750 "zone_management": false, 00:15:42.750 "zone_append": false, 00:15:42.750 "compare": false, 00:15:42.750 "compare_and_write": false, 00:15:42.750 "abort": true, 00:15:42.750 "seek_hole": false, 00:15:42.750 "seek_data": false, 00:15:42.750 "copy": true, 00:15:42.750 "nvme_iov_md": false 00:15:42.750 }, 00:15:42.750 "memory_domains": [ 00:15:42.750 { 00:15:42.750 "dma_device_id": "system", 00:15:42.750 "dma_device_type": 1 00:15:42.750 }, 00:15:42.750 { 00:15:42.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.750 "dma_device_type": 2 00:15:42.750 } 00:15:42.750 ], 00:15:42.750 "driver_specific": { 00:15:42.750 "passthru": { 00:15:42.750 "name": "pt1", 00:15:42.750 "base_bdev_name": "malloc1" 00:15:42.750 } 00:15:42.750 } 00:15:42.750 }' 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:42.750 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:43.009 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:43.009 "name": "pt2", 00:15:43.009 "aliases": [ 00:15:43.009 "00000000-0000-0000-0000-000000000002" 00:15:43.009 ], 00:15:43.009 "product_name": "passthru", 00:15:43.009 "block_size": 512, 00:15:43.009 "num_blocks": 65536, 00:15:43.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.009 "assigned_rate_limits": { 00:15:43.009 "rw_ios_per_sec": 0, 00:15:43.009 "rw_mbytes_per_sec": 0, 00:15:43.009 "r_mbytes_per_sec": 0, 00:15:43.009 "w_mbytes_per_sec": 0 00:15:43.009 }, 00:15:43.009 "claimed": true, 00:15:43.009 "claim_type": "exclusive_write", 00:15:43.009 "zoned": false, 00:15:43.009 "supported_io_types": { 00:15:43.009 "read": true, 00:15:43.009 "write": true, 00:15:43.009 "unmap": true, 00:15:43.009 "flush": true, 00:15:43.009 
"reset": true, 00:15:43.009 "nvme_admin": false, 00:15:43.009 "nvme_io": false, 00:15:43.009 "nvme_io_md": false, 00:15:43.009 "write_zeroes": true, 00:15:43.009 "zcopy": true, 00:15:43.009 "get_zone_info": false, 00:15:43.009 "zone_management": false, 00:15:43.009 "zone_append": false, 00:15:43.009 "compare": false, 00:15:43.009 "compare_and_write": false, 00:15:43.009 "abort": true, 00:15:43.009 "seek_hole": false, 00:15:43.009 "seek_data": false, 00:15:43.009 "copy": true, 00:15:43.009 "nvme_iov_md": false 00:15:43.009 }, 00:15:43.009 "memory_domains": [ 00:15:43.009 { 00:15:43.009 "dma_device_id": "system", 00:15:43.009 "dma_device_type": 1 00:15:43.009 }, 00:15:43.009 { 00:15:43.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.009 "dma_device_type": 2 00:15:43.009 } 00:15:43.009 ], 00:15:43.009 "driver_specific": { 00:15:43.009 "passthru": { 00:15:43.009 "name": "pt2", 00:15:43.009 "base_bdev_name": "malloc2" 00:15:43.009 } 00:15:43.009 } 00:15:43.009 }' 00:15:43.009 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:43.009 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:43.009 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:43.009 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:43.009 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:43.269 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:43.269 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:43.269 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:43.269 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:43.269 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:43.269 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:43.269 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:43.269 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:43.269 23:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:15:43.269 [2024-07-24 23:59:39.122158] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.527 23:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=1f033ffd-1133-4de0-aeab-89e4857f8414 00:15:43.527 23:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 1f033ffd-1133-4de0-aeab-89e4857f8414 ']' 00:15:43.528 23:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:43.528 [2024-07-24 23:59:39.393956] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.528 [2024-07-24 23:59:39.393987] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.528 [2024-07-24 23:59:39.394072] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.528 [2024-07-24 23:59:39.394141] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:15:43.528 [2024-07-24 23:59:39.394190] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:15:43.786 23:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.786 23:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:15:43.786 23:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:15:43.786 23:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:15:43.786 23:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:43.786 23:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:44.046 23:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.046 23:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:44.304 23:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:44.305 23:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:44.563 23:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:15:44.563 23:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:44.563 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:44.563 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:44.563 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.563 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:44.563 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.563 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:44.563 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.563 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:44.564 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.564 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:44.564 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:44.823 [2024-07-24 23:59:40.590366] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:44.823 [2024-07-24 23:59:40.592929] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:44.823 [2024-07-24 23:59:40.593153] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:44.823 [2024-07-24 23:59:40.593458] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:44.823 [2024-07-24 23:59:40.593764] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.823 [2024-07-24 23:59:40.593942] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state configuring 00:15:44.823 request: 00:15:44.823 { 00:15:44.823 "name": "raid_bdev1", 00:15:44.823 "raid_level": "raid1", 00:15:44.823 "base_bdevs": [ 00:15:44.823 "malloc1", 00:15:44.823 "malloc2" 00:15:44.823 ], 00:15:44.823 "superblock": false, 00:15:44.823 "method": "bdev_raid_create", 00:15:44.823 "req_id": 1 00:15:44.823 } 00:15:44.823 Got JSON-RPC error response 00:15:44.823 response: 00:15:44.823 { 00:15:44.823 "code": -17, 00:15:44.823 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:44.823 } 00:15:44.823 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:44.823 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:44.823 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:44.823 23:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:44.823 23:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.823 23:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:15:45.082 23:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:15:45.082 23:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:15:45.082 23:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:45.341 [2024-07-24 23:59:41.098557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:45.341 [2024-07-24 23:59:41.098648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.341 [2024-07-24 23:59:41.098674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:15:45.341 [2024-07-24 23:59:41.098689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.341 [2024-07-24 23:59:41.101160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.341 [2024-07-24 23:59:41.101236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:45.341 [2024-07-24 23:59:41.101334] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:45.341 [2024-07-24 23:59:41.101405] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:45.341 pt1 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.341 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.600 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.600 "name": "raid_bdev1", 00:15:45.600 "uuid": "1f033ffd-1133-4de0-aeab-89e4857f8414", 00:15:45.600 "strip_size_kb": 0, 00:15:45.600 "state": "configuring", 00:15:45.600 "raid_level": "raid1", 00:15:45.600 "superblock": true, 00:15:45.600 "num_base_bdevs": 2, 00:15:45.600 "num_base_bdevs_discovered": 1, 00:15:45.600 "num_base_bdevs_operational": 2, 00:15:45.600 "base_bdevs_list": [ 00:15:45.600 { 00:15:45.600 "name": "pt1", 00:15:45.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.600 "is_configured": true, 00:15:45.600 "data_offset": 2048, 00:15:45.600 "data_size": 63488 00:15:45.600 }, 00:15:45.600 { 00:15:45.600 "name": null, 00:15:45.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.600 "is_configured": false, 00:15:45.600 "data_offset": 2048, 00:15:45.600 "data_size": 63488 00:15:45.600 } 00:15:45.600 ] 00:15:45.600 }' 00:15:45.600 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.600 23:59:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.859 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:15:45.859 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:15:45.859 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:45.859 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.118 [2024-07-24 23:59:41.894853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.118 [2024-07-24 23:59:41.894943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.118 [2024-07-24 23:59:41.894984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:15:46.118 [2024-07-24 23:59:41.895014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.118 [2024-07-24 23:59:41.895537] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.118 [2024-07-24 23:59:41.895568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.118 [2024-07-24 23:59:41.895667] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:46.118 [2024-07-24 23:59:41.895701] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.118 [2024-07-24 23:59:41.895889] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:15:46.118 [2024-07-24 23:59:41.895912] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:46.118 [2024-07-24 23:59:41.896032] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:15:46.118 [2024-07-24 23:59:41.896405] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:15:46.118 [2024-07-24 23:59:41.896422] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:15:46.118 [2024-07-24 23:59:41.896582] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.118 pt2 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.118 23:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.377 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.378 "name": "raid_bdev1", 00:15:46.378 "uuid": "1f033ffd-1133-4de0-aeab-89e4857f8414", 00:15:46.378 "strip_size_kb": 0, 00:15:46.378 "state": "online", 00:15:46.378 "raid_level": "raid1", 00:15:46.378 "superblock": true, 00:15:46.378 "num_base_bdevs": 2, 00:15:46.378 "num_base_bdevs_discovered": 2, 00:15:46.378 "num_base_bdevs_operational": 2, 00:15:46.378 "base_bdevs_list": [ 00:15:46.378 { 00:15:46.378 "name": "pt1", 00:15:46.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.378 "is_configured": true, 00:15:46.378 "data_offset": 2048, 00:15:46.378 "data_size": 63488 00:15:46.378 }, 00:15:46.378 { 
00:15:46.378 "name": "pt2", 00:15:46.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.378 "is_configured": true, 00:15:46.378 "data_offset": 2048, 00:15:46.378 "data_size": 63488 00:15:46.378 } 00:15:46.378 ] 00:15:46.378 }' 00:15:46.378 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.378 23:59:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.636 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:15:46.636 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:46.636 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:46.636 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:46.637 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:46.637 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:46.637 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:46.637 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:46.895 [2024-07-24 23:59:42.751383] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.155 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:47.155 "name": "raid_bdev1", 00:15:47.155 "aliases": [ 00:15:47.155 "1f033ffd-1133-4de0-aeab-89e4857f8414" 00:15:47.155 ], 00:15:47.155 "product_name": "Raid Volume", 00:15:47.155 "block_size": 512, 00:15:47.155 "num_blocks": 63488, 00:15:47.155 "uuid": "1f033ffd-1133-4de0-aeab-89e4857f8414", 00:15:47.155 "assigned_rate_limits": { 00:15:47.155 "rw_ios_per_sec": 0, 00:15:47.155 "rw_mbytes_per_sec": 0, 00:15:47.155 "r_mbytes_per_sec": 0, 00:15:47.155 "w_mbytes_per_sec": 0 00:15:47.155 }, 00:15:47.155 "claimed": false, 00:15:47.155 "zoned": false, 00:15:47.155 "supported_io_types": { 00:15:47.155 "read": true, 00:15:47.155 "write": true, 00:15:47.155 "unmap": false, 00:15:47.155 "flush": false, 00:15:47.155 "reset": true, 00:15:47.155 "nvme_admin": false, 00:15:47.155 "nvme_io": false, 00:15:47.155 "nvme_io_md": false, 00:15:47.155 "write_zeroes": true, 00:15:47.155 "zcopy": false, 00:15:47.155 "get_zone_info": false, 00:15:47.155 "zone_management": false, 00:15:47.155 "zone_append": false, 00:15:47.155 "compare": false, 00:15:47.155 "compare_and_write": false, 00:15:47.155 "abort": false, 00:15:47.155 "seek_hole": false, 00:15:47.155 "seek_data": false, 00:15:47.155 "copy": false, 00:15:47.155 "nvme_iov_md": false 00:15:47.155 }, 00:15:47.155 "memory_domains": [ 00:15:47.155 { 00:15:47.155 "dma_device_id": "system", 00:15:47.155 "dma_device_type": 1 00:15:47.155 }, 00:15:47.155 { 00:15:47.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.155 "dma_device_type": 2 00:15:47.155 }, 00:15:47.155 { 00:15:47.155 "dma_device_id": "system", 00:15:47.155 "dma_device_type": 1 00:15:47.155 }, 00:15:47.155 { 00:15:47.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.155 "dma_device_type": 2 00:15:47.155 } 00:15:47.155 ], 00:15:47.155 "driver_specific": { 00:15:47.155 "raid": { 00:15:47.155 "uuid": "1f033ffd-1133-4de0-aeab-89e4857f8414", 00:15:47.155 "strip_size_kb": 0, 00:15:47.155 "state": "online", 00:15:47.155 "raid_level": "raid1", 
00:15:47.155 "superblock": true, 00:15:47.155 "num_base_bdevs": 2, 00:15:47.155 "num_base_bdevs_discovered": 2, 00:15:47.155 "num_base_bdevs_operational": 2, 00:15:47.155 "base_bdevs_list": [ 00:15:47.155 { 00:15:47.155 "name": "pt1", 00:15:47.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.155 "is_configured": true, 00:15:47.155 "data_offset": 2048, 00:15:47.155 "data_size": 63488 00:15:47.155 }, 00:15:47.155 { 00:15:47.155 "name": "pt2", 00:15:47.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.155 "is_configured": true, 00:15:47.155 "data_offset": 2048, 00:15:47.155 "data_size": 63488 00:15:47.155 } 00:15:47.155 ] 00:15:47.155 } 00:15:47.155 } 00:15:47.155 }' 00:15:47.155 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.155 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:47.155 pt2' 00:15:47.155 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.155 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:47.155 23:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:47.415 "name": "pt1", 00:15:47.415 "aliases": [ 00:15:47.415 "00000000-0000-0000-0000-000000000001" 00:15:47.415 ], 00:15:47.415 "product_name": "passthru", 00:15:47.415 "block_size": 512, 00:15:47.415 "num_blocks": 65536, 00:15:47.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.415 "assigned_rate_limits": { 00:15:47.415 "rw_ios_per_sec": 0, 00:15:47.415 "rw_mbytes_per_sec": 0, 00:15:47.415 "r_mbytes_per_sec": 0, 00:15:47.415 "w_mbytes_per_sec": 0 00:15:47.415 }, 00:15:47.415 "claimed": true, 00:15:47.415 "claim_type": "exclusive_write", 00:15:47.415 "zoned": false, 00:15:47.415 "supported_io_types": { 00:15:47.415 "read": true, 00:15:47.415 "write": true, 00:15:47.415 "unmap": true, 00:15:47.415 "flush": true, 00:15:47.415 "reset": true, 00:15:47.415 "nvme_admin": false, 00:15:47.415 "nvme_io": false, 00:15:47.415 "nvme_io_md": false, 00:15:47.415 "write_zeroes": true, 00:15:47.415 "zcopy": true, 00:15:47.415 "get_zone_info": false, 00:15:47.415 "zone_management": false, 00:15:47.415 "zone_append": false, 00:15:47.415 "compare": false, 00:15:47.415 "compare_and_write": false, 00:15:47.415 "abort": true, 00:15:47.415 "seek_hole": false, 00:15:47.415 "seek_data": false, 00:15:47.415 "copy": true, 00:15:47.415 "nvme_iov_md": false 00:15:47.415 }, 00:15:47.415 "memory_domains": [ 00:15:47.415 { 00:15:47.415 "dma_device_id": "system", 00:15:47.415 "dma_device_type": 1 00:15:47.415 }, 00:15:47.415 { 00:15:47.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.415 "dma_device_type": 2 00:15:47.415 } 00:15:47.415 ], 00:15:47.415 "driver_specific": { 00:15:47.415 "passthru": { 00:15:47.415 "name": "pt1", 00:15:47.415 "base_bdev_name": "malloc1" 00:15:47.415 } 00:15:47.415 } 00:15:47.415 }' 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:47.415 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:47.676 "name": "pt2", 00:15:47.676 "aliases": [ 00:15:47.676 "00000000-0000-0000-0000-000000000002" 00:15:47.676 ], 00:15:47.676 "product_name": "passthru", 00:15:47.676 "block_size": 512, 00:15:47.676 "num_blocks": 65536, 00:15:47.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.676 "assigned_rate_limits": { 00:15:47.676 "rw_ios_per_sec": 0, 00:15:47.676 "rw_mbytes_per_sec": 0, 00:15:47.676 "r_mbytes_per_sec": 0, 00:15:47.676 "w_mbytes_per_sec": 0 00:15:47.676 }, 00:15:47.676 "claimed": true, 00:15:47.676 "claim_type": "exclusive_write", 00:15:47.676 "zoned": false, 00:15:47.676 "supported_io_types": { 00:15:47.676 "read": true, 00:15:47.676 "write": true, 00:15:47.676 "unmap": true, 00:15:47.676 "flush": true, 00:15:47.676 "reset": true, 00:15:47.676 "nvme_admin": false, 00:15:47.676 "nvme_io": false, 00:15:47.676 "nvme_io_md": false, 00:15:47.676 "write_zeroes": true, 00:15:47.676 "zcopy": true, 00:15:47.676 "get_zone_info": false, 00:15:47.676 "zone_management": false, 00:15:47.676 "zone_append": false, 00:15:47.676 "compare": false, 00:15:47.676 "compare_and_write": false, 00:15:47.676 "abort": true, 00:15:47.676 "seek_hole": false, 00:15:47.676 "seek_data": false, 00:15:47.676 "copy": true, 00:15:47.676 "nvme_iov_md": false 00:15:47.676 }, 00:15:47.676 "memory_domains": [ 00:15:47.676 { 00:15:47.676 "dma_device_id": "system", 00:15:47.676 "dma_device_type": 1 00:15:47.676 }, 00:15:47.676 { 00:15:47.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.676 "dma_device_type": 2 00:15:47.676 } 00:15:47.676 ], 00:15:47.676 "driver_specific": { 00:15:47.676 "passthru": { 00:15:47.676 "name": "pt2", 00:15:47.676 "base_bdev_name": "malloc2" 00:15:47.676 } 00:15:47.676 } 00:15:47.676 }' 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.676 
23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:47.676 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:15:47.937 [2024-07-24 23:59:43.671674] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.937 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 1f033ffd-1133-4de0-aeab-89e4857f8414 '!=' 1f033ffd-1133-4de0-aeab-89e4857f8414 ']' 00:15:47.937 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:15:47.937 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:47.937 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:47.937 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:48.196 [2024-07-24 23:59:43.939544] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.196 23:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.454 23:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.454 "name": "raid_bdev1", 00:15:48.454 "uuid": "1f033ffd-1133-4de0-aeab-89e4857f8414", 00:15:48.454 "strip_size_kb": 0, 00:15:48.454 "state": "online", 00:15:48.454 "raid_level": "raid1", 00:15:48.454 
"superblock": true, 00:15:48.454 "num_base_bdevs": 2, 00:15:48.454 "num_base_bdevs_discovered": 1, 00:15:48.454 "num_base_bdevs_operational": 1, 00:15:48.454 "base_bdevs_list": [ 00:15:48.454 { 00:15:48.454 "name": null, 00:15:48.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.454 "is_configured": false, 00:15:48.454 "data_offset": 2048, 00:15:48.454 "data_size": 63488 00:15:48.454 }, 00:15:48.454 { 00:15:48.454 "name": "pt2", 00:15:48.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.454 "is_configured": true, 00:15:48.454 "data_offset": 2048, 00:15:48.454 "data_size": 63488 00:15:48.454 } 00:15:48.454 ] 00:15:48.454 }' 00:15:48.454 23:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.454 23:59:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.713 23:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:48.971 [2024-07-24 23:59:44.751720] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.971 [2024-07-24 23:59:44.751946] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.971 [2024-07-24 23:59:44.752056] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.971 [2024-07-24 23:59:44.752124] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.971 [2024-07-24 23:59:44.752144] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:15:48.971 23:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.971 23:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:15:49.231 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:15:49.231 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:15:49.231 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:15:49.231 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:15:49.231 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:49.490 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:15:49.490 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:15:49.490 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:15:49.490 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:15:49.490 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=1 00:15:49.490 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.749 [2024-07-24 23:59:45.431797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.749 [2024-07-24 23:59:45.432123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.749 [2024-07-24 23:59:45.432193] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:15:49.749 [2024-07-24 23:59:45.432437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.749 [2024-07-24 23:59:45.434913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.749 [2024-07-24 23:59:45.435120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.749 [2024-07-24 23:59:45.435376] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:49.749 [2024-07-24 23:59:45.435548] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.749 [2024-07-24 23:59:45.435790] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80 00:15:49.749 [2024-07-24 23:59:45.435945] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:49.749 [2024-07-24 23:59:45.436081] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:15:49.749 [2024-07-24 23:59:45.436548] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80 00:15:49.749 [2024-07-24 23:59:45.436682] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80 00:15:49.749 [2024-07-24 23:59:45.437063] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.749 pt2 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.749 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.009 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.009 "name": "raid_bdev1", 00:15:50.009 "uuid": "1f033ffd-1133-4de0-aeab-89e4857f8414", 00:15:50.009 "strip_size_kb": 0, 00:15:50.009 "state": "online", 00:15:50.009 "raid_level": "raid1", 00:15:50.009 "superblock": true, 00:15:50.009 "num_base_bdevs": 2, 00:15:50.009 "num_base_bdevs_discovered": 1, 00:15:50.009 "num_base_bdevs_operational": 1, 00:15:50.009 "base_bdevs_list": [ 00:15:50.009 { 00:15:50.009 "name": null, 00:15:50.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.009 "is_configured": false, 00:15:50.009 "data_offset": 
2048, 00:15:50.009 "data_size": 63488 00:15:50.009 }, 00:15:50.009 { 00:15:50.009 "name": "pt2", 00:15:50.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.009 "is_configured": true, 00:15:50.009 "data_offset": 2048, 00:15:50.009 "data_size": 63488 00:15:50.009 } 00:15:50.009 ] 00:15:50.009 }' 00:15:50.009 23:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.009 23:59:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.269 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:50.528 [2024-07-24 23:59:46.229215] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.528 [2024-07-24 23:59:46.229253] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.528 [2024-07-24 23:59:46.229328] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.528 [2024-07-24 23:59:46.229388] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.528 [2024-07-24 23:59:46.229402] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline 00:15:50.528 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.528 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:15:50.788 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:15:50.788 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:15:50.788 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:15:50.788 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:51.048 [2024-07-24 23:59:46.745477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.048 [2024-07-24 23:59:46.745579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.048 [2024-07-24 23:59:46.745614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:15:51.048 [2024-07-24 23:59:46.745639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.048 [2024-07-24 23:59:46.748365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.048 [2024-07-24 23:59:46.748406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.048 [2024-07-24 23:59:46.748526] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:51.048 [2024-07-24 23:59:46.748579] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:51.048 [2024-07-24 23:59:46.748762] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:51.048 [2024-07-24 23:59:46.748781] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.048 [2024-07-24 23:59:46.748856] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, 
state configuring 00:15:51.048 [2024-07-24 23:59:46.748937] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.048 [2024-07-24 23:59:46.749043] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:15:51.048 [2024-07-24 23:59:46.749059] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:51.048 [2024-07-24 23:59:46.749159] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:15:51.048 [2024-07-24 23:59:46.749554] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:15:51.048 [2024-07-24 23:59:46.749574] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:15:51.048 [2024-07-24 23:59:46.749784] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.048 pt1 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.048 23:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.308 23:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.308 "name": "raid_bdev1", 00:15:51.308 "uuid": "1f033ffd-1133-4de0-aeab-89e4857f8414", 00:15:51.308 "strip_size_kb": 0, 00:15:51.308 "state": "online", 00:15:51.308 "raid_level": "raid1", 00:15:51.308 "superblock": true, 00:15:51.308 "num_base_bdevs": 2, 00:15:51.308 "num_base_bdevs_discovered": 1, 00:15:51.308 "num_base_bdevs_operational": 1, 00:15:51.308 "base_bdevs_list": [ 00:15:51.308 { 00:15:51.308 "name": null, 00:15:51.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.308 "is_configured": false, 00:15:51.308 "data_offset": 2048, 00:15:51.308 "data_size": 63488 00:15:51.308 }, 00:15:51.308 { 00:15:51.308 "name": "pt2", 00:15:51.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.308 "is_configured": true, 00:15:51.308 "data_offset": 2048, 00:15:51.308 "data_size": 63488 00:15:51.308 } 00:15:51.308 ] 00:15:51.308 }' 00:15:51.308 23:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.308 23:59:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.568 23:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:51.568 23:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:51.827 23:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:15:51.827 23:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:51.827 23:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:15:52.087 [2024-07-24 23:59:47.890271] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 1f033ffd-1133-4de0-aeab-89e4857f8414 '!=' 1f033ffd-1133-4de0-aeab-89e4857f8414 ']' 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 79220 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79220 ']' 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79220 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79220 00:15:52.087 killing process with pid 79220 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79220' 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 79220 00:15:52.087 [2024-07-24 23:59:47.942516] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.087 [2024-07-24 23:59:47.942608] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.087 23:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79220 00:15:52.087 [2024-07-24 23:59:47.942661] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.087 [2024-07-24 23:59:47.942677] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:15:52.347 [2024-07-24 23:59:48.089418] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.287 ************************************ 00:15:53.287 END TEST raid_superblock_test 00:15:53.287 ************************************ 00:15:53.287 23:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:15:53.287 00:15:53.287 real 0m13.912s 00:15:53.287 user 0m23.811s 00:15:53.287 sys 0m2.186s 00:15:53.287 23:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.287 23:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.547 23:59:49 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test 
raid_io_error_test raid1 2 read 00:15:53.547 23:59:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:53.547 23:59:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.547 23:59:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.547 ************************************ 00:15:53.547 START TEST raid_read_error_test 00:15:53.547 ************************************ 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.NmMb7zweyv 00:15:53.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
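For readers skimming the trace: the long option string in the bdev_raid.sh@823 line above is the I/O generator for this error test. Pulled out of the log with only line wrapping added (the temp log file name and pid vary per run; the flag glosses in the comment are the editor's reading, inferred from the flag names and the perform_tests RPC issued later in this log, not stated by the log itself):

  # 60 s of mixed random I/O against raid_bdev1: 50/50 read/write mix, 128k I/O size,
  # queue depth 1, started in wait-for-RPC mode (-z) with bdev_raid debug logging (-L)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid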
00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=79707 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 79707 /var/tmp/spdk-raid.sock 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 79707 ']' 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.547 23:59:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.547 [2024-07-24 23:59:49.262314] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:15:53.547 [2024-07-24 23:59:49.262499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79707 ] 00:15:53.807 [2024-07-24 23:59:49.433992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.807 [2024-07-24 23:59:49.612959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.065 [2024-07-24 23:59:49.772940] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.632 23:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.632 23:59:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:54.632 23:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:54.632 23:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:54.632 BaseBdev1_malloc 00:15:54.632 23:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:54.891 true 00:15:54.891 23:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:55.150 [2024-07-24 23:59:50.905152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:55.150 [2024-07-24 23:59:50.905260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.150 [2024-07-24 23:59:50.905293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:15:55.150 [2024-07-24 23:59:50.905309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.150 [2024-07-24 23:59:50.907995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.150 
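Each RAID member in this test is a three-layer stack (malloc, then error-injection, then passthru), which is what lets the test flip EE_BaseBdev1_malloc into read-failure mode further down via bdev_error_inject_error. A condensed sketch of the stacking for the first member, using only RPCs and names that appear verbatim in this trace (the EE_ prefix on the error bdev's name is inferred from the passthru create call that consumes it; the 32 MiB size matches the 65536 x 512-byte blocks reported for the passthru bdevs above):

  # backing store: 32 MiB malloc bdev with 512-byte blocks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  # error-injection wrapper, exposed as EE_BaseBdev1_malloc
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
  # passthru on top, named BaseBdev1, which the raid1 volume is then built from
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1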
[2024-07-24 23:59:50.908043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.150 BaseBdev1 00:15:55.150 23:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:55.150 23:59:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:55.408 BaseBdev2_malloc 00:15:55.408 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:55.667 true 00:15:55.667 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:55.926 [2024-07-24 23:59:51.641531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:55.926 [2024-07-24 23:59:51.641647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.926 [2024-07-24 23:59:51.641678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:15:55.926 [2024-07-24 23:59:51.641696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.926 [2024-07-24 23:59:51.644439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.926 [2024-07-24 23:59:51.644503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:55.926 BaseBdev2 00:15:55.926 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:56.185 [2024-07-24 23:59:51.857603] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.185 [2024-07-24 23:59:51.860219] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.185 [2024-07-24 23:59:51.860508] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008480 00:15:56.185 [2024-07-24 23:59:51.860533] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:56.185 [2024-07-24 23:59:51.860659] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:15:56.185 [2024-07-24 23:59:51.861165] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008480 00:15:56.185 [2024-07-24 23:59:51.861185] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008480 00:15:56.185 [2024-07-24 23:59:51.861471] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.185 23:59:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.444 23:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.444 "name": "raid_bdev1", 00:15:56.444 "uuid": "1104e10a-08e9-405c-bebe-a8fcc000fb63", 00:15:56.444 "strip_size_kb": 0, 00:15:56.444 "state": "online", 00:15:56.444 "raid_level": "raid1", 00:15:56.444 "superblock": true, 00:15:56.444 "num_base_bdevs": 2, 00:15:56.444 "num_base_bdevs_discovered": 2, 00:15:56.444 "num_base_bdevs_operational": 2, 00:15:56.444 "base_bdevs_list": [ 00:15:56.444 { 00:15:56.444 "name": "BaseBdev1", 00:15:56.444 "uuid": "152d303c-3b27-53d3-99df-f1aacc18fd2b", 00:15:56.444 "is_configured": true, 00:15:56.444 "data_offset": 2048, 00:15:56.444 "data_size": 63488 00:15:56.444 }, 00:15:56.444 { 00:15:56.444 "name": "BaseBdev2", 00:15:56.444 "uuid": "af95bc16-a725-58b9-9327-b1f3ed1dff61", 00:15:56.444 "is_configured": true, 00:15:56.444 "data_offset": 2048, 00:15:56.444 "data_size": 63488 00:15:56.444 } 00:15:56.444 ] 00:15:56.444 }' 00:15:56.444 23:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.444 23:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.702 23:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:56.702 23:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:56.969 [2024-07-24 23:59:52.587111] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.920 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.178 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.178 "name": "raid_bdev1", 00:15:58.178 "uuid": "1104e10a-08e9-405c-bebe-a8fcc000fb63", 00:15:58.178 "strip_size_kb": 0, 00:15:58.178 "state": "online", 00:15:58.178 "raid_level": "raid1", 00:15:58.178 "superblock": true, 00:15:58.178 "num_base_bdevs": 2, 00:15:58.178 "num_base_bdevs_discovered": 2, 00:15:58.178 "num_base_bdevs_operational": 2, 00:15:58.178 "base_bdevs_list": [ 00:15:58.178 { 00:15:58.178 "name": "BaseBdev1", 00:15:58.178 "uuid": "152d303c-3b27-53d3-99df-f1aacc18fd2b", 00:15:58.178 "is_configured": true, 00:15:58.178 "data_offset": 2048, 00:15:58.178 "data_size": 63488 00:15:58.178 }, 00:15:58.178 { 00:15:58.178 "name": "BaseBdev2", 00:15:58.178 "uuid": "af95bc16-a725-58b9-9327-b1f3ed1dff61", 00:15:58.178 "is_configured": true, 00:15:58.178 "data_offset": 2048, 00:15:58.178 "data_size": 63488 00:15:58.178 } 00:15:58.178 ] 00:15:58.178 }' 00:15:58.178 23:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.178 23:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.744 23:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:58.744 [2024-07-24 23:59:54.583111] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.744 [2024-07-24 23:59:54.583481] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.744 [2024-07-24 23:59:54.586357] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.744 [2024-07-24 23:59:54.586417] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.744 [2024-07-24 23:59:54.586541] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.744 [2024-07-24 23:59:54.586561] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state offline 00:15:58.744 0 00:15:58.744 23:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 79707 00:15:58.744 23:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 79707 ']' 00:15:58.744 23:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 79707 00:15:58.744 23:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:15:58.744 23:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:59.003 
23:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79707 00:15:59.003 killing process with pid 79707 00:15:59.003 23:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:59.003 23:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:59.003 23:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79707' 00:15:59.003 23:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 79707 00:15:59.003 [2024-07-24 23:59:54.634541] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:59.003 23:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 79707 00:15:59.003 [2024-07-24 23:59:54.726381] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:59.940 23:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:59.940 23:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.NmMb7zweyv 00:15:59.940 23:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:00.200 23:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:16:00.200 23:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:16:00.200 23:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:00.200 23:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:00.200 23:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:00.200 00:16:00.200 real 0m6.627s 00:16:00.200 user 0m9.693s 00:16:00.200 sys 0m0.804s 00:16:00.200 ************************************ 00:16:00.200 END TEST raid_read_error_test 00:16:00.200 ************************************ 00:16:00.200 23:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:00.200 23:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.200 23:59:55 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:16:00.200 23:59:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:00.200 23:59:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:00.200 23:59:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:00.200 ************************************ 00:16:00.200 START TEST raid_write_error_test 00:16:00.200 ************************************ 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 
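[Annotation, not part of the captured log] The read-error test that just finished passes on a single criterion: the bdevperf run logged its per-bdev statistics to /raidtest/tmp.NmMb7zweyv (the grep just above), and the script pulls the raid_bdev1 row, drops the per-job lines, and takes the sixth column, which it stores as fail_per_s. Because raid1 is a redundant level, an injected read error on one mirror leg must be satisfied from the other leg, so the check demands exactly 0.00. Roughly, the check amounts to:

    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s = 0.00 ]]   # redundant level: no read failure may reach the client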
00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.FVbPcJbUWB 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=79878 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 79878 /var/tmp/spdk-raid.sock 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 79878 ']' 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:00.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:00.200 23:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.200 [2024-07-24 23:59:55.944578] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
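[Annotation, not part of the captured log] raid_write_error_test, starting here, uses the same bdevperf setup as the read test but injects a write failure instead, and the expected outcome differs: per the trace further down, a failed write causes the raid1 module to fail and remove the offending base bdev ("Failing base bdev in slot 0"), after which the array is expected to stay online in degraded form with only one base bdev discovered. The decisive step, with names as they appear in this log:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
    # raid_bdev1 remains online, but BaseBdev1 is dropped from base_bdevs_list;
    # bdev_raid_get_bdevs then reports num_base_bdevs_discovered: 1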
00:16:00.200 [2024-07-24 23:59:55.944755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79878 ] 00:16:00.459 [2024-07-24 23:59:56.117713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.459 [2024-07-24 23:59:56.293752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.718 [2024-07-24 23:59:56.460411] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.286 23:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.286 23:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:01.286 23:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:01.286 23:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:01.545 BaseBdev1_malloc 00:16:01.545 23:59:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:01.545 true 00:16:01.804 23:59:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:01.804 [2024-07-24 23:59:57.616278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:01.804 [2024-07-24 23:59:57.616378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.804 [2024-07-24 23:59:57.616413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:16:01.804 [2024-07-24 23:59:57.616430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.804 [2024-07-24 23:59:57.619151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.804 [2024-07-24 23:59:57.619435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:01.804 BaseBdev1 00:16:01.804 23:59:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:01.804 23:59:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:02.063 BaseBdev2_malloc 00:16:02.063 23:59:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:02.322 true 00:16:02.322 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:02.581 [2024-07-24 23:59:58.337419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:02.581 [2024-07-24 23:59:58.337759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.581 [2024-07-24 23:59:58.337850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:16:02.581 [2024-07-24 
23:59:58.338109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.581 [2024-07-24 23:59:58.340590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.581 [2024-07-24 23:59:58.340799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:02.581 BaseBdev2 00:16:02.581 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:02.840 [2024-07-24 23:59:58.541659] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.840 [2024-07-24 23:59:58.544024] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.840 [2024-07-24 23:59:58.544442] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008480 00:16:02.840 [2024-07-24 23:59:58.544473] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:02.840 [2024-07-24 23:59:58.544621] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:16:02.840 [2024-07-24 23:59:58.545047] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008480 00:16:02.840 [2024-07-24 23:59:58.545064] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008480 00:16:02.840 [2024-07-24 23:59:58.545286] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.840 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.100 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.100 "name": "raid_bdev1", 00:16:03.100 "uuid": "93459a6b-dd13-4ab4-bfba-2bcba538e1b4", 00:16:03.100 "strip_size_kb": 0, 00:16:03.100 "state": "online", 00:16:03.100 "raid_level": "raid1", 00:16:03.100 "superblock": true, 00:16:03.100 "num_base_bdevs": 2, 00:16:03.100 "num_base_bdevs_discovered": 2, 00:16:03.100 "num_base_bdevs_operational": 2, 00:16:03.100 "base_bdevs_list": [ 00:16:03.100 { 00:16:03.100 "name": 
"BaseBdev1", 00:16:03.100 "uuid": "c3c38fc1-9704-54e3-b03a-62519a046fbf", 00:16:03.100 "is_configured": true, 00:16:03.100 "data_offset": 2048, 00:16:03.100 "data_size": 63488 00:16:03.100 }, 00:16:03.100 { 00:16:03.100 "name": "BaseBdev2", 00:16:03.100 "uuid": "a38b2360-94b8-5d43-a947-a57752920567", 00:16:03.100 "is_configured": true, 00:16:03.100 "data_offset": 2048, 00:16:03.100 "data_size": 63488 00:16:03.100 } 00:16:03.100 ] 00:16:03.100 }' 00:16:03.100 23:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.100 23:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.359 23:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:03.359 23:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:03.359 [2024-07-24 23:59:59.219030] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:16:04.299 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:04.558 [2024-07-25 00:00:00.344715] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:04.558 [2024-07-25 00:00:00.344832] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.558 [2024-07-25 00:00:00.345078] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005a00 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=1 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.558 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.816 
00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.816 "name": "raid_bdev1", 00:16:04.816 "uuid": "93459a6b-dd13-4ab4-bfba-2bcba538e1b4", 00:16:04.816 "strip_size_kb": 0, 00:16:04.816 "state": "online", 00:16:04.816 "raid_level": "raid1", 00:16:04.816 "superblock": true, 00:16:04.816 "num_base_bdevs": 2, 00:16:04.816 "num_base_bdevs_discovered": 1, 00:16:04.816 "num_base_bdevs_operational": 1, 00:16:04.816 "base_bdevs_list": [ 00:16:04.816 { 00:16:04.816 "name": null, 00:16:04.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.816 "is_configured": false, 00:16:04.816 "data_offset": 2048, 00:16:04.816 "data_size": 63488 00:16:04.816 }, 00:16:04.816 { 00:16:04.816 "name": "BaseBdev2", 00:16:04.816 "uuid": "a38b2360-94b8-5d43-a947-a57752920567", 00:16:04.816 "is_configured": true, 00:16:04.816 "data_offset": 2048, 00:16:04.816 "data_size": 63488 00:16:04.816 } 00:16:04.816 ] 00:16:04.816 }' 00:16:04.816 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.816 00:00:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.384 00:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:05.384 [2024-07-25 00:00:01.193326] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.384 [2024-07-25 00:00:01.193368] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.384 [2024-07-25 00:00:01.196245] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.384 [2024-07-25 00:00:01.196293] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.384 [2024-07-25 00:00:01.196355] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.384 [2024-07-25 00:00:01.196369] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state offline 00:16:05.384 0 00:16:05.384 00:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 79878 00:16:05.384 00:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 79878 ']' 00:16:05.384 00:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 79878 00:16:05.384 00:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:05.384 00:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.384 00:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79878 00:16:05.384 00:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:05.384 00:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:05.384 00:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79878' 00:16:05.384 killing process with pid 79878 00:16:05.384 00:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 79878 00:16:05.384 [2024-07-25 00:00:01.252408] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.384 00:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 79878 00:16:05.640 [2024-07-25 00:00:01.352882] 
bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.015 00:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.FVbPcJbUWB 00:16:07.015 00:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:07.015 00:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:07.015 00:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:16:07.015 00:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:16:07.015 00:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:07.015 00:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:07.015 00:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:07.015 ************************************ 00:16:07.015 END TEST raid_write_error_test 00:16:07.015 ************************************ 00:16:07.015 00:16:07.015 real 0m6.594s 00:16:07.015 user 0m9.534s 00:16:07.015 sys 0m0.869s 00:16:07.015 00:00:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.015 00:00:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.015 00:00:02 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:16:07.015 00:00:02 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:16:07.015 00:00:02 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:07.015 00:00:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:07.015 00:00:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:07.015 00:00:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.015 ************************************ 00:16:07.015 START TEST raid_state_function_test 00:16:07.015 ************************************ 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:07.015 00:00:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=80052 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 80052' 00:16:07.015 Process raid pid: 80052 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 80052 /var/tmp/spdk-raid.sock 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80052 ']' 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.015 00:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.015 [2024-07-25 00:00:02.583437] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
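[Annotation, not part of the captured log] raid_state_function_test exercises the raid bdev state machine rather than the I/O path: it calls bdev_raid_create for a 3-disk raid0 whose base bdevs do not all exist yet, verifying that Existed_Raid sits in the "configuring" state until every base bdev has been created and claimed, at which point it transitions to "online". In outline, following the RPCs in the trace below:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # no base bdev exists yet: Existed_Raid is created with state "configuring"
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    # recreate the raid; num_base_bdevs_discovered rises to 1, state still "configuring";
    # after BaseBdev2 and BaseBdev3 are added the same way, the state becomes "online"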
00:16:07.015 [2024-07-25 00:00:02.583602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.015 [2024-07-25 00:00:02.744362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.273 [2024-07-25 00:00:02.923745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.273 [2024-07-25 00:00:03.083911] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.840 00:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.840 00:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:07.840 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:08.113 [2024-07-25 00:00:03.772059] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.113 [2024-07-25 00:00:03.772118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.113 [2024-07-25 00:00:03.772134] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.113 [2024-07-25 00:00:03.772170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.113 [2024-07-25 00:00:03.772196] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:08.113 [2024-07-25 00:00:03.772209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.113 00:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.388 00:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.388 "name": "Existed_Raid", 00:16:08.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.388 
"strip_size_kb": 64, 00:16:08.388 "state": "configuring", 00:16:08.388 "raid_level": "raid0", 00:16:08.388 "superblock": false, 00:16:08.388 "num_base_bdevs": 3, 00:16:08.388 "num_base_bdevs_discovered": 0, 00:16:08.388 "num_base_bdevs_operational": 3, 00:16:08.388 "base_bdevs_list": [ 00:16:08.388 { 00:16:08.388 "name": "BaseBdev1", 00:16:08.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.388 "is_configured": false, 00:16:08.388 "data_offset": 0, 00:16:08.388 "data_size": 0 00:16:08.388 }, 00:16:08.388 { 00:16:08.388 "name": "BaseBdev2", 00:16:08.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.388 "is_configured": false, 00:16:08.388 "data_offset": 0, 00:16:08.388 "data_size": 0 00:16:08.388 }, 00:16:08.388 { 00:16:08.388 "name": "BaseBdev3", 00:16:08.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.388 "is_configured": false, 00:16:08.388 "data_offset": 0, 00:16:08.388 "data_size": 0 00:16:08.388 } 00:16:08.388 ] 00:16:08.388 }' 00:16:08.388 00:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.388 00:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.647 00:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:08.906 [2024-07-25 00:00:04.624127] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:08.906 [2024-07-25 00:00:04.624195] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:16:08.906 00:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:09.165 [2024-07-25 00:00:04.828190] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.165 [2024-07-25 00:00:04.828279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.165 [2024-07-25 00:00:04.828302] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.165 [2024-07-25 00:00:04.828320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.165 [2024-07-25 00:00:04.828330] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:09.165 [2024-07-25 00:00:04.828342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.165 00:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.423 [2024-07-25 00:00:05.114855] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.423 BaseBdev1 00:16:09.423 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:09.423 00:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:09.423 00:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:09.423 00:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:09.423 00:00:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:09.423 00:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:09.423 00:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.681 [ 00:16:09.681 { 00:16:09.681 "name": "BaseBdev1", 00:16:09.681 "aliases": [ 00:16:09.681 "b736af54-bd5e-41ff-8342-7c9273e8c6a0" 00:16:09.681 ], 00:16:09.681 "product_name": "Malloc disk", 00:16:09.681 "block_size": 512, 00:16:09.681 "num_blocks": 65536, 00:16:09.681 "uuid": "b736af54-bd5e-41ff-8342-7c9273e8c6a0", 00:16:09.681 "assigned_rate_limits": { 00:16:09.681 "rw_ios_per_sec": 0, 00:16:09.681 "rw_mbytes_per_sec": 0, 00:16:09.681 "r_mbytes_per_sec": 0, 00:16:09.681 "w_mbytes_per_sec": 0 00:16:09.681 }, 00:16:09.681 "claimed": true, 00:16:09.681 "claim_type": "exclusive_write", 00:16:09.681 "zoned": false, 00:16:09.681 "supported_io_types": { 00:16:09.681 "read": true, 00:16:09.681 "write": true, 00:16:09.681 "unmap": true, 00:16:09.681 "flush": true, 00:16:09.681 "reset": true, 00:16:09.681 "nvme_admin": false, 00:16:09.681 "nvme_io": false, 00:16:09.681 "nvme_io_md": false, 00:16:09.681 "write_zeroes": true, 00:16:09.681 "zcopy": true, 00:16:09.681 "get_zone_info": false, 00:16:09.681 "zone_management": false, 00:16:09.681 "zone_append": false, 00:16:09.681 "compare": false, 00:16:09.681 "compare_and_write": false, 00:16:09.681 "abort": true, 00:16:09.681 "seek_hole": false, 00:16:09.681 "seek_data": false, 00:16:09.681 "copy": true, 00:16:09.681 "nvme_iov_md": false 00:16:09.681 }, 00:16:09.681 "memory_domains": [ 00:16:09.681 { 00:16:09.681 "dma_device_id": "system", 00:16:09.681 "dma_device_type": 1 00:16:09.681 }, 00:16:09.681 { 00:16:09.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.681 "dma_device_type": 2 00:16:09.681 } 00:16:09.681 ], 00:16:09.681 "driver_specific": {} 00:16:09.681 } 00:16:09.681 ] 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.681 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.943 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.943 "name": "Existed_Raid", 00:16:09.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.943 "strip_size_kb": 64, 00:16:09.943 "state": "configuring", 00:16:09.943 "raid_level": "raid0", 00:16:09.943 "superblock": false, 00:16:09.943 "num_base_bdevs": 3, 00:16:09.943 "num_base_bdevs_discovered": 1, 00:16:09.943 "num_base_bdevs_operational": 3, 00:16:09.943 "base_bdevs_list": [ 00:16:09.943 { 00:16:09.943 "name": "BaseBdev1", 00:16:09.943 "uuid": "b736af54-bd5e-41ff-8342-7c9273e8c6a0", 00:16:09.943 "is_configured": true, 00:16:09.943 "data_offset": 0, 00:16:09.943 "data_size": 65536 00:16:09.943 }, 00:16:09.943 { 00:16:09.943 "name": "BaseBdev2", 00:16:09.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.943 "is_configured": false, 00:16:09.943 "data_offset": 0, 00:16:09.943 "data_size": 0 00:16:09.943 }, 00:16:09.943 { 00:16:09.943 "name": "BaseBdev3", 00:16:09.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.943 "is_configured": false, 00:16:09.943 "data_offset": 0, 00:16:09.943 "data_size": 0 00:16:09.943 } 00:16:09.943 ] 00:16:09.943 }' 00:16:09.943 00:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.943 00:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.510 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:10.510 [2024-07-25 00:00:06.315383] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.510 [2024-07-25 00:00:06.315461] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:16:10.510 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:10.769 [2024-07-25 00:00:06.587505] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.769 [2024-07-25 00:00:06.589721] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.769 [2024-07-25 00:00:06.589810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.769 [2024-07-25 00:00:06.589838] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:10.769 [2024-07-25 00:00:06.589853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=configuring 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.769 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.028 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.028 "name": "Existed_Raid", 00:16:11.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.028 "strip_size_kb": 64, 00:16:11.028 "state": "configuring", 00:16:11.028 "raid_level": "raid0", 00:16:11.028 "superblock": false, 00:16:11.028 "num_base_bdevs": 3, 00:16:11.028 "num_base_bdevs_discovered": 1, 00:16:11.028 "num_base_bdevs_operational": 3, 00:16:11.028 "base_bdevs_list": [ 00:16:11.028 { 00:16:11.028 "name": "BaseBdev1", 00:16:11.028 "uuid": "b736af54-bd5e-41ff-8342-7c9273e8c6a0", 00:16:11.028 "is_configured": true, 00:16:11.028 "data_offset": 0, 00:16:11.028 "data_size": 65536 00:16:11.028 }, 00:16:11.028 { 00:16:11.028 "name": "BaseBdev2", 00:16:11.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.028 "is_configured": false, 00:16:11.028 "data_offset": 0, 00:16:11.028 "data_size": 0 00:16:11.028 }, 00:16:11.028 { 00:16:11.028 "name": "BaseBdev3", 00:16:11.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.028 "is_configured": false, 00:16:11.028 "data_offset": 0, 00:16:11.028 "data_size": 0 00:16:11.028 } 00:16:11.028 ] 00:16:11.028 }' 00:16:11.028 00:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.028 00:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.294 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:11.553 [2024-07-25 00:00:07.362433] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.553 BaseBdev2 00:16:11.553 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:11.553 00:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:11.553 00:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:11.553 00:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:11.553 00:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:11.553 00:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:11.553 
00:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:11.813 00:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:12.072 [
00:16:12.072 {
00:16:12.072 "name": "BaseBdev2",
00:16:12.072 "aliases": [
00:16:12.072 "b9188c84-04da-44d6-b450-7326f2060778"
00:16:12.072 ],
00:16:12.072 "product_name": "Malloc disk",
00:16:12.072 "block_size": 512,
00:16:12.072 "num_blocks": 65536,
00:16:12.072 "uuid": "b9188c84-04da-44d6-b450-7326f2060778",
00:16:12.072 "assigned_rate_limits": {
00:16:12.072 "rw_ios_per_sec": 0,
00:16:12.072 "rw_mbytes_per_sec": 0,
00:16:12.072 "r_mbytes_per_sec": 0,
00:16:12.072 "w_mbytes_per_sec": 0
00:16:12.072 },
00:16:12.072 "claimed": true,
00:16:12.072 "claim_type": "exclusive_write",
00:16:12.072 "zoned": false,
00:16:12.072 "supported_io_types": {
00:16:12.072 "read": true,
00:16:12.072 "write": true,
00:16:12.072 "unmap": true,
00:16:12.072 "flush": true,
00:16:12.072 "reset": true,
00:16:12.072 "nvme_admin": false,
00:16:12.072 "nvme_io": false,
00:16:12.072 "nvme_io_md": false,
00:16:12.072 "write_zeroes": true,
00:16:12.072 "zcopy": true,
00:16:12.072 "get_zone_info": false,
00:16:12.072 "zone_management": false,
00:16:12.072 "zone_append": false,
00:16:12.072 "compare": false,
00:16:12.072 "compare_and_write": false,
00:16:12.072 "abort": true,
00:16:12.072 "seek_hole": false,
00:16:12.072 "seek_data": false,
00:16:12.072 "copy": true,
00:16:12.072 "nvme_iov_md": false
00:16:12.072 },
00:16:12.072 "memory_domains": [
00:16:12.072 {
00:16:12.072 "dma_device_id": "system",
00:16:12.072 "dma_device_type": 1
00:16:12.072 },
00:16:12.072 {
00:16:12.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:12.072 "dma_device_type": 2
00:16:12.072 }
00:16:12.072 ],
00:16:12.072 "driver_specific": {}
00:16:12.072 }
00:16:12.072 ]
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:12.072 00:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:12.331 00:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:12.331 "name": "Existed_Raid",
00:16:12.331 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:12.331 "strip_size_kb": 64,
00:16:12.331 "state": "configuring",
00:16:12.331 "raid_level": "raid0",
00:16:12.331 "superblock": false,
00:16:12.331 "num_base_bdevs": 3,
00:16:12.331 "num_base_bdevs_discovered": 2,
00:16:12.331 "num_base_bdevs_operational": 3,
00:16:12.331 "base_bdevs_list": [
00:16:12.331 {
00:16:12.331 "name": "BaseBdev1",
00:16:12.331 "uuid": "b736af54-bd5e-41ff-8342-7c9273e8c6a0",
00:16:12.331 "is_configured": true,
00:16:12.331 "data_offset": 0,
00:16:12.331 "data_size": 65536
00:16:12.331 },
00:16:12.331 {
00:16:12.331 "name": "BaseBdev2",
00:16:12.331 "uuid": "b9188c84-04da-44d6-b450-7326f2060778",
00:16:12.331 "is_configured": true,
00:16:12.331 "data_offset": 0,
00:16:12.331 "data_size": 65536
00:16:12.331 },
00:16:12.331 {
00:16:12.331 "name": "BaseBdev3",
00:16:12.331 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:12.331 "is_configured": false,
00:16:12.331 "data_offset": 0,
00:16:12.331 "data_size": 0
00:16:12.331 }
00:16:12.331 ]
00:16:12.331 }'
00:16:12.331 00:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:12.331 00:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:12.594 00:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:12.853 [2024-07-25 00:00:08.642667] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:12.853 [2024-07-25 00:00:08.642738] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280
00:16:12.853 [2024-07-25 00:00:08.642755] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:16:12.853 [2024-07-25 00:00:08.642944] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930
00:16:12.853 [2024-07-25 00:00:08.643468] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280
00:16:12.853 [2024-07-25 00:00:08.643496] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280
00:16:12.853 [2024-07-25 00:00:08.643877] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:12.853 BaseBdev3
00:16:12.853 00:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:16:12.853 00:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:16:12.853 00:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:12.853 00:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:12.853 00:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:12.853 00:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:12.853 00:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:13.111 00:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:13.369 [
00:16:13.370 {
00:16:13.370 "name": "BaseBdev3",
00:16:13.370 "aliases": [
00:16:13.370 "e7c041a3-e1de-4780-9f47-35eb825ec851"
00:16:13.370 ],
00:16:13.370 "product_name": "Malloc disk",
00:16:13.370 "block_size": 512,
00:16:13.370 "num_blocks": 65536,
00:16:13.370 "uuid": "e7c041a3-e1de-4780-9f47-35eb825ec851",
00:16:13.370 "assigned_rate_limits": {
00:16:13.370 "rw_ios_per_sec": 0,
00:16:13.370 "rw_mbytes_per_sec": 0,
00:16:13.370 "r_mbytes_per_sec": 0,
00:16:13.370 "w_mbytes_per_sec": 0
00:16:13.370 },
00:16:13.370 "claimed": true,
00:16:13.370 "claim_type": "exclusive_write",
00:16:13.370 "zoned": false,
00:16:13.370 "supported_io_types": {
00:16:13.370 "read": true,
00:16:13.370 "write": true,
00:16:13.370 "unmap": true,
00:16:13.370 "flush": true,
00:16:13.370 "reset": true,
00:16:13.370 "nvme_admin": false,
00:16:13.370 "nvme_io": false,
00:16:13.370 "nvme_io_md": false,
00:16:13.370 "write_zeroes": true,
00:16:13.370 "zcopy": true,
00:16:13.370 "get_zone_info": false,
00:16:13.370 "zone_management": false,
00:16:13.370 "zone_append": false,
00:16:13.370 "compare": false,
00:16:13.370 "compare_and_write": false,
00:16:13.370 "abort": true,
00:16:13.370 "seek_hole": false,
00:16:13.370 "seek_data": false,
00:16:13.370 "copy": true,
00:16:13.370 "nvme_iov_md": false
00:16:13.370 },
00:16:13.370 "memory_domains": [
00:16:13.370 {
00:16:13.370 "dma_device_id": "system",
00:16:13.370 "dma_device_type": 1
00:16:13.370 },
00:16:13.370 {
00:16:13.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:13.370 "dma_device_type": 2
00:16:13.370 }
00:16:13.370 ],
00:16:13.370 "driver_specific": {}
00:16:13.370 }
00:16:13.370 ]
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:13.370 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:13.628 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:13.628 "name": "Existed_Raid",
00:16:13.628 "uuid": "c97a3786-8519-4b8d-a26a-61d8825d0e8c",
00:16:13.628 "strip_size_kb": 64,
00:16:13.628 "state": "online",
00:16:13.628 "raid_level": "raid0",
00:16:13.628 "superblock": false,
00:16:13.628 "num_base_bdevs": 3,
00:16:13.628 "num_base_bdevs_discovered": 3,
00:16:13.628 "num_base_bdevs_operational": 3,
00:16:13.628 "base_bdevs_list": [
00:16:13.628 {
00:16:13.628 "name": "BaseBdev1",
00:16:13.628 "uuid": "b736af54-bd5e-41ff-8342-7c9273e8c6a0",
00:16:13.628 "is_configured": true,
00:16:13.628 "data_offset": 0,
00:16:13.628 "data_size": 65536
00:16:13.628 },
00:16:13.628 {
00:16:13.628 "name": "BaseBdev2",
00:16:13.628 "uuid": "b9188c84-04da-44d6-b450-7326f2060778",
00:16:13.628 "is_configured": true,
00:16:13.628 "data_offset": 0,
00:16:13.628 "data_size": 65536
00:16:13.628 },
00:16:13.628 {
00:16:13.628 "name": "BaseBdev3",
00:16:13.628 "uuid": "e7c041a3-e1de-4780-9f47-35eb825ec851",
00:16:13.628 "is_configured": true,
00:16:13.628 "data_offset": 0,
00:16:13.628 "data_size": 65536
00:16:13.628 }
00:16:13.628 ]
00:16:13.628 }'
00:16:13.628 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:13.628 00:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.887 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:16:13.887 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:16:13.887 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:16:13.887 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:16:13.887 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:16:13.887 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name
00:16:13.887 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:16:13.887 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:16:14.146 [2024-07-25 00:00:09.963511] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:14.146 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:16:14.146 "name": "Existed_Raid",
00:16:14.146 "aliases": [
00:16:14.146 "c97a3786-8519-4b8d-a26a-61d8825d0e8c"
00:16:14.146 ],
00:16:14.146 "product_name": "Raid Volume",
00:16:14.146 "block_size": 512,
00:16:14.146 "num_blocks": 196608,
00:16:14.146 "uuid": "c97a3786-8519-4b8d-a26a-61d8825d0e8c",
00:16:14.146 "assigned_rate_limits": {
00:16:14.146 "rw_ios_per_sec": 0,
00:16:14.146 "rw_mbytes_per_sec": 0,
00:16:14.146 "r_mbytes_per_sec": 0,
00:16:14.146 "w_mbytes_per_sec": 0
00:16:14.146 },
00:16:14.146 "claimed": false,
00:16:14.146 "zoned": false,
00:16:14.146 "supported_io_types": {
00:16:14.146 "read": true,
00:16:14.146 "write": true,
00:16:14.146 "unmap": true,
00:16:14.146 "flush": true,
00:16:14.146 "reset": true,
00:16:14.146 "nvme_admin": false, 00:16:14.146 "nvme_io": false, 00:16:14.146 "nvme_io_md": false, 00:16:14.146 "write_zeroes": true, 00:16:14.146 "zcopy": false, 00:16:14.146 "get_zone_info": false, 00:16:14.146 "zone_management": false, 00:16:14.146 "zone_append": false, 00:16:14.146 "compare": false, 00:16:14.146 "compare_and_write": false, 00:16:14.146 "abort": false, 00:16:14.146 "seek_hole": false, 00:16:14.146 "seek_data": false, 00:16:14.146 "copy": false, 00:16:14.146 "nvme_iov_md": false 00:16:14.146 }, 00:16:14.146 "memory_domains": [ 00:16:14.146 { 00:16:14.146 "dma_device_id": "system", 00:16:14.146 "dma_device_type": 1 00:16:14.146 }, 00:16:14.146 { 00:16:14.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.146 "dma_device_type": 2 00:16:14.146 }, 00:16:14.146 { 00:16:14.146 "dma_device_id": "system", 00:16:14.146 "dma_device_type": 1 00:16:14.146 }, 00:16:14.146 { 00:16:14.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.146 "dma_device_type": 2 00:16:14.146 }, 00:16:14.146 { 00:16:14.146 "dma_device_id": "system", 00:16:14.146 "dma_device_type": 1 00:16:14.146 }, 00:16:14.146 { 00:16:14.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.146 "dma_device_type": 2 00:16:14.146 } 00:16:14.146 ], 00:16:14.146 "driver_specific": { 00:16:14.146 "raid": { 00:16:14.146 "uuid": "c97a3786-8519-4b8d-a26a-61d8825d0e8c", 00:16:14.146 "strip_size_kb": 64, 00:16:14.146 "state": "online", 00:16:14.146 "raid_level": "raid0", 00:16:14.146 "superblock": false, 00:16:14.146 "num_base_bdevs": 3, 00:16:14.146 "num_base_bdevs_discovered": 3, 00:16:14.146 "num_base_bdevs_operational": 3, 00:16:14.146 "base_bdevs_list": [ 00:16:14.146 { 00:16:14.146 "name": "BaseBdev1", 00:16:14.146 "uuid": "b736af54-bd5e-41ff-8342-7c9273e8c6a0", 00:16:14.146 "is_configured": true, 00:16:14.146 "data_offset": 0, 00:16:14.146 "data_size": 65536 00:16:14.146 }, 00:16:14.146 { 00:16:14.146 "name": "BaseBdev2", 00:16:14.146 "uuid": "b9188c84-04da-44d6-b450-7326f2060778", 00:16:14.146 "is_configured": true, 00:16:14.146 "data_offset": 0, 00:16:14.146 "data_size": 65536 00:16:14.146 }, 00:16:14.146 { 00:16:14.146 "name": "BaseBdev3", 00:16:14.146 "uuid": "e7c041a3-e1de-4780-9f47-35eb825ec851", 00:16:14.146 "is_configured": true, 00:16:14.146 "data_offset": 0, 00:16:14.146 "data_size": 65536 00:16:14.146 } 00:16:14.146 ] 00:16:14.146 } 00:16:14.146 } 00:16:14.146 }' 00:16:14.146 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.146 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:14.146 BaseBdev2 00:16:14.146 BaseBdev3' 00:16:14.146 00:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:14.146 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:14.146 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:14.406 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:14.406 "name": "BaseBdev1", 00:16:14.406 "aliases": [ 00:16:14.406 "b736af54-bd5e-41ff-8342-7c9273e8c6a0" 00:16:14.406 ], 00:16:14.406 "product_name": "Malloc disk", 00:16:14.406 "block_size": 512, 00:16:14.406 "num_blocks": 65536, 00:16:14.406 "uuid": "b736af54-bd5e-41ff-8342-7c9273e8c6a0", 00:16:14.406 
"assigned_rate_limits": { 00:16:14.406 "rw_ios_per_sec": 0, 00:16:14.406 "rw_mbytes_per_sec": 0, 00:16:14.406 "r_mbytes_per_sec": 0, 00:16:14.406 "w_mbytes_per_sec": 0 00:16:14.406 }, 00:16:14.406 "claimed": true, 00:16:14.406 "claim_type": "exclusive_write", 00:16:14.406 "zoned": false, 00:16:14.406 "supported_io_types": { 00:16:14.406 "read": true, 00:16:14.406 "write": true, 00:16:14.406 "unmap": true, 00:16:14.406 "flush": true, 00:16:14.406 "reset": true, 00:16:14.406 "nvme_admin": false, 00:16:14.406 "nvme_io": false, 00:16:14.406 "nvme_io_md": false, 00:16:14.406 "write_zeroes": true, 00:16:14.406 "zcopy": true, 00:16:14.406 "get_zone_info": false, 00:16:14.406 "zone_management": false, 00:16:14.406 "zone_append": false, 00:16:14.406 "compare": false, 00:16:14.406 "compare_and_write": false, 00:16:14.406 "abort": true, 00:16:14.406 "seek_hole": false, 00:16:14.406 "seek_data": false, 00:16:14.406 "copy": true, 00:16:14.406 "nvme_iov_md": false 00:16:14.406 }, 00:16:14.406 "memory_domains": [ 00:16:14.406 { 00:16:14.406 "dma_device_id": "system", 00:16:14.406 "dma_device_type": 1 00:16:14.406 }, 00:16:14.406 { 00:16:14.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.406 "dma_device_type": 2 00:16:14.406 } 00:16:14.406 ], 00:16:14.406 "driver_specific": {} 00:16:14.406 }' 00:16:14.406 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:14.406 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:14.665 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:14.924 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:14.924 "name": "BaseBdev2", 00:16:14.924 "aliases": [ 00:16:14.924 "b9188c84-04da-44d6-b450-7326f2060778" 00:16:14.924 ], 00:16:14.924 "product_name": "Malloc disk", 00:16:14.924 "block_size": 512, 00:16:14.924 "num_blocks": 65536, 00:16:14.925 "uuid": "b9188c84-04da-44d6-b450-7326f2060778", 00:16:14.925 "assigned_rate_limits": { 00:16:14.925 "rw_ios_per_sec": 0, 00:16:14.925 "rw_mbytes_per_sec": 0, 00:16:14.925 "r_mbytes_per_sec": 0, 00:16:14.925 "w_mbytes_per_sec": 0 00:16:14.925 }, 00:16:14.925 
"claimed": true, 00:16:14.925 "claim_type": "exclusive_write", 00:16:14.925 "zoned": false, 00:16:14.925 "supported_io_types": { 00:16:14.925 "read": true, 00:16:14.925 "write": true, 00:16:14.925 "unmap": true, 00:16:14.925 "flush": true, 00:16:14.925 "reset": true, 00:16:14.925 "nvme_admin": false, 00:16:14.925 "nvme_io": false, 00:16:14.925 "nvme_io_md": false, 00:16:14.925 "write_zeroes": true, 00:16:14.925 "zcopy": true, 00:16:14.925 "get_zone_info": false, 00:16:14.925 "zone_management": false, 00:16:14.925 "zone_append": false, 00:16:14.925 "compare": false, 00:16:14.925 "compare_and_write": false, 00:16:14.925 "abort": true, 00:16:14.925 "seek_hole": false, 00:16:14.925 "seek_data": false, 00:16:14.925 "copy": true, 00:16:14.925 "nvme_iov_md": false 00:16:14.925 }, 00:16:14.925 "memory_domains": [ 00:16:14.925 { 00:16:14.925 "dma_device_id": "system", 00:16:14.925 "dma_device_type": 1 00:16:14.925 }, 00:16:14.925 { 00:16:14.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.925 "dma_device_type": 2 00:16:14.925 } 00:16:14.925 ], 00:16:14.925 "driver_specific": {} 00:16:14.925 }' 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:14.925 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:15.184 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:15.184 "name": "BaseBdev3", 00:16:15.184 "aliases": [ 00:16:15.184 "e7c041a3-e1de-4780-9f47-35eb825ec851" 00:16:15.184 ], 00:16:15.184 "product_name": "Malloc disk", 00:16:15.184 "block_size": 512, 00:16:15.184 "num_blocks": 65536, 00:16:15.184 "uuid": "e7c041a3-e1de-4780-9f47-35eb825ec851", 00:16:15.184 "assigned_rate_limits": { 00:16:15.184 "rw_ios_per_sec": 0, 00:16:15.184 "rw_mbytes_per_sec": 0, 00:16:15.184 "r_mbytes_per_sec": 0, 00:16:15.184 "w_mbytes_per_sec": 0 00:16:15.184 }, 00:16:15.184 "claimed": true, 00:16:15.184 "claim_type": "exclusive_write", 00:16:15.184 "zoned": false, 00:16:15.184 "supported_io_types": { 00:16:15.184 "read": true, 00:16:15.184 "write": true, 00:16:15.184 
"unmap": true, 00:16:15.184 "flush": true, 00:16:15.184 "reset": true, 00:16:15.184 "nvme_admin": false, 00:16:15.184 "nvme_io": false, 00:16:15.184 "nvme_io_md": false, 00:16:15.184 "write_zeroes": true, 00:16:15.185 "zcopy": true, 00:16:15.185 "get_zone_info": false, 00:16:15.185 "zone_management": false, 00:16:15.185 "zone_append": false, 00:16:15.185 "compare": false, 00:16:15.185 "compare_and_write": false, 00:16:15.185 "abort": true, 00:16:15.185 "seek_hole": false, 00:16:15.185 "seek_data": false, 00:16:15.185 "copy": true, 00:16:15.185 "nvme_iov_md": false 00:16:15.185 }, 00:16:15.185 "memory_domains": [ 00:16:15.185 { 00:16:15.185 "dma_device_id": "system", 00:16:15.185 "dma_device_type": 1 00:16:15.185 }, 00:16:15.185 { 00:16:15.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.185 "dma_device_type": 2 00:16:15.185 } 00:16:15.185 ], 00:16:15.185 "driver_specific": {} 00:16:15.185 }' 00:16:15.185 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.185 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.185 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:15.185 00:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.185 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.185 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.185 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.185 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.185 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.185 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.185 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.185 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:15.185 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:15.444 [2024-07-25 00:00:11.295656] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:15.444 [2024-07-25 00:00:11.295699] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.444 [2024-07-25 00:00:11.295778] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=offline 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.706 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.965 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.965 "name": "Existed_Raid", 00:16:15.965 "uuid": "c97a3786-8519-4b8d-a26a-61d8825d0e8c", 00:16:15.965 "strip_size_kb": 64, 00:16:15.965 "state": "offline", 00:16:15.965 "raid_level": "raid0", 00:16:15.965 "superblock": false, 00:16:15.965 "num_base_bdevs": 3, 00:16:15.965 "num_base_bdevs_discovered": 2, 00:16:15.965 "num_base_bdevs_operational": 2, 00:16:15.965 "base_bdevs_list": [ 00:16:15.965 { 00:16:15.965 "name": null, 00:16:15.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.965 "is_configured": false, 00:16:15.965 "data_offset": 0, 00:16:15.965 "data_size": 65536 00:16:15.965 }, 00:16:15.965 { 00:16:15.965 "name": "BaseBdev2", 00:16:15.965 "uuid": "b9188c84-04da-44d6-b450-7326f2060778", 00:16:15.965 "is_configured": true, 00:16:15.965 "data_offset": 0, 00:16:15.965 "data_size": 65536 00:16:15.965 }, 00:16:15.965 { 00:16:15.965 "name": "BaseBdev3", 00:16:15.965 "uuid": "e7c041a3-e1de-4780-9f47-35eb825ec851", 00:16:15.965 "is_configured": true, 00:16:15.965 "data_offset": 0, 00:16:15.965 "data_size": 65536 00:16:15.965 } 00:16:15.965 ] 00:16:15.965 }' 00:16:15.965 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.965 00:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.224 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:16.224 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:16.224 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.224 00:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:16.482 00:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:16.483 00:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:16.483 00:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:16.741 [2024-07-25 00:00:12.431644] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:16:16.741 00:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:16:16.741 00:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:16:16.741 00:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:16.741 00:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:16:17.000 00:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:16:17.000 00:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:17.000 00:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:16:17.258 [2024-07-25 00:00:12.936662] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 [2024-07-25 00:00:12.936753] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline
00:16:17.258 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:16:17.258 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:16:17.258 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:17.258 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)'
00:16:17.515 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev=
00:16:17.515 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']'
00:16:17.515 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']'
00:16:17.515 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 ))
00:16:17.515 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:16:17.515 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:16:17.773 BaseBdev2
00:16:17.773 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2
00:16:17.773 00:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:16:17.773 00:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:17.773 00:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:17.773 00:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:17.773 00:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:17.773 00:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:18.032 00:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:18.291 [
00:16:18.291 {
00:16:18.291 "name": "BaseBdev2",
00:16:18.291 "aliases": [ 00:16:18.291 "a7703994-9182-4e81-b4d1-1cb5c8e27abc" 00:16:18.291 ], 00:16:18.291 "product_name": "Malloc disk", 00:16:18.291 "block_size": 512, 00:16:18.291 "num_blocks": 65536, 00:16:18.291 "uuid": "a7703994-9182-4e81-b4d1-1cb5c8e27abc", 00:16:18.291 "assigned_rate_limits": { 00:16:18.291 "rw_ios_per_sec": 0, 00:16:18.291 "rw_mbytes_per_sec": 0, 00:16:18.291 "r_mbytes_per_sec": 0, 00:16:18.291 "w_mbytes_per_sec": 0 00:16:18.291 }, 00:16:18.291 "claimed": false, 00:16:18.291 "zoned": false, 00:16:18.291 "supported_io_types": { 00:16:18.291 "read": true, 00:16:18.291 "write": true, 00:16:18.291 "unmap": true, 00:16:18.291 "flush": true, 00:16:18.291 "reset": true, 00:16:18.291 "nvme_admin": false, 00:16:18.291 "nvme_io": false, 00:16:18.291 "nvme_io_md": false, 00:16:18.291 "write_zeroes": true, 00:16:18.291 "zcopy": true, 00:16:18.291 "get_zone_info": false, 00:16:18.291 "zone_management": false, 00:16:18.291 "zone_append": false, 00:16:18.291 "compare": false, 00:16:18.291 "compare_and_write": false, 00:16:18.291 "abort": true, 00:16:18.291 "seek_hole": false, 00:16:18.291 "seek_data": false, 00:16:18.291 "copy": true, 00:16:18.291 "nvme_iov_md": false 00:16:18.291 }, 00:16:18.291 "memory_domains": [ 00:16:18.291 { 00:16:18.291 "dma_device_id": "system", 00:16:18.291 "dma_device_type": 1 00:16:18.291 }, 00:16:18.291 { 00:16:18.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.291 "dma_device_type": 2 00:16:18.291 } 00:16:18.291 ], 00:16:18.291 "driver_specific": {} 00:16:18.291 } 00:16:18.291 ] 00:16:18.291 00:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:18.291 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:18.291 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:18.291 00:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:18.550 BaseBdev3 00:16:18.550 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:18.550 00:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:18.550 00:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:18.550 00:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:18.550 00:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:18.550 00:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:18.550 00:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:18.809 00:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:18.809 [ 00:16:18.809 { 00:16:18.809 "name": "BaseBdev3", 00:16:18.809 "aliases": [ 00:16:18.809 "58fd0848-a267-4bcb-888b-7e786783aab3" 00:16:18.809 ], 00:16:18.809 "product_name": "Malloc disk", 00:16:18.809 "block_size": 512, 00:16:18.809 "num_blocks": 65536, 00:16:18.809 "uuid": "58fd0848-a267-4bcb-888b-7e786783aab3", 00:16:18.809 "assigned_rate_limits": { 00:16:18.809 "rw_ios_per_sec": 0, 
00:16:18.809 "rw_mbytes_per_sec": 0, 00:16:18.809 "r_mbytes_per_sec": 0, 00:16:18.809 "w_mbytes_per_sec": 0 00:16:18.809 }, 00:16:18.809 "claimed": false, 00:16:18.809 "zoned": false, 00:16:18.809 "supported_io_types": { 00:16:18.809 "read": true, 00:16:18.809 "write": true, 00:16:18.809 "unmap": true, 00:16:18.809 "flush": true, 00:16:18.809 "reset": true, 00:16:18.809 "nvme_admin": false, 00:16:18.809 "nvme_io": false, 00:16:18.809 "nvme_io_md": false, 00:16:18.809 "write_zeroes": true, 00:16:18.809 "zcopy": true, 00:16:18.809 "get_zone_info": false, 00:16:18.809 "zone_management": false, 00:16:18.809 "zone_append": false, 00:16:18.809 "compare": false, 00:16:18.809 "compare_and_write": false, 00:16:18.809 "abort": true, 00:16:18.810 "seek_hole": false, 00:16:18.810 "seek_data": false, 00:16:18.810 "copy": true, 00:16:18.810 "nvme_iov_md": false 00:16:18.810 }, 00:16:18.810 "memory_domains": [ 00:16:18.810 { 00:16:18.810 "dma_device_id": "system", 00:16:18.810 "dma_device_type": 1 00:16:18.810 }, 00:16:18.810 { 00:16:18.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.810 "dma_device_type": 2 00:16:18.810 } 00:16:18.810 ], 00:16:18.810 "driver_specific": {} 00:16:18.810 } 00:16:18.810 ] 00:16:18.810 00:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:18.810 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:18.810 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:18.810 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:19.074 [2024-07-25 00:00:14.808093] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.074 [2024-07-25 00:00:14.808175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.074 [2024-07-25 00:00:14.808256] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.074 [2024-07-25 00:00:14.810271] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.074 00:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.333 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.333 "name": "Existed_Raid", 00:16:19.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.333 "strip_size_kb": 64, 00:16:19.333 "state": "configuring", 00:16:19.333 "raid_level": "raid0", 00:16:19.333 "superblock": false, 00:16:19.333 "num_base_bdevs": 3, 00:16:19.333 "num_base_bdevs_discovered": 2, 00:16:19.333 "num_base_bdevs_operational": 3, 00:16:19.333 "base_bdevs_list": [ 00:16:19.333 { 00:16:19.333 "name": "BaseBdev1", 00:16:19.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.333 "is_configured": false, 00:16:19.333 "data_offset": 0, 00:16:19.333 "data_size": 0 00:16:19.333 }, 00:16:19.333 { 00:16:19.333 "name": "BaseBdev2", 00:16:19.333 "uuid": "a7703994-9182-4e81-b4d1-1cb5c8e27abc", 00:16:19.333 "is_configured": true, 00:16:19.333 "data_offset": 0, 00:16:19.333 "data_size": 65536 00:16:19.333 }, 00:16:19.333 { 00:16:19.333 "name": "BaseBdev3", 00:16:19.333 "uuid": "58fd0848-a267-4bcb-888b-7e786783aab3", 00:16:19.333 "is_configured": true, 00:16:19.333 "data_offset": 0, 00:16:19.333 "data_size": 65536 00:16:19.333 } 00:16:19.333 ] 00:16:19.334 }' 00:16:19.334 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.334 00:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.592 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:19.851 [2024-07-25 00:00:15.612306] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.851 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.109 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.109 "name": "Existed_Raid", 
00:16:20.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.109 "strip_size_kb": 64, 00:16:20.109 "state": "configuring", 00:16:20.109 "raid_level": "raid0", 00:16:20.109 "superblock": false, 00:16:20.109 "num_base_bdevs": 3, 00:16:20.109 "num_base_bdevs_discovered": 1, 00:16:20.109 "num_base_bdevs_operational": 3, 00:16:20.109 "base_bdevs_list": [ 00:16:20.109 { 00:16:20.109 "name": "BaseBdev1", 00:16:20.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.109 "is_configured": false, 00:16:20.109 "data_offset": 0, 00:16:20.109 "data_size": 0 00:16:20.109 }, 00:16:20.109 { 00:16:20.109 "name": null, 00:16:20.109 "uuid": "a7703994-9182-4e81-b4d1-1cb5c8e27abc", 00:16:20.109 "is_configured": false, 00:16:20.109 "data_offset": 0, 00:16:20.109 "data_size": 65536 00:16:20.109 }, 00:16:20.109 { 00:16:20.109 "name": "BaseBdev3", 00:16:20.109 "uuid": "58fd0848-a267-4bcb-888b-7e786783aab3", 00:16:20.109 "is_configured": true, 00:16:20.109 "data_offset": 0, 00:16:20.109 "data_size": 65536 00:16:20.109 } 00:16:20.109 ] 00:16:20.109 }' 00:16:20.109 00:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.109 00:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.368 00:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.368 00:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:20.627 00:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:20.627 00:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:20.886 [2024-07-25 00:00:16.608435] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.886 BaseBdev1 00:16:20.886 00:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:20.886 00:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:20.886 00:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:20.886 00:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:20.886 00:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:20.886 00:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:20.886 00:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:21.145 00:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:21.404 [ 00:16:21.404 { 00:16:21.404 "name": "BaseBdev1", 00:16:21.404 "aliases": [ 00:16:21.404 "37267032-6c8c-43ec-a9da-910c494dcf42" 00:16:21.404 ], 00:16:21.404 "product_name": "Malloc disk", 00:16:21.404 "block_size": 512, 00:16:21.404 "num_blocks": 65536, 00:16:21.404 "uuid": "37267032-6c8c-43ec-a9da-910c494dcf42", 00:16:21.404 "assigned_rate_limits": { 00:16:21.404 "rw_ios_per_sec": 0, 00:16:21.404 "rw_mbytes_per_sec": 0, 00:16:21.404 
"r_mbytes_per_sec": 0, 00:16:21.404 "w_mbytes_per_sec": 0 00:16:21.404 }, 00:16:21.404 "claimed": true, 00:16:21.404 "claim_type": "exclusive_write", 00:16:21.404 "zoned": false, 00:16:21.404 "supported_io_types": { 00:16:21.404 "read": true, 00:16:21.404 "write": true, 00:16:21.404 "unmap": true, 00:16:21.404 "flush": true, 00:16:21.404 "reset": true, 00:16:21.404 "nvme_admin": false, 00:16:21.404 "nvme_io": false, 00:16:21.404 "nvme_io_md": false, 00:16:21.404 "write_zeroes": true, 00:16:21.404 "zcopy": true, 00:16:21.404 "get_zone_info": false, 00:16:21.404 "zone_management": false, 00:16:21.404 "zone_append": false, 00:16:21.404 "compare": false, 00:16:21.404 "compare_and_write": false, 00:16:21.404 "abort": true, 00:16:21.404 "seek_hole": false, 00:16:21.404 "seek_data": false, 00:16:21.404 "copy": true, 00:16:21.404 "nvme_iov_md": false 00:16:21.404 }, 00:16:21.404 "memory_domains": [ 00:16:21.404 { 00:16:21.405 "dma_device_id": "system", 00:16:21.405 "dma_device_type": 1 00:16:21.405 }, 00:16:21.405 { 00:16:21.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.405 "dma_device_type": 2 00:16:21.405 } 00:16:21.405 ], 00:16:21.405 "driver_specific": {} 00:16:21.405 } 00:16:21.405 ] 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.405 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.663 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.663 "name": "Existed_Raid", 00:16:21.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.663 "strip_size_kb": 64, 00:16:21.663 "state": "configuring", 00:16:21.663 "raid_level": "raid0", 00:16:21.663 "superblock": false, 00:16:21.663 "num_base_bdevs": 3, 00:16:21.663 "num_base_bdevs_discovered": 2, 00:16:21.663 "num_base_bdevs_operational": 3, 00:16:21.663 "base_bdevs_list": [ 00:16:21.663 { 00:16:21.663 "name": "BaseBdev1", 00:16:21.663 "uuid": "37267032-6c8c-43ec-a9da-910c494dcf42", 00:16:21.663 "is_configured": true, 00:16:21.663 "data_offset": 0, 00:16:21.663 "data_size": 65536 00:16:21.663 }, 00:16:21.663 { 00:16:21.663 "name": 
null, 00:16:21.663 "uuid": "a7703994-9182-4e81-b4d1-1cb5c8e27abc", 00:16:21.663 "is_configured": false, 00:16:21.663 "data_offset": 0, 00:16:21.663 "data_size": 65536 00:16:21.663 }, 00:16:21.663 { 00:16:21.663 "name": "BaseBdev3", 00:16:21.663 "uuid": "58fd0848-a267-4bcb-888b-7e786783aab3", 00:16:21.663 "is_configured": true, 00:16:21.663 "data_offset": 0, 00:16:21.663 "data_size": 65536 00:16:21.663 } 00:16:21.663 ] 00:16:21.663 }' 00:16:21.663 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.663 00:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.922 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:21.922 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.180 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:22.180 00:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:22.440 [2024-07-25 00:00:18.144947] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.440 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.699 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.699 "name": "Existed_Raid", 00:16:22.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.699 "strip_size_kb": 64, 00:16:22.699 "state": "configuring", 00:16:22.699 "raid_level": "raid0", 00:16:22.699 "superblock": false, 00:16:22.699 "num_base_bdevs": 3, 00:16:22.699 "num_base_bdevs_discovered": 1, 00:16:22.699 "num_base_bdevs_operational": 3, 00:16:22.699 "base_bdevs_list": [ 00:16:22.699 { 00:16:22.699 "name": "BaseBdev1", 00:16:22.699 "uuid": "37267032-6c8c-43ec-a9da-910c494dcf42", 00:16:22.699 "is_configured": true, 00:16:22.699 "data_offset": 0, 00:16:22.699 "data_size": 65536 
00:16:22.699 },
00:16:22.699 {
00:16:22.699 "name": null,
00:16:22.699 "uuid": "a7703994-9182-4e81-b4d1-1cb5c8e27abc",
00:16:22.699 "is_configured": false,
00:16:22.699 "data_offset": 0,
00:16:22.699 "data_size": 65536
00:16:22.699 },
00:16:22.699 {
00:16:22.699 "name": null,
00:16:22.699 "uuid": "58fd0848-a267-4bcb-888b-7e786783aab3",
00:16:22.699 "is_configured": false,
00:16:22.699 "data_offset": 0,
00:16:22.699 "data_size": 65536
00:16:22.699 }
00:16:22.699 ]
00:16:22.699 }'
00:16:22.699 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:22.699 00:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:22.957 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:22.957 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:16:23.215 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]]
00:16:23.215 00:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:16:23.471 [2024-07-25 00:00:19.249342] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:23.471 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:23.729 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:23.729 "name": "Existed_Raid",
00:16:23.729 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:23.729 "strip_size_kb": 64,
00:16:23.729 "state": "configuring",
00:16:23.729 "raid_level": "raid0",
00:16:23.729 "superblock": false,
00:16:23.729 "num_base_bdevs": 3,
00:16:23.729 "num_base_bdevs_discovered": 2,
00:16:23.729 "num_base_bdevs_operational": 3,
00:16:23.729 "base_bdevs_list": [
00:16:23.729 {
00:16:23.729 "name": "BaseBdev1",
00:16:23.729 "uuid": "37267032-6c8c-43ec-a9da-910c494dcf42",
00:16:23.729 "is_configured": true,
00:16:23.729 "data_offset": 0,
00:16:23.729 "data_size": 65536
00:16:23.729 },
00:16:23.729 {
00:16:23.729 "name": null,
00:16:23.729 "uuid": "a7703994-9182-4e81-b4d1-1cb5c8e27abc",
00:16:23.729 "is_configured": false,
00:16:23.729 "data_offset": 0,
00:16:23.729 "data_size": 65536
00:16:23.729 },
00:16:23.729 {
00:16:23.729 "name": "BaseBdev3",
00:16:23.729 "uuid": "58fd0848-a267-4bcb-888b-7e786783aab3",
00:16:23.729 "is_configured": true,
00:16:23.729 "data_offset": 0,
00:16:23.729 "data_size": 65536
00:16:23.729 }
00:16:23.729 ]
00:16:23.729 }'
00:16:23.729 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:23.729 00:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:23.988 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:23.988 00:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:16:24.246 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]]
00:16:24.246 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:16:24.503 [2024-07-25 00:00:20.305743] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:24.761 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:25.018 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:25.018 "name": "Existed_Raid",
00:16:25.018 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:25.018 "strip_size_kb": 64,
00:16:25.018 "state": "configuring",
00:16:25.018 "raid_level": "raid0",
00:16:25.018 "superblock": false,
00:16:25.018 "num_base_bdevs": 3,
00:16:25.018 "num_base_bdevs_discovered": 1,
00:16:25.018 "num_base_bdevs_operational": 3,
00:16:25.018 "base_bdevs_list": [
00:16:25.018 {
00:16:25.018 "name": null,
00:16:25.018 "uuid": "37267032-6c8c-43ec-a9da-910c494dcf42",
00:16:25.018 "is_configured": false,
00:16:25.018 "data_offset": 0,
00:16:25.018 "data_size": 65536
00:16:25.018 },
00:16:25.018 {
00:16:25.018 "name": null,
00:16:25.018 "uuid": "a7703994-9182-4e81-b4d1-1cb5c8e27abc",
00:16:25.018 "is_configured": false,
00:16:25.018 "data_offset": 0,
00:16:25.018 "data_size": 65536
00:16:25.018 },
00:16:25.018 {
00:16:25.018 "name": "BaseBdev3",
00:16:25.018 "uuid": "58fd0848-a267-4bcb-888b-7e786783aab3",
00:16:25.018 "is_configured": true,
00:16:25.018 "data_offset": 0,
00:16:25.018 "data_size": 65536
00:16:25.018 }
00:16:25.018 ]
00:16:25.018 }'
00:16:25.018 00:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:25.018 00:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:25.276 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:25.276 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:16:25.534 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]]
00:16:25.534 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:16:25.792 [2024-07-25 00:00:21.524157] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:25.792 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:26.049 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:26.049 "name": "Existed_Raid",
00:16:26.049 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:26.049 "strip_size_kb": 64,
00:16:26.049 "state": "configuring",
00:16:26.049 "raid_level": "raid0",
00:16:26.049 "superblock": false,
00:16:26.049 "num_base_bdevs": 3,
00:16:26.049 "num_base_bdevs_discovered": 2,
00:16:26.049 "num_base_bdevs_operational": 3,
00:16:26.049 "base_bdevs_list": [
00:16:26.049 {
00:16:26.049 "name": null,
00:16:26.049 "uuid": "37267032-6c8c-43ec-a9da-910c494dcf42",
00:16:26.049 "is_configured": false,
00:16:26.049 "data_offset": 0,
00:16:26.049 "data_size": 65536
00:16:26.049 },
00:16:26.049 {
00:16:26.049 "name": "BaseBdev2",
00:16:26.049 "uuid": "a7703994-9182-4e81-b4d1-1cb5c8e27abc",
00:16:26.049 "is_configured": true,
00:16:26.049 "data_offset": 0,
00:16:26.049 "data_size": 65536
00:16:26.049 },
00:16:26.049 {
00:16:26.049 "name": "BaseBdev3",
00:16:26.049 "uuid": "58fd0848-a267-4bcb-888b-7e786783aab3",
00:16:26.049 "is_configured": true,
00:16:26.049 "data_offset": 0,
00:16:26.049 "data_size": 65536
00:16:26.049 }
00:16:26.049 ]
00:16:26.049 }'
00:16:26.049 00:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:26.049 00:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:26.307 00:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:26.307 00:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:16:26.565 00:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]]
00:16:26.565 00:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:16:26.565 00:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:26.823 00:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 37267032-6c8c-43ec-a9da-910c494dcf42
00:16:27.081 [2024-07-25 00:00:22.834147] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:16:27.081 [2024-07-25 00:00:22.834194] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80
00:16:27.081 [2024-07-25 00:00:22.834207] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:16:27.081 [2024-07-25 00:00:22.834309] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40
00:16:27.081 [2024-07-25 00:00:22.834630] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80
00:16:27.081 [2024-07-25 00:00:22.834645] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008a80
00:16:27.081 [2024-07-25 00:00:22.835032] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:27.081 NewBaseBdev
00:16:27.081 00:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev
00:16:27.081 00:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:16:27.081 00:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:27.081 00:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:27.081 00:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:27.081 00:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:27.081 00:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:27.340 00:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000
00:16:27.600 [
00:16:27.600 {
00:16:27.600 "name": "NewBaseBdev",
00:16:27.600 "aliases": [
00:16:27.600 "37267032-6c8c-43ec-a9da-910c494dcf42"
00:16:27.600 ],
00:16:27.600 "product_name": "Malloc disk",
00:16:27.600 "block_size": 512,
00:16:27.600 "num_blocks": 65536,
00:16:27.600 "uuid": "37267032-6c8c-43ec-a9da-910c494dcf42",
00:16:27.600 "assigned_rate_limits": {
00:16:27.600 "rw_ios_per_sec": 0,
00:16:27.600 "rw_mbytes_per_sec": 0,
00:16:27.600 "r_mbytes_per_sec": 0,
00:16:27.600 "w_mbytes_per_sec": 0
00:16:27.600 },
00:16:27.600 "claimed": true,
00:16:27.600 "claim_type": "exclusive_write",
00:16:27.600 "zoned": false,
00:16:27.600 "supported_io_types": {
00:16:27.600 "read": true,
00:16:27.600 "write": true,
00:16:27.600 "unmap": true,
00:16:27.600 "flush": true,
00:16:27.600 "reset": true,
00:16:27.600 "nvme_admin": false,
00:16:27.600 "nvme_io": false,
00:16:27.600 "nvme_io_md": false,
00:16:27.600 "write_zeroes": true,
00:16:27.600 "zcopy": true,
00:16:27.600 "get_zone_info": false,
00:16:27.600 "zone_management": false,
00:16:27.600 "zone_append": false,
00:16:27.600 "compare": false,
00:16:27.600 "compare_and_write": false,
00:16:27.600 "abort": true,
00:16:27.600 "seek_hole": false,
00:16:27.600 "seek_data": false,
00:16:27.600 "copy": true,
00:16:27.600 "nvme_iov_md": false
00:16:27.600 },
00:16:27.600 "memory_domains": [
00:16:27.600 {
00:16:27.600 "dma_device_id": "system",
00:16:27.600 "dma_device_type": 1
00:16:27.600 },
00:16:27.600 {
00:16:27.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:27.600 "dma_device_type": 2
00:16:27.600 }
00:16:27.600 ],
00:16:27.600 "driver_specific": {}
00:16:27.600 }
00:16:27.600 ]
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:27.600 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:27.858 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.858 "name": "Existed_Raid", 00:16:27.858 "uuid": "ae7c470e-0a02-4476-8417-720ba4b694f2", 00:16:27.858 "strip_size_kb": 64, 00:16:27.858 "state": "online", 00:16:27.858 "raid_level": "raid0", 00:16:27.858 "superblock": false, 00:16:27.858 "num_base_bdevs": 3, 00:16:27.858 "num_base_bdevs_discovered": 3, 00:16:27.858 "num_base_bdevs_operational": 3, 00:16:27.858 "base_bdevs_list": [ 00:16:27.858 { 00:16:27.858 "name": "NewBaseBdev", 00:16:27.858 "uuid": "37267032-6c8c-43ec-a9da-910c494dcf42", 00:16:27.858 "is_configured": true, 00:16:27.858 "data_offset": 0, 00:16:27.858 "data_size": 65536 00:16:27.858 }, 00:16:27.858 { 00:16:27.858 "name": "BaseBdev2", 00:16:27.858 "uuid": "a7703994-9182-4e81-b4d1-1cb5c8e27abc", 00:16:27.858 "is_configured": true, 00:16:27.858 "data_offset": 0, 00:16:27.858 "data_size": 65536 00:16:27.858 }, 00:16:27.858 { 00:16:27.858 "name": "BaseBdev3", 00:16:27.858 "uuid": "58fd0848-a267-4bcb-888b-7e786783aab3", 00:16:27.858 "is_configured": true, 00:16:27.858 "data_offset": 0, 00:16:27.858 "data_size": 65536 00:16:27.858 } 00:16:27.858 ] 00:16:27.858 }' 00:16:27.858 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.858 00:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.116 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:28.116 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:28.116 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:28.116 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:28.116 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:28.116 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:28.116 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:28.116 00:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:28.374 [2024-07-25 00:00:24.126843] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.374 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:28.374 "name": "Existed_Raid", 00:16:28.374 "aliases": [ 00:16:28.374 "ae7c470e-0a02-4476-8417-720ba4b694f2" 00:16:28.374 ], 00:16:28.374 "product_name": "Raid Volume", 00:16:28.374 "block_size": 512, 00:16:28.374 "num_blocks": 196608, 00:16:28.374 "uuid": "ae7c470e-0a02-4476-8417-720ba4b694f2", 00:16:28.374 "assigned_rate_limits": { 00:16:28.374 "rw_ios_per_sec": 0, 00:16:28.374 "rw_mbytes_per_sec": 0, 00:16:28.374 "r_mbytes_per_sec": 0, 00:16:28.374 "w_mbytes_per_sec": 0 00:16:28.374 }, 00:16:28.374 "claimed": false, 00:16:28.374 "zoned": false, 00:16:28.374 "supported_io_types": { 00:16:28.374 "read": true, 00:16:28.374 "write": true, 00:16:28.374 "unmap": true, 00:16:28.374 "flush": true, 00:16:28.374 "reset": true, 00:16:28.374 "nvme_admin": false, 00:16:28.374 "nvme_io": false, 00:16:28.374 "nvme_io_md": false, 00:16:28.374 "write_zeroes": true, 00:16:28.374 "zcopy": false, 00:16:28.374 "get_zone_info": false, 
00:16:28.374 "zone_management": false, 00:16:28.374 "zone_append": false, 00:16:28.374 "compare": false, 00:16:28.374 "compare_and_write": false, 00:16:28.374 "abort": false, 00:16:28.374 "seek_hole": false, 00:16:28.374 "seek_data": false, 00:16:28.374 "copy": false, 00:16:28.374 "nvme_iov_md": false 00:16:28.374 }, 00:16:28.374 "memory_domains": [ 00:16:28.374 { 00:16:28.374 "dma_device_id": "system", 00:16:28.374 "dma_device_type": 1 00:16:28.374 }, 00:16:28.374 { 00:16:28.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.374 "dma_device_type": 2 00:16:28.374 }, 00:16:28.374 { 00:16:28.374 "dma_device_id": "system", 00:16:28.374 "dma_device_type": 1 00:16:28.374 }, 00:16:28.374 { 00:16:28.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.374 "dma_device_type": 2 00:16:28.374 }, 00:16:28.374 { 00:16:28.374 "dma_device_id": "system", 00:16:28.374 "dma_device_type": 1 00:16:28.374 }, 00:16:28.374 { 00:16:28.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.374 "dma_device_type": 2 00:16:28.374 } 00:16:28.374 ], 00:16:28.374 "driver_specific": { 00:16:28.374 "raid": { 00:16:28.374 "uuid": "ae7c470e-0a02-4476-8417-720ba4b694f2", 00:16:28.374 "strip_size_kb": 64, 00:16:28.374 "state": "online", 00:16:28.374 "raid_level": "raid0", 00:16:28.374 "superblock": false, 00:16:28.374 "num_base_bdevs": 3, 00:16:28.374 "num_base_bdevs_discovered": 3, 00:16:28.374 "num_base_bdevs_operational": 3, 00:16:28.374 "base_bdevs_list": [ 00:16:28.374 { 00:16:28.374 "name": "NewBaseBdev", 00:16:28.374 "uuid": "37267032-6c8c-43ec-a9da-910c494dcf42", 00:16:28.374 "is_configured": true, 00:16:28.374 "data_offset": 0, 00:16:28.374 "data_size": 65536 00:16:28.374 }, 00:16:28.374 { 00:16:28.374 "name": "BaseBdev2", 00:16:28.374 "uuid": "a7703994-9182-4e81-b4d1-1cb5c8e27abc", 00:16:28.374 "is_configured": true, 00:16:28.374 "data_offset": 0, 00:16:28.374 "data_size": 65536 00:16:28.374 }, 00:16:28.374 { 00:16:28.374 "name": "BaseBdev3", 00:16:28.374 "uuid": "58fd0848-a267-4bcb-888b-7e786783aab3", 00:16:28.374 "is_configured": true, 00:16:28.374 "data_offset": 0, 00:16:28.374 "data_size": 65536 00:16:28.374 } 00:16:28.374 ] 00:16:28.374 } 00:16:28.374 } 00:16:28.374 }' 00:16:28.374 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.374 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:28.374 BaseBdev2 00:16:28.374 BaseBdev3' 00:16:28.374 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.374 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:28.374 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:28.633 "name": "NewBaseBdev", 00:16:28.633 "aliases": [ 00:16:28.633 "37267032-6c8c-43ec-a9da-910c494dcf42" 00:16:28.633 ], 00:16:28.633 "product_name": "Malloc disk", 00:16:28.633 "block_size": 512, 00:16:28.633 "num_blocks": 65536, 00:16:28.633 "uuid": "37267032-6c8c-43ec-a9da-910c494dcf42", 00:16:28.633 "assigned_rate_limits": { 00:16:28.633 "rw_ios_per_sec": 0, 00:16:28.633 "rw_mbytes_per_sec": 0, 00:16:28.633 "r_mbytes_per_sec": 0, 00:16:28.633 "w_mbytes_per_sec": 0 00:16:28.633 }, 00:16:28.633 "claimed": 
true, 00:16:28.633 "claim_type": "exclusive_write", 00:16:28.633 "zoned": false, 00:16:28.633 "supported_io_types": { 00:16:28.633 "read": true, 00:16:28.633 "write": true, 00:16:28.633 "unmap": true, 00:16:28.633 "flush": true, 00:16:28.633 "reset": true, 00:16:28.633 "nvme_admin": false, 00:16:28.633 "nvme_io": false, 00:16:28.633 "nvme_io_md": false, 00:16:28.633 "write_zeroes": true, 00:16:28.633 "zcopy": true, 00:16:28.633 "get_zone_info": false, 00:16:28.633 "zone_management": false, 00:16:28.633 "zone_append": false, 00:16:28.633 "compare": false, 00:16:28.633 "compare_and_write": false, 00:16:28.633 "abort": true, 00:16:28.633 "seek_hole": false, 00:16:28.633 "seek_data": false, 00:16:28.633 "copy": true, 00:16:28.633 "nvme_iov_md": false 00:16:28.633 }, 00:16:28.633 "memory_domains": [ 00:16:28.633 { 00:16:28.633 "dma_device_id": "system", 00:16:28.633 "dma_device_type": 1 00:16:28.633 }, 00:16:28.633 { 00:16:28.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.633 "dma_device_type": 2 00:16:28.633 } 00:16:28.633 ], 00:16:28.633 "driver_specific": {} 00:16:28.633 }' 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:28.633 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:28.892 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:28.892 "name": "BaseBdev2", 00:16:28.892 "aliases": [ 00:16:28.892 "a7703994-9182-4e81-b4d1-1cb5c8e27abc" 00:16:28.892 ], 00:16:28.892 "product_name": "Malloc disk", 00:16:28.892 "block_size": 512, 00:16:28.892 "num_blocks": 65536, 00:16:28.892 "uuid": "a7703994-9182-4e81-b4d1-1cb5c8e27abc", 00:16:28.892 "assigned_rate_limits": { 00:16:28.892 "rw_ios_per_sec": 0, 00:16:28.892 "rw_mbytes_per_sec": 0, 00:16:28.892 "r_mbytes_per_sec": 0, 00:16:28.892 "w_mbytes_per_sec": 0 00:16:28.892 }, 00:16:28.892 "claimed": true, 00:16:28.892 "claim_type": "exclusive_write", 00:16:28.892 "zoned": false, 00:16:28.892 "supported_io_types": { 00:16:28.892 "read": true, 00:16:28.892 "write": true, 00:16:28.892 "unmap": true, 
00:16:28.892 "flush": true, 00:16:28.892 "reset": true, 00:16:28.892 "nvme_admin": false, 00:16:28.892 "nvme_io": false, 00:16:28.892 "nvme_io_md": false, 00:16:28.892 "write_zeroes": true, 00:16:28.892 "zcopy": true, 00:16:28.892 "get_zone_info": false, 00:16:28.892 "zone_management": false, 00:16:28.892 "zone_append": false, 00:16:28.892 "compare": false, 00:16:28.892 "compare_and_write": false, 00:16:28.892 "abort": true, 00:16:28.892 "seek_hole": false, 00:16:28.892 "seek_data": false, 00:16:28.892 "copy": true, 00:16:28.893 "nvme_iov_md": false 00:16:28.893 }, 00:16:28.893 "memory_domains": [ 00:16:28.893 { 00:16:28.893 "dma_device_id": "system", 00:16:28.893 "dma_device_type": 1 00:16:28.893 }, 00:16:28.893 { 00:16:28.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.893 "dma_device_type": 2 00:16:28.893 } 00:16:28.893 ], 00:16:28.893 "driver_specific": {} 00:16:28.893 }' 00:16:28.893 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.893 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.893 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:28.893 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.893 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.893 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:28.893 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.893 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.893 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:28.893 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.152 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.152 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.152 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.152 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:29.152 00:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.410 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.410 "name": "BaseBdev3", 00:16:29.410 "aliases": [ 00:16:29.410 "58fd0848-a267-4bcb-888b-7e786783aab3" 00:16:29.410 ], 00:16:29.410 "product_name": "Malloc disk", 00:16:29.410 "block_size": 512, 00:16:29.410 "num_blocks": 65536, 00:16:29.410 "uuid": "58fd0848-a267-4bcb-888b-7e786783aab3", 00:16:29.410 "assigned_rate_limits": { 00:16:29.410 "rw_ios_per_sec": 0, 00:16:29.410 "rw_mbytes_per_sec": 0, 00:16:29.410 "r_mbytes_per_sec": 0, 00:16:29.410 "w_mbytes_per_sec": 0 00:16:29.410 }, 00:16:29.410 "claimed": true, 00:16:29.410 "claim_type": "exclusive_write", 00:16:29.410 "zoned": false, 00:16:29.410 "supported_io_types": { 00:16:29.410 "read": true, 00:16:29.410 "write": true, 00:16:29.410 "unmap": true, 00:16:29.410 "flush": true, 00:16:29.410 "reset": true, 00:16:29.410 "nvme_admin": false, 00:16:29.410 "nvme_io": false, 00:16:29.410 "nvme_io_md": false, 00:16:29.410 "write_zeroes": true, 
00:16:29.410 "zcopy": true, 00:16:29.410 "get_zone_info": false, 00:16:29.410 "zone_management": false, 00:16:29.411 "zone_append": false, 00:16:29.411 "compare": false, 00:16:29.411 "compare_and_write": false, 00:16:29.411 "abort": true, 00:16:29.411 "seek_hole": false, 00:16:29.411 "seek_data": false, 00:16:29.411 "copy": true, 00:16:29.411 "nvme_iov_md": false 00:16:29.411 }, 00:16:29.411 "memory_domains": [ 00:16:29.411 { 00:16:29.411 "dma_device_id": "system", 00:16:29.411 "dma_device_type": 1 00:16:29.411 }, 00:16:29.411 { 00:16:29.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.411 "dma_device_type": 2 00:16:29.411 } 00:16:29.411 ], 00:16:29.411 "driver_specific": {} 00:16:29.411 }' 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.411 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:29.670 [2024-07-25 00:00:25.378830] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.670 [2024-07-25 00:00:25.378877] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.670 [2024-07-25 00:00:25.378959] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.670 [2024-07-25 00:00:25.379072] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.670 [2024-07-25 00:00:25.379096] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name Existed_Raid, state offline 00:16:29.670 00:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 80052 00:16:29.670 00:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80052 ']' 00:16:29.670 00:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80052 00:16:29.670 00:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:29.670 00:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.670 00:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80052 00:16:29.670 killing process with pid 80052 00:16:29.670 00:00:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:29.670 00:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:29.670 00:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80052' 00:16:29.670 00:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80052 00:16:29.670 00:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80052 00:16:29.670 [2024-07-25 00:00:25.432415] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:29.929 [2024-07-25 00:00:25.658711] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:30.865 00:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:30.865 00:16:30.865 real 0m24.161s 00:16:30.865 user 0m42.234s 00:16:30.865 sys 0m3.803s 00:16:30.865 00:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.865 00:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.865 ************************************ 00:16:30.865 END TEST raid_state_function_test 00:16:30.865 ************************************ 00:16:30.865 00:00:26 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:30.865 00:00:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:30.865 00:00:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.865 00:00:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 ************************************ 00:16:31.126 START TEST raid_state_function_test_sb 00:16:31.126 ************************************ 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:31.126 00:00:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:31.126 Process raid pid: 80958 00:16:31.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=80958 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 80958' 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 80958 /var/tmp/spdk-raid.sock 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80958 ']' 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.126 00:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 [2024-07-25 00:00:26.809285] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
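The trace that follows drives the same RPC surface as the run above, now with -s (superblock) set. Reduced to its essentials, the sequence is roughly the sketch below; it assumes only a bdev_svc app already listening on /var/tmp/spdk-raid.sock, and every RPC name, flag, and path in it is lifted from this log (the $rpc shorthand is the one addition):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Register the array before any base bdev exists; it sits in the
  # "configuring" state. -z 64 is the strip size in KB, -s asks for an
  # on-disk superblock, which is why each 65536-block base bdev later
  # reports data_offset 2048 and data_size 63488 rather than 0/65536.
  $rpc bdev_raid_create -z 64 -s -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # Back the three slots with 32 MB, 512-byte-block malloc disks; each
  # is claimed on creation, and the array flips to "online" once the
  # last slot is filled.
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $rpc bdev_malloc_create 32 512 -b "$b"
  done

  $rpc bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid").state'   # -> online
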
00:16:31.126 [2024-07-25 00:00:26.809467] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.126 [2024-07-25 00:00:26.982447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.385 [2024-07-25 00:00:27.147032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.644 [2024-07-25 00:00:27.313107] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.903 00:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.903 00:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:31.903 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:32.163 [2024-07-25 00:00:27.934945] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.163 [2024-07-25 00:00:27.935296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.163 [2024-07-25 00:00:27.935341] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.163 [2024-07-25 00:00:27.935359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.163 [2024-07-25 00:00:27.935370] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:32.163 [2024-07-25 00:00:27.935398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.163 00:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.421 00:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.421 "name": "Existed_Raid", 00:16:32.421 "uuid": 
"dbb83004-a08f-492e-8846-1ca2890fd4a7", 00:16:32.421 "strip_size_kb": 64, 00:16:32.421 "state": "configuring", 00:16:32.421 "raid_level": "raid0", 00:16:32.421 "superblock": true, 00:16:32.421 "num_base_bdevs": 3, 00:16:32.421 "num_base_bdevs_discovered": 0, 00:16:32.421 "num_base_bdevs_operational": 3, 00:16:32.421 "base_bdevs_list": [ 00:16:32.421 { 00:16:32.421 "name": "BaseBdev1", 00:16:32.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.421 "is_configured": false, 00:16:32.421 "data_offset": 0, 00:16:32.421 "data_size": 0 00:16:32.421 }, 00:16:32.421 { 00:16:32.421 "name": "BaseBdev2", 00:16:32.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.421 "is_configured": false, 00:16:32.421 "data_offset": 0, 00:16:32.421 "data_size": 0 00:16:32.421 }, 00:16:32.421 { 00:16:32.421 "name": "BaseBdev3", 00:16:32.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.421 "is_configured": false, 00:16:32.421 "data_offset": 0, 00:16:32.421 "data_size": 0 00:16:32.422 } 00:16:32.422 ] 00:16:32.422 }' 00:16:32.422 00:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.422 00:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.679 00:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:32.937 [2024-07-25 00:00:28.723141] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.937 [2024-07-25 00:00:28.723188] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:16:32.937 00:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:33.196 [2024-07-25 00:00:28.927204] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.196 [2024-07-25 00:00:28.927482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.196 [2024-07-25 00:00:28.927515] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.196 [2024-07-25 00:00:28.927536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.196 [2024-07-25 00:00:28.927546] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:33.196 [2024-07-25 00:00:28.927560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:33.196 00:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:33.454 [2024-07-25 00:00:29.158512] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.454 BaseBdev1 00:16:33.454 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:33.454 00:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:33.454 00:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:33.454 00:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:16:33.454 00:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:33.454 00:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:33.454 00:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.712 [ 00:16:33.712 { 00:16:33.712 "name": "BaseBdev1", 00:16:33.712 "aliases": [ 00:16:33.712 "45d82183-955f-4014-aa1c-6661a267ba22" 00:16:33.712 ], 00:16:33.712 "product_name": "Malloc disk", 00:16:33.712 "block_size": 512, 00:16:33.712 "num_blocks": 65536, 00:16:33.712 "uuid": "45d82183-955f-4014-aa1c-6661a267ba22", 00:16:33.712 "assigned_rate_limits": { 00:16:33.712 "rw_ios_per_sec": 0, 00:16:33.712 "rw_mbytes_per_sec": 0, 00:16:33.712 "r_mbytes_per_sec": 0, 00:16:33.712 "w_mbytes_per_sec": 0 00:16:33.712 }, 00:16:33.712 "claimed": true, 00:16:33.712 "claim_type": "exclusive_write", 00:16:33.712 "zoned": false, 00:16:33.712 "supported_io_types": { 00:16:33.712 "read": true, 00:16:33.712 "write": true, 00:16:33.712 "unmap": true, 00:16:33.712 "flush": true, 00:16:33.712 "reset": true, 00:16:33.712 "nvme_admin": false, 00:16:33.712 "nvme_io": false, 00:16:33.712 "nvme_io_md": false, 00:16:33.712 "write_zeroes": true, 00:16:33.712 "zcopy": true, 00:16:33.712 "get_zone_info": false, 00:16:33.712 "zone_management": false, 00:16:33.712 "zone_append": false, 00:16:33.712 "compare": false, 00:16:33.712 "compare_and_write": false, 00:16:33.712 "abort": true, 00:16:33.712 "seek_hole": false, 00:16:33.712 "seek_data": false, 00:16:33.712 "copy": true, 00:16:33.712 "nvme_iov_md": false 00:16:33.712 }, 00:16:33.712 "memory_domains": [ 00:16:33.712 { 00:16:33.712 "dma_device_id": "system", 00:16:33.712 "dma_device_type": 1 00:16:33.712 }, 00:16:33.712 { 00:16:33.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.712 "dma_device_type": 2 00:16:33.712 } 00:16:33.712 ], 00:16:33.712 "driver_specific": {} 00:16:33.712 } 00:16:33.712 ] 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.712 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.972 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.973 "name": "Existed_Raid", 00:16:33.973 "uuid": "2636335e-f7c2-4757-9a3d-c69a3e29d11f", 00:16:33.973 "strip_size_kb": 64, 00:16:33.973 "state": "configuring", 00:16:33.973 "raid_level": "raid0", 00:16:33.973 "superblock": true, 00:16:33.973 "num_base_bdevs": 3, 00:16:33.973 "num_base_bdevs_discovered": 1, 00:16:33.973 "num_base_bdevs_operational": 3, 00:16:33.973 "base_bdevs_list": [ 00:16:33.973 { 00:16:33.973 "name": "BaseBdev1", 00:16:33.973 "uuid": "45d82183-955f-4014-aa1c-6661a267ba22", 00:16:33.973 "is_configured": true, 00:16:33.973 "data_offset": 2048, 00:16:33.973 "data_size": 63488 00:16:33.973 }, 00:16:33.973 { 00:16:33.973 "name": "BaseBdev2", 00:16:33.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.973 "is_configured": false, 00:16:33.973 "data_offset": 0, 00:16:33.973 "data_size": 0 00:16:33.973 }, 00:16:33.973 { 00:16:33.973 "name": "BaseBdev3", 00:16:33.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.973 "is_configured": false, 00:16:33.973 "data_offset": 0, 00:16:33.973 "data_size": 0 00:16:33.973 } 00:16:33.973 ] 00:16:33.973 }' 00:16:33.973 00:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.973 00:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.541 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:34.541 [2024-07-25 00:00:30.366879] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.541 [2024-07-25 00:00:30.366941] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:16:34.541 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:34.799 [2024-07-25 00:00:30.571058] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.799 [2024-07-25 00:00:30.573169] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.799 [2024-07-25 00:00:30.573221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.800 [2024-07-25 00:00:30.573253] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.800 [2024-07-25 00:00:30.573268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:34.800 00:00:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.800 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.058 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.058 "name": "Existed_Raid", 00:16:35.058 "uuid": "60417b31-e186-45a8-8778-77952f983663", 00:16:35.058 "strip_size_kb": 64, 00:16:35.058 "state": "configuring", 00:16:35.058 "raid_level": "raid0", 00:16:35.058 "superblock": true, 00:16:35.059 "num_base_bdevs": 3, 00:16:35.059 "num_base_bdevs_discovered": 1, 00:16:35.059 "num_base_bdevs_operational": 3, 00:16:35.059 "base_bdevs_list": [ 00:16:35.059 { 00:16:35.059 "name": "BaseBdev1", 00:16:35.059 "uuid": "45d82183-955f-4014-aa1c-6661a267ba22", 00:16:35.059 "is_configured": true, 00:16:35.059 "data_offset": 2048, 00:16:35.059 "data_size": 63488 00:16:35.059 }, 00:16:35.059 { 00:16:35.059 "name": "BaseBdev2", 00:16:35.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.059 "is_configured": false, 00:16:35.059 "data_offset": 0, 00:16:35.059 "data_size": 0 00:16:35.059 }, 00:16:35.059 { 00:16:35.059 "name": "BaseBdev3", 00:16:35.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.059 "is_configured": false, 00:16:35.059 "data_offset": 0, 00:16:35.059 "data_size": 0 00:16:35.059 } 00:16:35.059 ] 00:16:35.059 }' 00:16:35.059 00:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.059 00:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.317 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.575 [2024-07-25 00:00:31.405723] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.575 BaseBdev2 00:16:35.575 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:35.575 00:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:35.575 00:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:35.575 00:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local i 00:16:35.575 00:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:35.575 00:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:35.575 00:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.832 00:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:36.093 [ 00:16:36.093 { 00:16:36.093 "name": "BaseBdev2", 00:16:36.093 "aliases": [ 00:16:36.093 "7576fbdb-20c8-4012-8332-a096f79c8758" 00:16:36.093 ], 00:16:36.093 "product_name": "Malloc disk", 00:16:36.093 "block_size": 512, 00:16:36.093 "num_blocks": 65536, 00:16:36.093 "uuid": "7576fbdb-20c8-4012-8332-a096f79c8758", 00:16:36.093 "assigned_rate_limits": { 00:16:36.093 "rw_ios_per_sec": 0, 00:16:36.093 "rw_mbytes_per_sec": 0, 00:16:36.093 "r_mbytes_per_sec": 0, 00:16:36.093 "w_mbytes_per_sec": 0 00:16:36.093 }, 00:16:36.093 "claimed": true, 00:16:36.093 "claim_type": "exclusive_write", 00:16:36.093 "zoned": false, 00:16:36.093 "supported_io_types": { 00:16:36.093 "read": true, 00:16:36.093 "write": true, 00:16:36.093 "unmap": true, 00:16:36.093 "flush": true, 00:16:36.093 "reset": true, 00:16:36.093 "nvme_admin": false, 00:16:36.093 "nvme_io": false, 00:16:36.093 "nvme_io_md": false, 00:16:36.093 "write_zeroes": true, 00:16:36.093 "zcopy": true, 00:16:36.093 "get_zone_info": false, 00:16:36.093 "zone_management": false, 00:16:36.093 "zone_append": false, 00:16:36.093 "compare": false, 00:16:36.093 "compare_and_write": false, 00:16:36.093 "abort": true, 00:16:36.093 "seek_hole": false, 00:16:36.093 "seek_data": false, 00:16:36.093 "copy": true, 00:16:36.093 "nvme_iov_md": false 00:16:36.093 }, 00:16:36.093 "memory_domains": [ 00:16:36.093 { 00:16:36.093 "dma_device_id": "system", 00:16:36.093 "dma_device_type": 1 00:16:36.093 }, 00:16:36.093 { 00:16:36.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.093 "dma_device_type": 2 00:16:36.093 } 00:16:36.093 ], 00:16:36.093 "driver_specific": {} 00:16:36.093 } 00:16:36.093 ] 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.093 00:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.351 00:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.351 "name": "Existed_Raid", 00:16:36.351 "uuid": "60417b31-e186-45a8-8778-77952f983663", 00:16:36.351 "strip_size_kb": 64, 00:16:36.351 "state": "configuring", 00:16:36.351 "raid_level": "raid0", 00:16:36.351 "superblock": true, 00:16:36.351 "num_base_bdevs": 3, 00:16:36.351 "num_base_bdevs_discovered": 2, 00:16:36.351 "num_base_bdevs_operational": 3, 00:16:36.351 "base_bdevs_list": [ 00:16:36.351 { 00:16:36.351 "name": "BaseBdev1", 00:16:36.351 "uuid": "45d82183-955f-4014-aa1c-6661a267ba22", 00:16:36.351 "is_configured": true, 00:16:36.351 "data_offset": 2048, 00:16:36.351 "data_size": 63488 00:16:36.351 }, 00:16:36.351 { 00:16:36.351 "name": "BaseBdev2", 00:16:36.351 "uuid": "7576fbdb-20c8-4012-8332-a096f79c8758", 00:16:36.351 "is_configured": true, 00:16:36.351 "data_offset": 2048, 00:16:36.351 "data_size": 63488 00:16:36.351 }, 00:16:36.351 { 00:16:36.351 "name": "BaseBdev3", 00:16:36.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.351 "is_configured": false, 00:16:36.351 "data_offset": 0, 00:16:36.351 "data_size": 0 00:16:36.351 } 00:16:36.351 ] 00:16:36.352 }' 00:16:36.352 00:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.352 00:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.610 00:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:36.871 [2024-07-25 00:00:32.695773] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.871 BaseBdev3 00:16:36.871 [2024-07-25 00:00:32.696433] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:16:36.871 [2024-07-25 00:00:32.696466] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:36.871 [2024-07-25 00:00:32.696586] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:16:36.872 [2024-07-25 00:00:32.696971] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:16:36.872 [2024-07-25 00:00:32.696989] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:16:36.872 [2024-07-25 00:00:32.697138] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.872 00:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:36.872 00:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:36.872 00:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:36.872 00:00:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:16:36.872 00:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:36.872 00:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:36.872 00:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.132 00:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:37.391 [ 00:16:37.391 { 00:16:37.391 "name": "BaseBdev3", 00:16:37.391 "aliases": [ 00:16:37.391 "3141f538-a6c8-4f1b-b602-9313d2cf1897" 00:16:37.391 ], 00:16:37.391 "product_name": "Malloc disk", 00:16:37.391 "block_size": 512, 00:16:37.391 "num_blocks": 65536, 00:16:37.391 "uuid": "3141f538-a6c8-4f1b-b602-9313d2cf1897", 00:16:37.391 "assigned_rate_limits": { 00:16:37.391 "rw_ios_per_sec": 0, 00:16:37.391 "rw_mbytes_per_sec": 0, 00:16:37.391 "r_mbytes_per_sec": 0, 00:16:37.391 "w_mbytes_per_sec": 0 00:16:37.391 }, 00:16:37.391 "claimed": true, 00:16:37.391 "claim_type": "exclusive_write", 00:16:37.391 "zoned": false, 00:16:37.391 "supported_io_types": { 00:16:37.391 "read": true, 00:16:37.391 "write": true, 00:16:37.391 "unmap": true, 00:16:37.391 "flush": true, 00:16:37.391 "reset": true, 00:16:37.391 "nvme_admin": false, 00:16:37.391 "nvme_io": false, 00:16:37.391 "nvme_io_md": false, 00:16:37.391 "write_zeroes": true, 00:16:37.391 "zcopy": true, 00:16:37.391 "get_zone_info": false, 00:16:37.391 "zone_management": false, 00:16:37.391 "zone_append": false, 00:16:37.391 "compare": false, 00:16:37.391 "compare_and_write": false, 00:16:37.391 "abort": true, 00:16:37.391 "seek_hole": false, 00:16:37.391 "seek_data": false, 00:16:37.391 "copy": true, 00:16:37.391 "nvme_iov_md": false 00:16:37.391 }, 00:16:37.391 "memory_domains": [ 00:16:37.391 { 00:16:37.391 "dma_device_id": "system", 00:16:37.391 "dma_device_type": 1 00:16:37.391 }, 00:16:37.391 { 00:16:37.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.391 "dma_device_type": 2 00:16:37.391 } 00:16:37.391 ], 00:16:37.391 "driver_specific": {} 00:16:37.391 } 00:16:37.391 ] 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.391 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.651 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.651 "name": "Existed_Raid", 00:16:37.651 "uuid": "60417b31-e186-45a8-8778-77952f983663", 00:16:37.651 "strip_size_kb": 64, 00:16:37.651 "state": "online", 00:16:37.651 "raid_level": "raid0", 00:16:37.651 "superblock": true, 00:16:37.651 "num_base_bdevs": 3, 00:16:37.651 "num_base_bdevs_discovered": 3, 00:16:37.651 "num_base_bdevs_operational": 3, 00:16:37.651 "base_bdevs_list": [ 00:16:37.651 { 00:16:37.651 "name": "BaseBdev1", 00:16:37.651 "uuid": "45d82183-955f-4014-aa1c-6661a267ba22", 00:16:37.651 "is_configured": true, 00:16:37.651 "data_offset": 2048, 00:16:37.651 "data_size": 63488 00:16:37.651 }, 00:16:37.651 { 00:16:37.651 "name": "BaseBdev2", 00:16:37.651 "uuid": "7576fbdb-20c8-4012-8332-a096f79c8758", 00:16:37.651 "is_configured": true, 00:16:37.651 "data_offset": 2048, 00:16:37.651 "data_size": 63488 00:16:37.651 }, 00:16:37.651 { 00:16:37.651 "name": "BaseBdev3", 00:16:37.651 "uuid": "3141f538-a6c8-4f1b-b602-9313d2cf1897", 00:16:37.651 "is_configured": true, 00:16:37.651 "data_offset": 2048, 00:16:37.651 "data_size": 63488 00:16:37.651 } 00:16:37.651 ] 00:16:37.651 }' 00:16:37.651 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.651 00:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.909 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:37.909 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:37.909 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:37.909 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:37.909 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:37.909 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:37.909 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:37.909 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:38.167 [2024-07-25 00:00:33.976601] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.167 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:38.167 "name": "Existed_Raid", 00:16:38.167 "aliases": [ 00:16:38.168 "60417b31-e186-45a8-8778-77952f983663" 00:16:38.168 ], 00:16:38.168 "product_name": "Raid Volume", 00:16:38.168 "block_size": 512, 00:16:38.168 "num_blocks": 190464, 00:16:38.168 "uuid": "60417b31-e186-45a8-8778-77952f983663", 00:16:38.168 
"assigned_rate_limits": { 00:16:38.168 "rw_ios_per_sec": 0, 00:16:38.168 "rw_mbytes_per_sec": 0, 00:16:38.168 "r_mbytes_per_sec": 0, 00:16:38.168 "w_mbytes_per_sec": 0 00:16:38.168 }, 00:16:38.168 "claimed": false, 00:16:38.168 "zoned": false, 00:16:38.168 "supported_io_types": { 00:16:38.168 "read": true, 00:16:38.168 "write": true, 00:16:38.168 "unmap": true, 00:16:38.168 "flush": true, 00:16:38.168 "reset": true, 00:16:38.168 "nvme_admin": false, 00:16:38.168 "nvme_io": false, 00:16:38.168 "nvme_io_md": false, 00:16:38.168 "write_zeroes": true, 00:16:38.168 "zcopy": false, 00:16:38.168 "get_zone_info": false, 00:16:38.168 "zone_management": false, 00:16:38.168 "zone_append": false, 00:16:38.168 "compare": false, 00:16:38.168 "compare_and_write": false, 00:16:38.168 "abort": false, 00:16:38.168 "seek_hole": false, 00:16:38.168 "seek_data": false, 00:16:38.168 "copy": false, 00:16:38.168 "nvme_iov_md": false 00:16:38.168 }, 00:16:38.168 "memory_domains": [ 00:16:38.168 { 00:16:38.168 "dma_device_id": "system", 00:16:38.168 "dma_device_type": 1 00:16:38.168 }, 00:16:38.168 { 00:16:38.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.168 "dma_device_type": 2 00:16:38.168 }, 00:16:38.168 { 00:16:38.168 "dma_device_id": "system", 00:16:38.168 "dma_device_type": 1 00:16:38.168 }, 00:16:38.168 { 00:16:38.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.168 "dma_device_type": 2 00:16:38.168 }, 00:16:38.168 { 00:16:38.168 "dma_device_id": "system", 00:16:38.168 "dma_device_type": 1 00:16:38.168 }, 00:16:38.168 { 00:16:38.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.168 "dma_device_type": 2 00:16:38.168 } 00:16:38.168 ], 00:16:38.168 "driver_specific": { 00:16:38.168 "raid": { 00:16:38.168 "uuid": "60417b31-e186-45a8-8778-77952f983663", 00:16:38.168 "strip_size_kb": 64, 00:16:38.168 "state": "online", 00:16:38.168 "raid_level": "raid0", 00:16:38.168 "superblock": true, 00:16:38.168 "num_base_bdevs": 3, 00:16:38.168 "num_base_bdevs_discovered": 3, 00:16:38.168 "num_base_bdevs_operational": 3, 00:16:38.168 "base_bdevs_list": [ 00:16:38.168 { 00:16:38.168 "name": "BaseBdev1", 00:16:38.168 "uuid": "45d82183-955f-4014-aa1c-6661a267ba22", 00:16:38.168 "is_configured": true, 00:16:38.168 "data_offset": 2048, 00:16:38.168 "data_size": 63488 00:16:38.168 }, 00:16:38.168 { 00:16:38.168 "name": "BaseBdev2", 00:16:38.168 "uuid": "7576fbdb-20c8-4012-8332-a096f79c8758", 00:16:38.168 "is_configured": true, 00:16:38.168 "data_offset": 2048, 00:16:38.168 "data_size": 63488 00:16:38.168 }, 00:16:38.168 { 00:16:38.168 "name": "BaseBdev3", 00:16:38.168 "uuid": "3141f538-a6c8-4f1b-b602-9313d2cf1897", 00:16:38.168 "is_configured": true, 00:16:38.168 "data_offset": 2048, 00:16:38.168 "data_size": 63488 00:16:38.168 } 00:16:38.168 ] 00:16:38.168 } 00:16:38.168 } 00:16:38.168 }' 00:16:38.168 00:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.168 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:38.168 BaseBdev2 00:16:38.168 BaseBdev3' 00:16:38.168 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:38.168 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:38.168 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- 
# jq '.[]' 00:16:38.425 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:38.425 "name": "BaseBdev1", 00:16:38.425 "aliases": [ 00:16:38.425 "45d82183-955f-4014-aa1c-6661a267ba22" 00:16:38.425 ], 00:16:38.425 "product_name": "Malloc disk", 00:16:38.425 "block_size": 512, 00:16:38.425 "num_blocks": 65536, 00:16:38.425 "uuid": "45d82183-955f-4014-aa1c-6661a267ba22", 00:16:38.425 "assigned_rate_limits": { 00:16:38.425 "rw_ios_per_sec": 0, 00:16:38.425 "rw_mbytes_per_sec": 0, 00:16:38.425 "r_mbytes_per_sec": 0, 00:16:38.425 "w_mbytes_per_sec": 0 00:16:38.425 }, 00:16:38.425 "claimed": true, 00:16:38.425 "claim_type": "exclusive_write", 00:16:38.425 "zoned": false, 00:16:38.425 "supported_io_types": { 00:16:38.425 "read": true, 00:16:38.425 "write": true, 00:16:38.425 "unmap": true, 00:16:38.425 "flush": true, 00:16:38.425 "reset": true, 00:16:38.425 "nvme_admin": false, 00:16:38.425 "nvme_io": false, 00:16:38.425 "nvme_io_md": false, 00:16:38.425 "write_zeroes": true, 00:16:38.425 "zcopy": true, 00:16:38.425 "get_zone_info": false, 00:16:38.425 "zone_management": false, 00:16:38.425 "zone_append": false, 00:16:38.425 "compare": false, 00:16:38.426 "compare_and_write": false, 00:16:38.426 "abort": true, 00:16:38.426 "seek_hole": false, 00:16:38.426 "seek_data": false, 00:16:38.426 "copy": true, 00:16:38.426 "nvme_iov_md": false 00:16:38.426 }, 00:16:38.426 "memory_domains": [ 00:16:38.426 { 00:16:38.426 "dma_device_id": "system", 00:16:38.426 "dma_device_type": 1 00:16:38.426 }, 00:16:38.426 { 00:16:38.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.426 "dma_device_type": 2 00:16:38.426 } 00:16:38.426 ], 00:16:38.426 "driver_specific": {} 00:16:38.426 }' 00:16:38.426 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.426 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.426 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:38.426 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.426 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.684 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:38.684 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:38.684 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:38.684 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:38.684 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:38.684 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:38.684 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:38.684 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:38.684 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:38.684 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:38.943 "name": "BaseBdev2", 
00:16:38.943 "aliases": [ 00:16:38.943 "7576fbdb-20c8-4012-8332-a096f79c8758" 00:16:38.943 ], 00:16:38.943 "product_name": "Malloc disk", 00:16:38.943 "block_size": 512, 00:16:38.943 "num_blocks": 65536, 00:16:38.943 "uuid": "7576fbdb-20c8-4012-8332-a096f79c8758", 00:16:38.943 "assigned_rate_limits": { 00:16:38.943 "rw_ios_per_sec": 0, 00:16:38.943 "rw_mbytes_per_sec": 0, 00:16:38.943 "r_mbytes_per_sec": 0, 00:16:38.943 "w_mbytes_per_sec": 0 00:16:38.943 }, 00:16:38.943 "claimed": true, 00:16:38.943 "claim_type": "exclusive_write", 00:16:38.943 "zoned": false, 00:16:38.943 "supported_io_types": { 00:16:38.943 "read": true, 00:16:38.943 "write": true, 00:16:38.943 "unmap": true, 00:16:38.943 "flush": true, 00:16:38.943 "reset": true, 00:16:38.943 "nvme_admin": false, 00:16:38.943 "nvme_io": false, 00:16:38.943 "nvme_io_md": false, 00:16:38.943 "write_zeroes": true, 00:16:38.943 "zcopy": true, 00:16:38.943 "get_zone_info": false, 00:16:38.943 "zone_management": false, 00:16:38.943 "zone_append": false, 00:16:38.943 "compare": false, 00:16:38.943 "compare_and_write": false, 00:16:38.943 "abort": true, 00:16:38.943 "seek_hole": false, 00:16:38.943 "seek_data": false, 00:16:38.943 "copy": true, 00:16:38.943 "nvme_iov_md": false 00:16:38.943 }, 00:16:38.943 "memory_domains": [ 00:16:38.943 { 00:16:38.943 "dma_device_id": "system", 00:16:38.943 "dma_device_type": 1 00:16:38.943 }, 00:16:38.943 { 00:16:38.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.943 "dma_device_type": 2 00:16:38.943 } 00:16:38.943 ], 00:16:38.943 "driver_specific": {} 00:16:38.943 }' 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:38.943 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:39.201 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:39.201 "name": "BaseBdev3", 00:16:39.201 "aliases": [ 00:16:39.201 "3141f538-a6c8-4f1b-b602-9313d2cf1897" 00:16:39.201 ], 00:16:39.201 "product_name": "Malloc disk", 00:16:39.201 
"block_size": 512, 00:16:39.201 "num_blocks": 65536, 00:16:39.201 "uuid": "3141f538-a6c8-4f1b-b602-9313d2cf1897", 00:16:39.201 "assigned_rate_limits": { 00:16:39.201 "rw_ios_per_sec": 0, 00:16:39.201 "rw_mbytes_per_sec": 0, 00:16:39.201 "r_mbytes_per_sec": 0, 00:16:39.201 "w_mbytes_per_sec": 0 00:16:39.201 }, 00:16:39.201 "claimed": true, 00:16:39.201 "claim_type": "exclusive_write", 00:16:39.201 "zoned": false, 00:16:39.201 "supported_io_types": { 00:16:39.201 "read": true, 00:16:39.201 "write": true, 00:16:39.201 "unmap": true, 00:16:39.201 "flush": true, 00:16:39.201 "reset": true, 00:16:39.201 "nvme_admin": false, 00:16:39.201 "nvme_io": false, 00:16:39.201 "nvme_io_md": false, 00:16:39.201 "write_zeroes": true, 00:16:39.201 "zcopy": true, 00:16:39.201 "get_zone_info": false, 00:16:39.201 "zone_management": false, 00:16:39.201 "zone_append": false, 00:16:39.201 "compare": false, 00:16:39.201 "compare_and_write": false, 00:16:39.201 "abort": true, 00:16:39.201 "seek_hole": false, 00:16:39.201 "seek_data": false, 00:16:39.201 "copy": true, 00:16:39.201 "nvme_iov_md": false 00:16:39.201 }, 00:16:39.201 "memory_domains": [ 00:16:39.201 { 00:16:39.201 "dma_device_id": "system", 00:16:39.201 "dma_device_type": 1 00:16:39.201 }, 00:16:39.201 { 00:16:39.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.201 "dma_device_type": 2 00:16:39.201 } 00:16:39.201 ], 00:16:39.201 "driver_specific": {} 00:16:39.201 }' 00:16:39.201 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.201 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.201 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:39.201 00:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.201 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.201 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:39.201 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.201 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.201 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:39.201 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.201 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.201 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:39.201 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:39.458 [2024-07-25 00:00:35.308725] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.458 [2024-07-25 00:00:35.308773] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.458 [2024-07-25 00:00:35.308869] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.717 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.975 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.975 "name": "Existed_Raid", 00:16:39.975 "uuid": "60417b31-e186-45a8-8778-77952f983663", 00:16:39.975 "strip_size_kb": 64, 00:16:39.975 "state": "offline", 00:16:39.975 "raid_level": "raid0", 00:16:39.975 "superblock": true, 00:16:39.975 "num_base_bdevs": 3, 00:16:39.975 "num_base_bdevs_discovered": 2, 00:16:39.975 "num_base_bdevs_operational": 2, 00:16:39.975 "base_bdevs_list": [ 00:16:39.975 { 00:16:39.975 "name": null, 00:16:39.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.976 "is_configured": false, 00:16:39.976 "data_offset": 2048, 00:16:39.976 "data_size": 63488 00:16:39.976 }, 00:16:39.976 { 00:16:39.976 "name": "BaseBdev2", 00:16:39.976 "uuid": "7576fbdb-20c8-4012-8332-a096f79c8758", 00:16:39.976 "is_configured": true, 00:16:39.976 "data_offset": 2048, 00:16:39.976 "data_size": 63488 00:16:39.976 }, 00:16:39.976 { 00:16:39.976 "name": "BaseBdev3", 00:16:39.976 "uuid": "3141f538-a6c8-4f1b-b602-9313d2cf1897", 00:16:39.976 "is_configured": true, 00:16:39.976 "data_offset": 2048, 00:16:39.976 "data_size": 63488 00:16:39.976 } 00:16:39.976 ] 00:16:39.976 }' 00:16:39.976 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.976 00:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.235 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:40.235 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:40.235 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:16:40.235 00:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:40.494 00:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:40.494 00:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.494 00:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:40.752 [2024-07-25 00:00:36.394426] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.752 00:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:40.752 00:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:40.752 00:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:40.752 00:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.010 00:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:41.010 00:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:41.010 00:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:41.268 [2024-07-25 00:00:36.939402] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:41.268 [2024-07-25 00:00:36.939697] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:16:41.268 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:41.268 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:41.268 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.268 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:41.528 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:41.528 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:41.528 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:41.528 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:41.528 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:41.528 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:41.789 BaseBdev2 00:16:41.789 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:41.789 00:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:41.789 00:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:41.789 00:00:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:16:41.789 00:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:41.789 00:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:41.789 00:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.051 00:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:42.311 [ 00:16:42.311 { 00:16:42.311 "name": "BaseBdev2", 00:16:42.311 "aliases": [ 00:16:42.311 "b3213e77-e109-451e-b150-334ddef206f2" 00:16:42.311 ], 00:16:42.311 "product_name": "Malloc disk", 00:16:42.311 "block_size": 512, 00:16:42.311 "num_blocks": 65536, 00:16:42.311 "uuid": "b3213e77-e109-451e-b150-334ddef206f2", 00:16:42.311 "assigned_rate_limits": { 00:16:42.311 "rw_ios_per_sec": 0, 00:16:42.311 "rw_mbytes_per_sec": 0, 00:16:42.311 "r_mbytes_per_sec": 0, 00:16:42.311 "w_mbytes_per_sec": 0 00:16:42.311 }, 00:16:42.311 "claimed": false, 00:16:42.311 "zoned": false, 00:16:42.311 "supported_io_types": { 00:16:42.311 "read": true, 00:16:42.311 "write": true, 00:16:42.311 "unmap": true, 00:16:42.311 "flush": true, 00:16:42.311 "reset": true, 00:16:42.311 "nvme_admin": false, 00:16:42.311 "nvme_io": false, 00:16:42.311 "nvme_io_md": false, 00:16:42.311 "write_zeroes": true, 00:16:42.311 "zcopy": true, 00:16:42.311 "get_zone_info": false, 00:16:42.311 "zone_management": false, 00:16:42.311 "zone_append": false, 00:16:42.311 "compare": false, 00:16:42.311 "compare_and_write": false, 00:16:42.311 "abort": true, 00:16:42.311 "seek_hole": false, 00:16:42.311 "seek_data": false, 00:16:42.311 "copy": true, 00:16:42.311 "nvme_iov_md": false 00:16:42.311 }, 00:16:42.311 "memory_domains": [ 00:16:42.311 { 00:16:42.311 "dma_device_id": "system", 00:16:42.311 "dma_device_type": 1 00:16:42.311 }, 00:16:42.311 { 00:16:42.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.311 "dma_device_type": 2 00:16:42.311 } 00:16:42.311 ], 00:16:42.311 "driver_specific": {} 00:16:42.311 } 00:16:42.311 ] 00:16:42.311 00:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:42.311 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:42.311 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:42.311 00:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:42.569 BaseBdev3 00:16:42.569 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:42.569 00:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:42.569 00:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:42.569 00:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:42.569 00:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:42.569 00:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:42.569 00:00:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.828 00:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:42.828 [ 00:16:42.828 { 00:16:42.828 "name": "BaseBdev3", 00:16:42.828 "aliases": [ 00:16:42.828 "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a" 00:16:42.828 ], 00:16:42.828 "product_name": "Malloc disk", 00:16:42.828 "block_size": 512, 00:16:42.828 "num_blocks": 65536, 00:16:42.828 "uuid": "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a", 00:16:42.828 "assigned_rate_limits": { 00:16:42.828 "rw_ios_per_sec": 0, 00:16:42.828 "rw_mbytes_per_sec": 0, 00:16:42.828 "r_mbytes_per_sec": 0, 00:16:42.828 "w_mbytes_per_sec": 0 00:16:42.828 }, 00:16:42.828 "claimed": false, 00:16:42.828 "zoned": false, 00:16:42.828 "supported_io_types": { 00:16:42.828 "read": true, 00:16:42.828 "write": true, 00:16:42.828 "unmap": true, 00:16:42.828 "flush": true, 00:16:42.828 "reset": true, 00:16:42.828 "nvme_admin": false, 00:16:42.828 "nvme_io": false, 00:16:42.828 "nvme_io_md": false, 00:16:42.828 "write_zeroes": true, 00:16:42.828 "zcopy": true, 00:16:42.828 "get_zone_info": false, 00:16:42.828 "zone_management": false, 00:16:42.828 "zone_append": false, 00:16:42.828 "compare": false, 00:16:42.828 "compare_and_write": false, 00:16:42.828 "abort": true, 00:16:42.828 "seek_hole": false, 00:16:42.828 "seek_data": false, 00:16:42.828 "copy": true, 00:16:42.828 "nvme_iov_md": false 00:16:42.828 }, 00:16:42.828 "memory_domains": [ 00:16:42.828 { 00:16:42.828 "dma_device_id": "system", 00:16:42.828 "dma_device_type": 1 00:16:42.828 }, 00:16:42.828 { 00:16:42.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.828 "dma_device_type": 2 00:16:42.828 } 00:16:42.828 ], 00:16:42.828 "driver_specific": {} 00:16:42.828 } 00:16:42.828 ] 00:16:42.828 00:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:42.828 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:42.828 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:42.828 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:43.087 [2024-07-25 00:00:38.845327] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:43.087 [2024-07-25 00:00:38.845385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:43.087 [2024-07-25 00:00:38.845417] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.087 [2024-07-25 00:00:38.847372] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.087 00:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.345 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.345 "name": "Existed_Raid", 00:16:43.345 "uuid": "e0ced88c-c216-43dc-9f76-752f0f52f33e", 00:16:43.345 "strip_size_kb": 64, 00:16:43.345 "state": "configuring", 00:16:43.345 "raid_level": "raid0", 00:16:43.345 "superblock": true, 00:16:43.345 "num_base_bdevs": 3, 00:16:43.345 "num_base_bdevs_discovered": 2, 00:16:43.345 "num_base_bdevs_operational": 3, 00:16:43.345 "base_bdevs_list": [ 00:16:43.345 { 00:16:43.345 "name": "BaseBdev1", 00:16:43.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.345 "is_configured": false, 00:16:43.346 "data_offset": 0, 00:16:43.346 "data_size": 0 00:16:43.346 }, 00:16:43.346 { 00:16:43.346 "name": "BaseBdev2", 00:16:43.346 "uuid": "b3213e77-e109-451e-b150-334ddef206f2", 00:16:43.346 "is_configured": true, 00:16:43.346 "data_offset": 2048, 00:16:43.346 "data_size": 63488 00:16:43.346 }, 00:16:43.346 { 00:16:43.346 "name": "BaseBdev3", 00:16:43.346 "uuid": "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a", 00:16:43.346 "is_configured": true, 00:16:43.346 "data_offset": 2048, 00:16:43.346 "data_size": 63488 00:16:43.346 } 00:16:43.346 ] 00:16:43.346 }' 00:16:43.346 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.346 00:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.605 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:43.863 [2024-07-25 00:00:39.597734] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:43.863 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:43.863 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.863 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.863 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:43.863 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.863 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:43.863 00:00:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.863 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.863 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.863 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.863 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.863 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.122 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.122 "name": "Existed_Raid", 00:16:44.122 "uuid": "e0ced88c-c216-43dc-9f76-752f0f52f33e", 00:16:44.122 "strip_size_kb": 64, 00:16:44.122 "state": "configuring", 00:16:44.122 "raid_level": "raid0", 00:16:44.122 "superblock": true, 00:16:44.122 "num_base_bdevs": 3, 00:16:44.122 "num_base_bdevs_discovered": 1, 00:16:44.122 "num_base_bdevs_operational": 3, 00:16:44.122 "base_bdevs_list": [ 00:16:44.122 { 00:16:44.122 "name": "BaseBdev1", 00:16:44.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.122 "is_configured": false, 00:16:44.122 "data_offset": 0, 00:16:44.122 "data_size": 0 00:16:44.122 }, 00:16:44.122 { 00:16:44.122 "name": null, 00:16:44.122 "uuid": "b3213e77-e109-451e-b150-334ddef206f2", 00:16:44.122 "is_configured": false, 00:16:44.122 "data_offset": 2048, 00:16:44.122 "data_size": 63488 00:16:44.122 }, 00:16:44.122 { 00:16:44.122 "name": "BaseBdev3", 00:16:44.122 "uuid": "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a", 00:16:44.122 "is_configured": true, 00:16:44.122 "data_offset": 2048, 00:16:44.122 "data_size": 63488 00:16:44.122 } 00:16:44.122 ] 00:16:44.122 }' 00:16:44.122 00:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.122 00:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.443 00:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.443 00:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:44.700 00:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:44.700 00:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:44.961 [2024-07-25 00:00:40.589326] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.961 BaseBdev1 00:16:44.961 00:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:44.961 00:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:44.961 00:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:44.961 00:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:44.961 00:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:44.961 00:00:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:44.961 00:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.221 00:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:45.221 [ 00:16:45.221 { 00:16:45.221 "name": "BaseBdev1", 00:16:45.221 "aliases": [ 00:16:45.221 "f557021a-ce6e-49fc-8efe-a7d69cded8b5" 00:16:45.221 ], 00:16:45.221 "product_name": "Malloc disk", 00:16:45.221 "block_size": 512, 00:16:45.221 "num_blocks": 65536, 00:16:45.221 "uuid": "f557021a-ce6e-49fc-8efe-a7d69cded8b5", 00:16:45.221 "assigned_rate_limits": { 00:16:45.221 "rw_ios_per_sec": 0, 00:16:45.221 "rw_mbytes_per_sec": 0, 00:16:45.221 "r_mbytes_per_sec": 0, 00:16:45.221 "w_mbytes_per_sec": 0 00:16:45.221 }, 00:16:45.221 "claimed": true, 00:16:45.221 "claim_type": "exclusive_write", 00:16:45.221 "zoned": false, 00:16:45.221 "supported_io_types": { 00:16:45.221 "read": true, 00:16:45.221 "write": true, 00:16:45.221 "unmap": true, 00:16:45.221 "flush": true, 00:16:45.221 "reset": true, 00:16:45.221 "nvme_admin": false, 00:16:45.221 "nvme_io": false, 00:16:45.221 "nvme_io_md": false, 00:16:45.221 "write_zeroes": true, 00:16:45.221 "zcopy": true, 00:16:45.221 "get_zone_info": false, 00:16:45.221 "zone_management": false, 00:16:45.221 "zone_append": false, 00:16:45.221 "compare": false, 00:16:45.222 "compare_and_write": false, 00:16:45.222 "abort": true, 00:16:45.222 "seek_hole": false, 00:16:45.222 "seek_data": false, 00:16:45.222 "copy": true, 00:16:45.222 "nvme_iov_md": false 00:16:45.222 }, 00:16:45.222 "memory_domains": [ 00:16:45.222 { 00:16:45.222 "dma_device_id": "system", 00:16:45.222 "dma_device_type": 1 00:16:45.222 }, 00:16:45.222 { 00:16:45.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.222 "dma_device_type": 2 00:16:45.222 } 00:16:45.222 ], 00:16:45.222 "driver_specific": {} 00:16:45.222 } 00:16:45.222 ] 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.222 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.787 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.787 "name": "Existed_Raid", 00:16:45.787 "uuid": "e0ced88c-c216-43dc-9f76-752f0f52f33e", 00:16:45.787 "strip_size_kb": 64, 00:16:45.787 "state": "configuring", 00:16:45.787 "raid_level": "raid0", 00:16:45.787 "superblock": true, 00:16:45.787 "num_base_bdevs": 3, 00:16:45.787 "num_base_bdevs_discovered": 2, 00:16:45.787 "num_base_bdevs_operational": 3, 00:16:45.787 "base_bdevs_list": [ 00:16:45.787 { 00:16:45.787 "name": "BaseBdev1", 00:16:45.787 "uuid": "f557021a-ce6e-49fc-8efe-a7d69cded8b5", 00:16:45.787 "is_configured": true, 00:16:45.787 "data_offset": 2048, 00:16:45.787 "data_size": 63488 00:16:45.787 }, 00:16:45.787 { 00:16:45.787 "name": null, 00:16:45.787 "uuid": "b3213e77-e109-451e-b150-334ddef206f2", 00:16:45.787 "is_configured": false, 00:16:45.787 "data_offset": 2048, 00:16:45.787 "data_size": 63488 00:16:45.787 }, 00:16:45.787 { 00:16:45.787 "name": "BaseBdev3", 00:16:45.787 "uuid": "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a", 00:16:45.787 "is_configured": true, 00:16:45.787 "data_offset": 2048, 00:16:45.787 "data_size": 63488 00:16:45.787 } 00:16:45.787 ] 00:16:45.787 }' 00:16:45.787 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.787 00:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.787 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:45.787 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.044 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:46.044 00:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:46.302 [2024-07-25 00:00:42.069937] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.302 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.559 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.559 "name": "Existed_Raid", 00:16:46.559 "uuid": "e0ced88c-c216-43dc-9f76-752f0f52f33e", 00:16:46.559 "strip_size_kb": 64, 00:16:46.559 "state": "configuring", 00:16:46.559 "raid_level": "raid0", 00:16:46.559 "superblock": true, 00:16:46.560 "num_base_bdevs": 3, 00:16:46.560 "num_base_bdevs_discovered": 1, 00:16:46.560 "num_base_bdevs_operational": 3, 00:16:46.560 "base_bdevs_list": [ 00:16:46.560 { 00:16:46.560 "name": "BaseBdev1", 00:16:46.560 "uuid": "f557021a-ce6e-49fc-8efe-a7d69cded8b5", 00:16:46.560 "is_configured": true, 00:16:46.560 "data_offset": 2048, 00:16:46.560 "data_size": 63488 00:16:46.560 }, 00:16:46.560 { 00:16:46.560 "name": null, 00:16:46.560 "uuid": "b3213e77-e109-451e-b150-334ddef206f2", 00:16:46.560 "is_configured": false, 00:16:46.560 "data_offset": 2048, 00:16:46.560 "data_size": 63488 00:16:46.560 }, 00:16:46.560 { 00:16:46.560 "name": null, 00:16:46.560 "uuid": "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a", 00:16:46.560 "is_configured": false, 00:16:46.560 "data_offset": 2048, 00:16:46.560 "data_size": 63488 00:16:46.560 } 00:16:46.560 ] 00:16:46.560 }' 00:16:46.560 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.560 00:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.131 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.131 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:47.131 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:47.131 00:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:47.397 [2024-07-25 00:00:43.122468] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.397 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:47.397 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.397 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:47.397 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:47.397 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:47.397 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:47.397 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.397 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.397 00:00:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.397 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.397 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.397 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.663 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.663 "name": "Existed_Raid", 00:16:47.663 "uuid": "e0ced88c-c216-43dc-9f76-752f0f52f33e", 00:16:47.663 "strip_size_kb": 64, 00:16:47.663 "state": "configuring", 00:16:47.663 "raid_level": "raid0", 00:16:47.663 "superblock": true, 00:16:47.663 "num_base_bdevs": 3, 00:16:47.663 "num_base_bdevs_discovered": 2, 00:16:47.663 "num_base_bdevs_operational": 3, 00:16:47.663 "base_bdevs_list": [ 00:16:47.663 { 00:16:47.663 "name": "BaseBdev1", 00:16:47.663 "uuid": "f557021a-ce6e-49fc-8efe-a7d69cded8b5", 00:16:47.663 "is_configured": true, 00:16:47.663 "data_offset": 2048, 00:16:47.663 "data_size": 63488 00:16:47.663 }, 00:16:47.663 { 00:16:47.663 "name": null, 00:16:47.663 "uuid": "b3213e77-e109-451e-b150-334ddef206f2", 00:16:47.663 "is_configured": false, 00:16:47.663 "data_offset": 2048, 00:16:47.663 "data_size": 63488 00:16:47.663 }, 00:16:47.663 { 00:16:47.663 "name": "BaseBdev3", 00:16:47.664 "uuid": "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a", 00:16:47.664 "is_configured": true, 00:16:47.664 "data_offset": 2048, 00:16:47.664 "data_size": 63488 00:16:47.664 } 00:16:47.664 ] 00:16:47.664 }' 00:16:47.664 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.664 00:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.925 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.925 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:48.183 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:48.183 00:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:48.442 [2024-07-25 00:00:44.170830] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.442 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:48.442 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:48.442 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.442 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:48.442 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:48.442 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:48.442 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.442 00:00:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.442 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.442 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.442 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.442 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.701 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.701 "name": "Existed_Raid", 00:16:48.701 "uuid": "e0ced88c-c216-43dc-9f76-752f0f52f33e", 00:16:48.701 "strip_size_kb": 64, 00:16:48.701 "state": "configuring", 00:16:48.701 "raid_level": "raid0", 00:16:48.701 "superblock": true, 00:16:48.701 "num_base_bdevs": 3, 00:16:48.701 "num_base_bdevs_discovered": 1, 00:16:48.701 "num_base_bdevs_operational": 3, 00:16:48.701 "base_bdevs_list": [ 00:16:48.701 { 00:16:48.701 "name": null, 00:16:48.701 "uuid": "f557021a-ce6e-49fc-8efe-a7d69cded8b5", 00:16:48.701 "is_configured": false, 00:16:48.701 "data_offset": 2048, 00:16:48.701 "data_size": 63488 00:16:48.701 }, 00:16:48.701 { 00:16:48.701 "name": null, 00:16:48.701 "uuid": "b3213e77-e109-451e-b150-334ddef206f2", 00:16:48.701 "is_configured": false, 00:16:48.701 "data_offset": 2048, 00:16:48.701 "data_size": 63488 00:16:48.701 }, 00:16:48.701 { 00:16:48.701 "name": "BaseBdev3", 00:16:48.701 "uuid": "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a", 00:16:48.701 "is_configured": true, 00:16:48.701 "data_offset": 2048, 00:16:48.701 "data_size": 63488 00:16:48.701 } 00:16:48.701 ] 00:16:48.701 }' 00:16:48.701 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.701 00:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.961 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.961 00:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:49.220 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:49.220 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:49.479 [2024-07-25 00:00:45.221953] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.479 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.737 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.737 "name": "Existed_Raid", 00:16:49.737 "uuid": "e0ced88c-c216-43dc-9f76-752f0f52f33e", 00:16:49.737 "strip_size_kb": 64, 00:16:49.737 "state": "configuring", 00:16:49.737 "raid_level": "raid0", 00:16:49.737 "superblock": true, 00:16:49.737 "num_base_bdevs": 3, 00:16:49.737 "num_base_bdevs_discovered": 2, 00:16:49.737 "num_base_bdevs_operational": 3, 00:16:49.737 "base_bdevs_list": [ 00:16:49.737 { 00:16:49.737 "name": null, 00:16:49.737 "uuid": "f557021a-ce6e-49fc-8efe-a7d69cded8b5", 00:16:49.737 "is_configured": false, 00:16:49.737 "data_offset": 2048, 00:16:49.737 "data_size": 63488 00:16:49.737 }, 00:16:49.737 { 00:16:49.737 "name": "BaseBdev2", 00:16:49.737 "uuid": "b3213e77-e109-451e-b150-334ddef206f2", 00:16:49.737 "is_configured": true, 00:16:49.737 "data_offset": 2048, 00:16:49.737 "data_size": 63488 00:16:49.737 }, 00:16:49.737 { 00:16:49.737 "name": "BaseBdev3", 00:16:49.737 "uuid": "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a", 00:16:49.737 "is_configured": true, 00:16:49.737 "data_offset": 2048, 00:16:49.737 "data_size": 63488 00:16:49.737 } 00:16:49.737 ] 00:16:49.737 }' 00:16:49.737 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.737 00:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.026 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.026 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:50.286 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:50.286 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.286 00:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:50.544 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f557021a-ce6e-49fc-8efe-a7d69cded8b5 00:16:50.803 [2024-07-25 00:00:46.478377] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:50.803 [2024-07-25 00:00:46.478593] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:16:50.803 [2024-07-25 00:00:46.478614] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 190464, blocklen 512 00:16:50.803 [2024-07-25 00:00:46.478726] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 00:16:50.803 [2024-07-25 00:00:46.479156] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:16:50.803 [2024-07-25 00:00:46.479175] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008a80 00:16:50.803 [2024-07-25 00:00:46.479329] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.803 NewBaseBdev 00:16:50.803 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:50.803 00:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:50.803 00:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:50.803 00:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:50.803 00:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:50.803 00:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:50.803 00:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:51.062 00:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:51.320 [ 00:16:51.320 { 00:16:51.320 "name": "NewBaseBdev", 00:16:51.320 "aliases": [ 00:16:51.320 "f557021a-ce6e-49fc-8efe-a7d69cded8b5" 00:16:51.320 ], 00:16:51.320 "product_name": "Malloc disk", 00:16:51.320 "block_size": 512, 00:16:51.320 "num_blocks": 65536, 00:16:51.320 "uuid": "f557021a-ce6e-49fc-8efe-a7d69cded8b5", 00:16:51.320 "assigned_rate_limits": { 00:16:51.320 "rw_ios_per_sec": 0, 00:16:51.320 "rw_mbytes_per_sec": 0, 00:16:51.320 "r_mbytes_per_sec": 0, 00:16:51.320 "w_mbytes_per_sec": 0 00:16:51.320 }, 00:16:51.320 "claimed": true, 00:16:51.320 "claim_type": "exclusive_write", 00:16:51.320 "zoned": false, 00:16:51.320 "supported_io_types": { 00:16:51.320 "read": true, 00:16:51.320 "write": true, 00:16:51.320 "unmap": true, 00:16:51.320 "flush": true, 00:16:51.320 "reset": true, 00:16:51.320 "nvme_admin": false, 00:16:51.320 "nvme_io": false, 00:16:51.321 "nvme_io_md": false, 00:16:51.321 "write_zeroes": true, 00:16:51.321 "zcopy": true, 00:16:51.321 "get_zone_info": false, 00:16:51.321 "zone_management": false, 00:16:51.321 "zone_append": false, 00:16:51.321 "compare": false, 00:16:51.321 "compare_and_write": false, 00:16:51.321 "abort": true, 00:16:51.321 "seek_hole": false, 00:16:51.321 "seek_data": false, 00:16:51.321 "copy": true, 00:16:51.321 "nvme_iov_md": false 00:16:51.321 }, 00:16:51.321 "memory_domains": [ 00:16:51.321 { 00:16:51.321 "dma_device_id": "system", 00:16:51.321 "dma_device_type": 1 00:16:51.321 }, 00:16:51.321 { 00:16:51.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.321 "dma_device_type": 2 00:16:51.321 } 00:16:51.321 ], 00:16:51.321 "driver_specific": {} 00:16:51.321 } 00:16:51.321 ] 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.321 00:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.321 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.321 "name": "Existed_Raid", 00:16:51.321 "uuid": "e0ced88c-c216-43dc-9f76-752f0f52f33e", 00:16:51.321 "strip_size_kb": 64, 00:16:51.321 "state": "online", 00:16:51.321 "raid_level": "raid0", 00:16:51.321 "superblock": true, 00:16:51.321 "num_base_bdevs": 3, 00:16:51.321 "num_base_bdevs_discovered": 3, 00:16:51.321 "num_base_bdevs_operational": 3, 00:16:51.321 "base_bdevs_list": [ 00:16:51.321 { 00:16:51.321 "name": "NewBaseBdev", 00:16:51.321 "uuid": "f557021a-ce6e-49fc-8efe-a7d69cded8b5", 00:16:51.321 "is_configured": true, 00:16:51.321 "data_offset": 2048, 00:16:51.321 "data_size": 63488 00:16:51.321 }, 00:16:51.321 { 00:16:51.321 "name": "BaseBdev2", 00:16:51.321 "uuid": "b3213e77-e109-451e-b150-334ddef206f2", 00:16:51.321 "is_configured": true, 00:16:51.321 "data_offset": 2048, 00:16:51.321 "data_size": 63488 00:16:51.321 }, 00:16:51.321 { 00:16:51.321 "name": "BaseBdev3", 00:16:51.321 "uuid": "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a", 00:16:51.321 "is_configured": true, 00:16:51.321 "data_offset": 2048, 00:16:51.321 "data_size": 63488 00:16:51.321 } 00:16:51.321 ] 00:16:51.321 }' 00:16:51.321 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.321 00:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.888 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:51.888 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:51.888 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:51.888 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:51.888 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:51.888 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:51.888 00:00:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:51.888 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:52.145 [2024-07-25 00:00:47.775135] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.145 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:52.145 "name": "Existed_Raid", 00:16:52.145 "aliases": [ 00:16:52.145 "e0ced88c-c216-43dc-9f76-752f0f52f33e" 00:16:52.145 ], 00:16:52.145 "product_name": "Raid Volume", 00:16:52.145 "block_size": 512, 00:16:52.145 "num_blocks": 190464, 00:16:52.145 "uuid": "e0ced88c-c216-43dc-9f76-752f0f52f33e", 00:16:52.145 "assigned_rate_limits": { 00:16:52.145 "rw_ios_per_sec": 0, 00:16:52.145 "rw_mbytes_per_sec": 0, 00:16:52.145 "r_mbytes_per_sec": 0, 00:16:52.145 "w_mbytes_per_sec": 0 00:16:52.145 }, 00:16:52.145 "claimed": false, 00:16:52.145 "zoned": false, 00:16:52.145 "supported_io_types": { 00:16:52.145 "read": true, 00:16:52.145 "write": true, 00:16:52.145 "unmap": true, 00:16:52.145 "flush": true, 00:16:52.145 "reset": true, 00:16:52.145 "nvme_admin": false, 00:16:52.145 "nvme_io": false, 00:16:52.145 "nvme_io_md": false, 00:16:52.145 "write_zeroes": true, 00:16:52.145 "zcopy": false, 00:16:52.145 "get_zone_info": false, 00:16:52.145 "zone_management": false, 00:16:52.145 "zone_append": false, 00:16:52.145 "compare": false, 00:16:52.145 "compare_and_write": false, 00:16:52.145 "abort": false, 00:16:52.145 "seek_hole": false, 00:16:52.145 "seek_data": false, 00:16:52.145 "copy": false, 00:16:52.145 "nvme_iov_md": false 00:16:52.145 }, 00:16:52.145 "memory_domains": [ 00:16:52.145 { 00:16:52.145 "dma_device_id": "system", 00:16:52.145 "dma_device_type": 1 00:16:52.145 }, 00:16:52.145 { 00:16:52.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.145 "dma_device_type": 2 00:16:52.145 }, 00:16:52.145 { 00:16:52.145 "dma_device_id": "system", 00:16:52.145 "dma_device_type": 1 00:16:52.145 }, 00:16:52.145 { 00:16:52.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.145 "dma_device_type": 2 00:16:52.145 }, 00:16:52.145 { 00:16:52.145 "dma_device_id": "system", 00:16:52.145 "dma_device_type": 1 00:16:52.145 }, 00:16:52.145 { 00:16:52.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.145 "dma_device_type": 2 00:16:52.145 } 00:16:52.145 ], 00:16:52.145 "driver_specific": { 00:16:52.145 "raid": { 00:16:52.145 "uuid": "e0ced88c-c216-43dc-9f76-752f0f52f33e", 00:16:52.145 "strip_size_kb": 64, 00:16:52.146 "state": "online", 00:16:52.146 "raid_level": "raid0", 00:16:52.146 "superblock": true, 00:16:52.146 "num_base_bdevs": 3, 00:16:52.146 "num_base_bdevs_discovered": 3, 00:16:52.146 "num_base_bdevs_operational": 3, 00:16:52.146 "base_bdevs_list": [ 00:16:52.146 { 00:16:52.146 "name": "NewBaseBdev", 00:16:52.146 "uuid": "f557021a-ce6e-49fc-8efe-a7d69cded8b5", 00:16:52.146 "is_configured": true, 00:16:52.146 "data_offset": 2048, 00:16:52.146 "data_size": 63488 00:16:52.146 }, 00:16:52.146 { 00:16:52.146 "name": "BaseBdev2", 00:16:52.146 "uuid": "b3213e77-e109-451e-b150-334ddef206f2", 00:16:52.146 "is_configured": true, 00:16:52.146 "data_offset": 2048, 00:16:52.146 "data_size": 63488 00:16:52.146 }, 00:16:52.146 { 00:16:52.146 "name": "BaseBdev3", 00:16:52.146 "uuid": "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a", 00:16:52.146 "is_configured": true, 00:16:52.146 "data_offset": 2048, 00:16:52.146 "data_size": 
63488 00:16:52.146 } 00:16:52.146 ] 00:16:52.146 } 00:16:52.146 } 00:16:52.146 }' 00:16:52.146 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:52.146 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:52.146 BaseBdev2 00:16:52.146 BaseBdev3' 00:16:52.146 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:52.146 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:52.146 00:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:52.404 "name": "NewBaseBdev", 00:16:52.404 "aliases": [ 00:16:52.404 "f557021a-ce6e-49fc-8efe-a7d69cded8b5" 00:16:52.404 ], 00:16:52.404 "product_name": "Malloc disk", 00:16:52.404 "block_size": 512, 00:16:52.404 "num_blocks": 65536, 00:16:52.404 "uuid": "f557021a-ce6e-49fc-8efe-a7d69cded8b5", 00:16:52.404 "assigned_rate_limits": { 00:16:52.404 "rw_ios_per_sec": 0, 00:16:52.404 "rw_mbytes_per_sec": 0, 00:16:52.404 "r_mbytes_per_sec": 0, 00:16:52.404 "w_mbytes_per_sec": 0 00:16:52.404 }, 00:16:52.404 "claimed": true, 00:16:52.404 "claim_type": "exclusive_write", 00:16:52.404 "zoned": false, 00:16:52.404 "supported_io_types": { 00:16:52.404 "read": true, 00:16:52.404 "write": true, 00:16:52.404 "unmap": true, 00:16:52.404 "flush": true, 00:16:52.404 "reset": true, 00:16:52.404 "nvme_admin": false, 00:16:52.404 "nvme_io": false, 00:16:52.404 "nvme_io_md": false, 00:16:52.404 "write_zeroes": true, 00:16:52.404 "zcopy": true, 00:16:52.404 "get_zone_info": false, 00:16:52.404 "zone_management": false, 00:16:52.404 "zone_append": false, 00:16:52.404 "compare": false, 00:16:52.404 "compare_and_write": false, 00:16:52.404 "abort": true, 00:16:52.404 "seek_hole": false, 00:16:52.404 "seek_data": false, 00:16:52.404 "copy": true, 00:16:52.404 "nvme_iov_md": false 00:16:52.404 }, 00:16:52.404 "memory_domains": [ 00:16:52.404 { 00:16:52.404 "dma_device_id": "system", 00:16:52.404 "dma_device_type": 1 00:16:52.404 }, 00:16:52.404 { 00:16:52.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.404 "dma_device_type": 2 00:16:52.404 } 00:16:52.404 ], 00:16:52.404 "driver_specific": {} 00:16:52.404 }' 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.404 00:00:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:52.404 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:52.663 "name": "BaseBdev2", 00:16:52.663 "aliases": [ 00:16:52.663 "b3213e77-e109-451e-b150-334ddef206f2" 00:16:52.663 ], 00:16:52.663 "product_name": "Malloc disk", 00:16:52.663 "block_size": 512, 00:16:52.663 "num_blocks": 65536, 00:16:52.663 "uuid": "b3213e77-e109-451e-b150-334ddef206f2", 00:16:52.663 "assigned_rate_limits": { 00:16:52.663 "rw_ios_per_sec": 0, 00:16:52.663 "rw_mbytes_per_sec": 0, 00:16:52.663 "r_mbytes_per_sec": 0, 00:16:52.663 "w_mbytes_per_sec": 0 00:16:52.663 }, 00:16:52.663 "claimed": true, 00:16:52.663 "claim_type": "exclusive_write", 00:16:52.663 "zoned": false, 00:16:52.663 "supported_io_types": { 00:16:52.663 "read": true, 00:16:52.663 "write": true, 00:16:52.663 "unmap": true, 00:16:52.663 "flush": true, 00:16:52.663 "reset": true, 00:16:52.663 "nvme_admin": false, 00:16:52.663 "nvme_io": false, 00:16:52.663 "nvme_io_md": false, 00:16:52.663 "write_zeroes": true, 00:16:52.663 "zcopy": true, 00:16:52.663 "get_zone_info": false, 00:16:52.663 "zone_management": false, 00:16:52.663 "zone_append": false, 00:16:52.663 "compare": false, 00:16:52.663 "compare_and_write": false, 00:16:52.663 "abort": true, 00:16:52.663 "seek_hole": false, 00:16:52.663 "seek_data": false, 00:16:52.663 "copy": true, 00:16:52.663 "nvme_iov_md": false 00:16:52.663 }, 00:16:52.663 "memory_domains": [ 00:16:52.663 { 00:16:52.663 "dma_device_id": "system", 00:16:52.663 "dma_device_type": 1 00:16:52.663 }, 00:16:52.663 { 00:16:52.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.663 "dma_device_type": 2 00:16:52.663 } 00:16:52.663 ], 00:16:52.663 "driver_specific": {} 00:16:52.663 }' 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:52.663 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:52.921 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:52.921 "name": "BaseBdev3", 00:16:52.921 "aliases": [ 00:16:52.921 "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a" 00:16:52.921 ], 00:16:52.921 "product_name": "Malloc disk", 00:16:52.921 "block_size": 512, 00:16:52.921 "num_blocks": 65536, 00:16:52.921 "uuid": "16050cf1-7eb2-45b5-bb6d-e9554b2ee39a", 00:16:52.921 "assigned_rate_limits": { 00:16:52.921 "rw_ios_per_sec": 0, 00:16:52.921 "rw_mbytes_per_sec": 0, 00:16:52.921 "r_mbytes_per_sec": 0, 00:16:52.921 "w_mbytes_per_sec": 0 00:16:52.921 }, 00:16:52.921 "claimed": true, 00:16:52.921 "claim_type": "exclusive_write", 00:16:52.921 "zoned": false, 00:16:52.921 "supported_io_types": { 00:16:52.922 "read": true, 00:16:52.922 "write": true, 00:16:52.922 "unmap": true, 00:16:52.922 "flush": true, 00:16:52.922 "reset": true, 00:16:52.922 "nvme_admin": false, 00:16:52.922 "nvme_io": false, 00:16:52.922 "nvme_io_md": false, 00:16:52.922 "write_zeroes": true, 00:16:52.922 "zcopy": true, 00:16:52.922 "get_zone_info": false, 00:16:52.922 "zone_management": false, 00:16:52.922 "zone_append": false, 00:16:52.922 "compare": false, 00:16:52.922 "compare_and_write": false, 00:16:52.922 "abort": true, 00:16:52.922 "seek_hole": false, 00:16:52.922 "seek_data": false, 00:16:52.922 "copy": true, 00:16:52.922 "nvme_iov_md": false 00:16:52.922 }, 00:16:52.922 "memory_domains": [ 00:16:52.922 { 00:16:52.922 "dma_device_id": "system", 00:16:52.922 "dma_device_type": 1 00:16:52.922 }, 00:16:52.922 { 00:16:52.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.922 "dma_device_type": 2 00:16:52.922 } 00:16:52.922 ], 00:16:52.922 "driver_specific": {} 00:16:52.922 }' 00:16:52.922 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.922 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.922 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.922 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.922 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.922 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.922 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.922 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.922 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.922 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.922 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.180 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:16:53.181 00:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:53.181 [2024-07-25 00:00:48.995204] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.181 [2024-07-25 00:00:48.995242] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.181 [2024-07-25 00:00:48.995326] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.181 [2024-07-25 00:00:48.995434] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.181 [2024-07-25 00:00:48.995470] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name Existed_Raid, state offline 00:16:53.181 00:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 80958 00:16:53.181 00:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80958 ']' 00:16:53.181 00:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80958 00:16:53.181 00:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:53.181 00:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:53.181 00:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80958 00:16:53.181 killing process with pid 80958 00:16:53.181 00:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:53.181 00:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:53.181 00:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80958' 00:16:53.181 00:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80958 00:16:53.181 [2024-07-25 00:00:49.044196] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.181 00:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80958 00:16:53.439 [2024-07-25 00:00:49.259395] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.825 00:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:54.825 00:16:54.825 real 0m23.538s 00:16:54.825 user 0m40.992s 00:16:54.825 sys 0m3.797s 00:16:54.825 ************************************ 00:16:54.825 END TEST raid_state_function_test_sb 00:16:54.825 ************************************ 00:16:54.825 00:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:54.825 00:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.825 00:00:50 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:16:54.825 00:00:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:54.825 00:00:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:54.825 00:00:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.825 ************************************ 00:16:54.825 START TEST raid_superblock_test 00:16:54.825 ************************************ 00:16:54.825 00:00:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=81834 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 81834 /var/tmp/spdk-raid.sock 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81834 ']' 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:54.826 00:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.826 [2024-07-25 00:00:50.392423] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:16:54.826 [2024-07-25 00:00:50.392797] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81834 ] 00:16:54.826 [2024-07-25 00:00:50.550207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.084 [2024-07-25 00:00:50.716774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.084 [2024-07-25 00:00:50.880805] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.651 00:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:55.651 00:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:55.651 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:16:55.651 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:55.651 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:16:55.651 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:16:55.651 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:55.651 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:55.651 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:55.651 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:55.651 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:55.909 malloc1 00:16:55.909 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:55.909 [2024-07-25 00:00:51.760154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:55.909 [2024-07-25 00:00:51.760287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.909 [2024-07-25 00:00:51.760323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:16:55.909 [2024-07-25 00:00:51.760338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.909 [2024-07-25 00:00:51.763003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.909 [2024-07-25 00:00:51.763231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:55.909 pt1 00:16:56.167 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:56.167 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:56.167 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:16:56.167 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:16:56.167 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:56.167 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.167 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.167 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.167 00:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:56.167 malloc2 00:16:56.167 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.426 [2024-07-25 00:00:52.213502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.426 [2024-07-25 00:00:52.213594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.426 [2024-07-25 00:00:52.213627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:16:56.426 [2024-07-25 00:00:52.213640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.426 [2024-07-25 00:00:52.216177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.426 [2024-07-25 00:00:52.216219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.426 pt2 00:16:56.426 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:56.426 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:56.426 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:16:56.426 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:16:56.426 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:56.426 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.426 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.426 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.426 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:56.684 malloc3 00:16:56.684 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:56.942 [2024-07-25 00:00:52.662099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:56.942 [2024-07-25 00:00:52.662418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.942 [2024-07-25 00:00:52.662498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:16:56.942 [2024-07-25 00:00:52.662735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.942 [2024-07-25 00:00:52.665230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.942 [2024-07-25 00:00:52.665426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:56.942 pt3 00:16:56.942 
00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:56.942 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:56.942 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:57.201 [2024-07-25 00:00:52.862195] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.201 [2024-07-25 00:00:52.864556] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.201 [2024-07-25 00:00:52.864638] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:57.201 [2024-07-25 00:00:52.864909] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:16:57.201 [2024-07-25 00:00:52.864948] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:57.201 [2024-07-25 00:00:52.865093] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:16:57.201 [2024-07-25 00:00:52.865544] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:16:57.201 [2024-07-25 00:00:52.865572] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:16:57.201 [2024-07-25 00:00:52.865771] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.201 00:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.459 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.459 "name": "raid_bdev1", 00:16:57.459 "uuid": "77e50346-d739-4732-ba51-1c7a90d0f5bb", 00:16:57.459 "strip_size_kb": 64, 00:16:57.459 "state": "online", 00:16:57.459 "raid_level": "raid0", 00:16:57.459 "superblock": true, 00:16:57.459 "num_base_bdevs": 3, 00:16:57.459 "num_base_bdevs_discovered": 3, 00:16:57.459 "num_base_bdevs_operational": 3, 00:16:57.459 "base_bdevs_list": [ 00:16:57.459 { 00:16:57.459 "name": "pt1", 00:16:57.459 "uuid": "00000000-0000-0000-0000-000000000001", 
00:16:57.459 "is_configured": true, 00:16:57.459 "data_offset": 2048, 00:16:57.460 "data_size": 63488 00:16:57.460 }, 00:16:57.460 { 00:16:57.460 "name": "pt2", 00:16:57.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.460 "is_configured": true, 00:16:57.460 "data_offset": 2048, 00:16:57.460 "data_size": 63488 00:16:57.460 }, 00:16:57.460 { 00:16:57.460 "name": "pt3", 00:16:57.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.460 "is_configured": true, 00:16:57.460 "data_offset": 2048, 00:16:57.460 "data_size": 63488 00:16:57.460 } 00:16:57.460 ] 00:16:57.460 }' 00:16:57.460 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.460 00:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.717 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:16:57.717 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:57.717 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:57.717 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:57.717 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:57.717 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:57.717 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:57.717 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:57.976 [2024-07-25 00:00:53.662821] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.976 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:57.976 "name": "raid_bdev1", 00:16:57.976 "aliases": [ 00:16:57.976 "77e50346-d739-4732-ba51-1c7a90d0f5bb" 00:16:57.976 ], 00:16:57.976 "product_name": "Raid Volume", 00:16:57.976 "block_size": 512, 00:16:57.976 "num_blocks": 190464, 00:16:57.976 "uuid": "77e50346-d739-4732-ba51-1c7a90d0f5bb", 00:16:57.976 "assigned_rate_limits": { 00:16:57.976 "rw_ios_per_sec": 0, 00:16:57.976 "rw_mbytes_per_sec": 0, 00:16:57.976 "r_mbytes_per_sec": 0, 00:16:57.976 "w_mbytes_per_sec": 0 00:16:57.976 }, 00:16:57.976 "claimed": false, 00:16:57.976 "zoned": false, 00:16:57.976 "supported_io_types": { 00:16:57.976 "read": true, 00:16:57.976 "write": true, 00:16:57.976 "unmap": true, 00:16:57.976 "flush": true, 00:16:57.976 "reset": true, 00:16:57.976 "nvme_admin": false, 00:16:57.976 "nvme_io": false, 00:16:57.976 "nvme_io_md": false, 00:16:57.976 "write_zeroes": true, 00:16:57.976 "zcopy": false, 00:16:57.976 "get_zone_info": false, 00:16:57.976 "zone_management": false, 00:16:57.976 "zone_append": false, 00:16:57.976 "compare": false, 00:16:57.976 "compare_and_write": false, 00:16:57.976 "abort": false, 00:16:57.976 "seek_hole": false, 00:16:57.976 "seek_data": false, 00:16:57.976 "copy": false, 00:16:57.976 "nvme_iov_md": false 00:16:57.976 }, 00:16:57.976 "memory_domains": [ 00:16:57.976 { 00:16:57.976 "dma_device_id": "system", 00:16:57.976 "dma_device_type": 1 00:16:57.976 }, 00:16:57.976 { 00:16:57.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.976 "dma_device_type": 2 00:16:57.976 }, 00:16:57.976 { 00:16:57.976 "dma_device_id": "system", 00:16:57.976 "dma_device_type": 1 00:16:57.976 }, 
00:16:57.976 { 00:16:57.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.976 "dma_device_type": 2 00:16:57.976 }, 00:16:57.976 { 00:16:57.976 "dma_device_id": "system", 00:16:57.976 "dma_device_type": 1 00:16:57.976 }, 00:16:57.976 { 00:16:57.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.976 "dma_device_type": 2 00:16:57.976 } 00:16:57.976 ], 00:16:57.976 "driver_specific": { 00:16:57.976 "raid": { 00:16:57.976 "uuid": "77e50346-d739-4732-ba51-1c7a90d0f5bb", 00:16:57.976 "strip_size_kb": 64, 00:16:57.976 "state": "online", 00:16:57.976 "raid_level": "raid0", 00:16:57.976 "superblock": true, 00:16:57.976 "num_base_bdevs": 3, 00:16:57.976 "num_base_bdevs_discovered": 3, 00:16:57.976 "num_base_bdevs_operational": 3, 00:16:57.976 "base_bdevs_list": [ 00:16:57.976 { 00:16:57.976 "name": "pt1", 00:16:57.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.976 "is_configured": true, 00:16:57.976 "data_offset": 2048, 00:16:57.976 "data_size": 63488 00:16:57.976 }, 00:16:57.976 { 00:16:57.976 "name": "pt2", 00:16:57.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.976 "is_configured": true, 00:16:57.976 "data_offset": 2048, 00:16:57.976 "data_size": 63488 00:16:57.976 }, 00:16:57.976 { 00:16:57.976 "name": "pt3", 00:16:57.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.976 "is_configured": true, 00:16:57.976 "data_offset": 2048, 00:16:57.976 "data_size": 63488 00:16:57.976 } 00:16:57.976 ] 00:16:57.976 } 00:16:57.976 } 00:16:57.976 }' 00:16:57.976 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.976 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:57.976 pt2 00:16:57.976 pt3' 00:16:57.976 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:57.976 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:57.976 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:58.235 "name": "pt1", 00:16:58.235 "aliases": [ 00:16:58.235 "00000000-0000-0000-0000-000000000001" 00:16:58.235 ], 00:16:58.235 "product_name": "passthru", 00:16:58.235 "block_size": 512, 00:16:58.235 "num_blocks": 65536, 00:16:58.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.235 "assigned_rate_limits": { 00:16:58.235 "rw_ios_per_sec": 0, 00:16:58.235 "rw_mbytes_per_sec": 0, 00:16:58.235 "r_mbytes_per_sec": 0, 00:16:58.235 "w_mbytes_per_sec": 0 00:16:58.235 }, 00:16:58.235 "claimed": true, 00:16:58.235 "claim_type": "exclusive_write", 00:16:58.235 "zoned": false, 00:16:58.235 "supported_io_types": { 00:16:58.235 "read": true, 00:16:58.235 "write": true, 00:16:58.235 "unmap": true, 00:16:58.235 "flush": true, 00:16:58.235 "reset": true, 00:16:58.235 "nvme_admin": false, 00:16:58.235 "nvme_io": false, 00:16:58.235 "nvme_io_md": false, 00:16:58.235 "write_zeroes": true, 00:16:58.235 "zcopy": true, 00:16:58.235 "get_zone_info": false, 00:16:58.235 "zone_management": false, 00:16:58.235 "zone_append": false, 00:16:58.235 "compare": false, 00:16:58.235 "compare_and_write": false, 00:16:58.235 "abort": true, 00:16:58.235 "seek_hole": false, 00:16:58.235 "seek_data": false, 00:16:58.235 "copy": true, 00:16:58.235 "nvme_iov_md": false 
00:16:58.235 }, 00:16:58.235 "memory_domains": [ 00:16:58.235 { 00:16:58.235 "dma_device_id": "system", 00:16:58.235 "dma_device_type": 1 00:16:58.235 }, 00:16:58.235 { 00:16:58.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.235 "dma_device_type": 2 00:16:58.235 } 00:16:58.235 ], 00:16:58.235 "driver_specific": { 00:16:58.235 "passthru": { 00:16:58.235 "name": "pt1", 00:16:58.235 "base_bdev_name": "malloc1" 00:16:58.235 } 00:16:58.235 } 00:16:58.235 }' 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:58.235 00:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:58.494 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:58.494 "name": "pt2", 00:16:58.494 "aliases": [ 00:16:58.494 "00000000-0000-0000-0000-000000000002" 00:16:58.494 ], 00:16:58.494 "product_name": "passthru", 00:16:58.494 "block_size": 512, 00:16:58.494 "num_blocks": 65536, 00:16:58.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.494 "assigned_rate_limits": { 00:16:58.494 "rw_ios_per_sec": 0, 00:16:58.494 "rw_mbytes_per_sec": 0, 00:16:58.494 "r_mbytes_per_sec": 0, 00:16:58.494 "w_mbytes_per_sec": 0 00:16:58.494 }, 00:16:58.494 "claimed": true, 00:16:58.494 "claim_type": "exclusive_write", 00:16:58.494 "zoned": false, 00:16:58.494 "supported_io_types": { 00:16:58.494 "read": true, 00:16:58.494 "write": true, 00:16:58.494 "unmap": true, 00:16:58.494 "flush": true, 00:16:58.494 "reset": true, 00:16:58.495 "nvme_admin": false, 00:16:58.495 "nvme_io": false, 00:16:58.495 "nvme_io_md": false, 00:16:58.495 "write_zeroes": true, 00:16:58.495 "zcopy": true, 00:16:58.495 "get_zone_info": false, 00:16:58.495 "zone_management": false, 00:16:58.495 "zone_append": false, 00:16:58.495 "compare": false, 00:16:58.495 "compare_and_write": false, 00:16:58.495 "abort": true, 00:16:58.495 "seek_hole": false, 00:16:58.495 "seek_data": false, 00:16:58.495 "copy": true, 00:16:58.495 "nvme_iov_md": false 00:16:58.495 }, 00:16:58.495 "memory_domains": [ 00:16:58.495 { 00:16:58.495 "dma_device_id": "system", 00:16:58.495 "dma_device_type": 1 00:16:58.495 }, 
00:16:58.495 { 00:16:58.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.495 "dma_device_type": 2 00:16:58.495 } 00:16:58.495 ], 00:16:58.495 "driver_specific": { 00:16:58.495 "passthru": { 00:16:58.495 "name": "pt2", 00:16:58.495 "base_bdev_name": "malloc2" 00:16:58.495 } 00:16:58.495 } 00:16:58.495 }' 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:58.495 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:58.754 "name": "pt3", 00:16:58.754 "aliases": [ 00:16:58.754 "00000000-0000-0000-0000-000000000003" 00:16:58.754 ], 00:16:58.754 "product_name": "passthru", 00:16:58.754 "block_size": 512, 00:16:58.754 "num_blocks": 65536, 00:16:58.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.754 "assigned_rate_limits": { 00:16:58.754 "rw_ios_per_sec": 0, 00:16:58.754 "rw_mbytes_per_sec": 0, 00:16:58.754 "r_mbytes_per_sec": 0, 00:16:58.754 "w_mbytes_per_sec": 0 00:16:58.754 }, 00:16:58.754 "claimed": true, 00:16:58.754 "claim_type": "exclusive_write", 00:16:58.754 "zoned": false, 00:16:58.754 "supported_io_types": { 00:16:58.754 "read": true, 00:16:58.754 "write": true, 00:16:58.754 "unmap": true, 00:16:58.754 "flush": true, 00:16:58.754 "reset": true, 00:16:58.754 "nvme_admin": false, 00:16:58.754 "nvme_io": false, 00:16:58.754 "nvme_io_md": false, 00:16:58.754 "write_zeroes": true, 00:16:58.754 "zcopy": true, 00:16:58.754 "get_zone_info": false, 00:16:58.754 "zone_management": false, 00:16:58.754 "zone_append": false, 00:16:58.754 "compare": false, 00:16:58.754 "compare_and_write": false, 00:16:58.754 "abort": true, 00:16:58.754 "seek_hole": false, 00:16:58.754 "seek_data": false, 00:16:58.754 "copy": true, 00:16:58.754 "nvme_iov_md": false 00:16:58.754 }, 00:16:58.754 "memory_domains": [ 00:16:58.754 { 00:16:58.754 "dma_device_id": "system", 00:16:58.754 "dma_device_type": 1 00:16:58.754 }, 00:16:58.754 { 00:16:58.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.754 "dma_device_type": 2 00:16:58.754 } 00:16:58.754 ], 00:16:58.754 
"driver_specific": { 00:16:58.754 "passthru": { 00:16:58.754 "name": "pt3", 00:16:58.754 "base_bdev_name": "malloc3" 00:16:58.754 } 00:16:58.754 } 00:16:58.754 }' 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.754 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.013 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:59.013 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:16:59.013 [2024-07-25 00:00:54.875070] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.272 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=77e50346-d739-4732-ba51-1c7a90d0f5bb 00:16:59.272 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 77e50346-d739-4732-ba51-1c7a90d0f5bb ']' 00:16:59.272 00:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:59.272 [2024-07-25 00:00:55.074796] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.272 [2024-07-25 00:00:55.074848] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.272 [2024-07-25 00:00:55.074932] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.272 [2024-07-25 00:00:55.075003] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.272 [2024-07-25 00:00:55.075018] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:16:59.272 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.272 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:16:59.531 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:16:59.531 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:16:59.531 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:59.531 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:59.789 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:59.789 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:00.048 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:00.048 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:00.307 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:00.307 00:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:00.566 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:17:00.566 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:00.566 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:00.566 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:00.566 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.566 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.566 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.566 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.567 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.567 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.567 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.567 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:00.567 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:00.826 [2024-07-25 00:00:56.451267] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:00.826 [2024-07-25 00:00:56.453586] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:00.826 [2024-07-25 00:00:56.453664] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:00.826 [2024-07-25 00:00:56.453775] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:00.826 [2024-07-25 
00:00:56.453915] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:17:00.826 [2024-07-25 00:00:56.453953] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:17:00.826 [2024-07-25 00:00:56.453980] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:00.826 [2024-07-25 00:00:56.453997] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state configuring
00:17:00.826 request:
00:17:00.826 {
00:17:00.826 "name": "raid_bdev1",
00:17:00.826 "raid_level": "raid0",
00:17:00.826 "base_bdevs": [
00:17:00.826 "malloc1",
00:17:00.826 "malloc2",
00:17:00.826 "malloc3"
00:17:00.826 ],
00:17:00.826 "strip_size_kb": 64,
00:17:00.826 "superblock": false,
00:17:00.826 "method": "bdev_raid_create",
00:17:00.826 "req_id": 1
00:17:00.826 }
00:17:00.826 Got JSON-RPC error response
00:17:00.826 response:
00:17:00.826 {
00:17:00.826 "code": -17,
00:17:00.826 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:00.826 }
00:17:00.826 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:17:00.826 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:00.826 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:00.826 00:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:00.826 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:00.826 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]'
00:17:01.100 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev=
00:17:01.100 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']'
00:17:01.100 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:01.100 [2024-07-25 00:00:56.927284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:01.101 [2024-07-25 00:00:56.927374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:01.101 [2024-07-25 00:00:56.927419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680
00:17:01.101 [2024-07-25 00:00:56.927448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:01.101 [2024-07-25 00:00:56.930155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:01.101 [2024-07-25 00:00:56.930393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:01.101 [2024-07-25 00:00:56.930622] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:01.101 [2024-07-25 00:00:56.930851] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:01.101 pt1
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:01.101 00:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:01.377 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:17:01.377 "name": "raid_bdev1",
00:17:01.377 "uuid": "77e50346-d739-4732-ba51-1c7a90d0f5bb",
00:17:01.377 "strip_size_kb": 64,
00:17:01.377 "state": "configuring",
00:17:01.377 "raid_level": "raid0",
00:17:01.377 "superblock": true,
00:17:01.377 "num_base_bdevs": 3,
00:17:01.377 "num_base_bdevs_discovered": 1,
00:17:01.377 "num_base_bdevs_operational": 3,
00:17:01.377 "base_bdevs_list": [
00:17:01.377 {
00:17:01.377 "name": "pt1",
00:17:01.377 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:01.377 "is_configured": true,
00:17:01.377 "data_offset": 2048,
00:17:01.377 "data_size": 63488
00:17:01.377 },
00:17:01.377 {
00:17:01.377 "name": null,
00:17:01.377 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:01.377 "is_configured": false,
00:17:01.377 "data_offset": 2048,
00:17:01.377 "data_size": 63488
00:17:01.377 },
00:17:01.377 {
00:17:01.377 "name": null,
00:17:01.377 "uuid": "00000000-0000-0000-0000-000000000003",
00:17:01.377 "is_configured": false,
00:17:01.377 "data_offset": 2048,
00:17:01.377 "data_size": 63488
00:17:01.377 }
00:17:01.377 ]
00:17:01.377 }'
00:17:01.377 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:17:01.377 00:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.636 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']'
00:17:01.636 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:01.895 [2024-07-25 00:00:57.691552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:01.895 [2024-07-25 00:00:57.691903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:01.895 [2024-07-25 00:00:57.691949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80
00:17:01.895 [2024-07-25 00:00:57.691965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:01.895 [2024-07-25 00:00:57.692473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:01.895 [2024-07-25 00:00:57.692496] vbdev_passthru.c: 710:vbdev_passthru_register:
*NOTICE*: created pt_bdev for: pt2 00:17:01.895 [2024-07-25 00:00:57.692592] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:01.895 [2024-07-25 00:00:57.692626] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.895 pt2 00:17:01.895 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:02.153 [2024-07-25 00:00:57.899680] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.153 00:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.412 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.412 "name": "raid_bdev1", 00:17:02.412 "uuid": "77e50346-d739-4732-ba51-1c7a90d0f5bb", 00:17:02.412 "strip_size_kb": 64, 00:17:02.412 "state": "configuring", 00:17:02.412 "raid_level": "raid0", 00:17:02.412 "superblock": true, 00:17:02.412 "num_base_bdevs": 3, 00:17:02.412 "num_base_bdevs_discovered": 1, 00:17:02.412 "num_base_bdevs_operational": 3, 00:17:02.412 "base_bdevs_list": [ 00:17:02.412 { 00:17:02.412 "name": "pt1", 00:17:02.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.412 "is_configured": true, 00:17:02.412 "data_offset": 2048, 00:17:02.412 "data_size": 63488 00:17:02.412 }, 00:17:02.412 { 00:17:02.412 "name": null, 00:17:02.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.412 "is_configured": false, 00:17:02.412 "data_offset": 2048, 00:17:02.412 "data_size": 63488 00:17:02.412 }, 00:17:02.412 { 00:17:02.412 "name": null, 00:17:02.412 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.412 "is_configured": false, 00:17:02.412 "data_offset": 2048, 00:17:02.412 "data_size": 63488 00:17:02.412 } 00:17:02.412 ] 00:17:02.412 }' 00:17:02.412 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.412 00:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.670 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:17:02.670 00:00:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:02.670 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:02.929 [2024-07-25 00:00:58.697222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:02.929 [2024-07-25 00:00:58.697596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.929 [2024-07-25 00:00:58.697756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:17:02.929 [2024-07-25 00:00:58.697932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.929 [2024-07-25 00:00:58.698489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.929 [2024-07-25 00:00:58.698727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:02.929 [2024-07-25 00:00:58.699004] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:02.929 [2024-07-25 00:00:58.699198] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.929 pt2 00:17:02.929 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:02.929 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:02.929 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:03.188 [2024-07-25 00:00:58.905224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:03.188 [2024-07-25 00:00:58.905322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.188 [2024-07-25 00:00:58.905348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:17:03.188 [2024-07-25 00:00:58.905364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.188 [2024-07-25 00:00:58.905853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.188 [2024-07-25 00:00:58.905889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:03.188 [2024-07-25 00:00:58.906025] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:03.188 [2024-07-25 00:00:58.906116] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:03.188 [2024-07-25 00:00:58.906266] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80 00:17:03.188 [2024-07-25 00:00:58.906299] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:03.188 [2024-07-25 00:00:58.906409] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:17:03.188 [2024-07-25 00:00:58.906766] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80 00:17:03.188 [2024-07-25 00:00:58.906781] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80 00:17:03.188 [2024-07-25 00:00:58.906999] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.188 pt3 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( 
i++ )) 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.188 00:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.447 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.447 "name": "raid_bdev1", 00:17:03.447 "uuid": "77e50346-d739-4732-ba51-1c7a90d0f5bb", 00:17:03.447 "strip_size_kb": 64, 00:17:03.447 "state": "online", 00:17:03.447 "raid_level": "raid0", 00:17:03.447 "superblock": true, 00:17:03.447 "num_base_bdevs": 3, 00:17:03.447 "num_base_bdevs_discovered": 3, 00:17:03.447 "num_base_bdevs_operational": 3, 00:17:03.447 "base_bdevs_list": [ 00:17:03.447 { 00:17:03.447 "name": "pt1", 00:17:03.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.447 "is_configured": true, 00:17:03.447 "data_offset": 2048, 00:17:03.447 "data_size": 63488 00:17:03.447 }, 00:17:03.447 { 00:17:03.447 "name": "pt2", 00:17:03.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.447 "is_configured": true, 00:17:03.447 "data_offset": 2048, 00:17:03.447 "data_size": 63488 00:17:03.447 }, 00:17:03.447 { 00:17:03.447 "name": "pt3", 00:17:03.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.447 "is_configured": true, 00:17:03.447 "data_offset": 2048, 00:17:03.447 "data_size": 63488 00:17:03.447 } 00:17:03.447 ] 00:17:03.447 }' 00:17:03.447 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.447 00:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.706 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:17:03.706 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:03.706 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:03.706 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:03.706 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:03.706 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
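The capacity math in the trace above checks out: each passthru base reports num_blocks 65536 at a 512-byte block size (a 32 MiB malloc bdev, as the per-bdev dumps further below confirm), the superblock variant reserves a data_offset of 2048 blocks, leaving data_size 63488 blocks per base, and raid0 across three bases exposes 3 x 63488 = 190464 blocks, exactly the "blockcnt 190464, blocklen 512" printed when raid_bdev1 configures. A minimal sketch of the same assembly outside the harness, reusing this run's socket path and names (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the explicit bdev_raid_create is only needed on first creation, since re-registering the passthru bdevs lets the examine path reassemble the volume from the on-disk superblocks, which is what the DEBUG lines above show):

  # create a 32 MiB malloc bdev with 512-byte blocks, then wrap it in a passthru bdev
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # ...repeat for malloc2/pt2 and malloc3/pt3...
  # first-time assembly: raid0, 64 KiB strip, with an on-disk superblock (-s)
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s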
00:17:03.706 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:03.706 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:03.965 [2024-07-25 00:00:59.665760] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.965 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:03.965 "name": "raid_bdev1", 00:17:03.965 "aliases": [ 00:17:03.965 "77e50346-d739-4732-ba51-1c7a90d0f5bb" 00:17:03.965 ], 00:17:03.965 "product_name": "Raid Volume", 00:17:03.965 "block_size": 512, 00:17:03.965 "num_blocks": 190464, 00:17:03.965 "uuid": "77e50346-d739-4732-ba51-1c7a90d0f5bb", 00:17:03.965 "assigned_rate_limits": { 00:17:03.965 "rw_ios_per_sec": 0, 00:17:03.965 "rw_mbytes_per_sec": 0, 00:17:03.965 "r_mbytes_per_sec": 0, 00:17:03.965 "w_mbytes_per_sec": 0 00:17:03.965 }, 00:17:03.965 "claimed": false, 00:17:03.965 "zoned": false, 00:17:03.965 "supported_io_types": { 00:17:03.965 "read": true, 00:17:03.965 "write": true, 00:17:03.965 "unmap": true, 00:17:03.965 "flush": true, 00:17:03.965 "reset": true, 00:17:03.965 "nvme_admin": false, 00:17:03.965 "nvme_io": false, 00:17:03.965 "nvme_io_md": false, 00:17:03.965 "write_zeroes": true, 00:17:03.965 "zcopy": false, 00:17:03.965 "get_zone_info": false, 00:17:03.965 "zone_management": false, 00:17:03.965 "zone_append": false, 00:17:03.965 "compare": false, 00:17:03.965 "compare_and_write": false, 00:17:03.965 "abort": false, 00:17:03.965 "seek_hole": false, 00:17:03.965 "seek_data": false, 00:17:03.965 "copy": false, 00:17:03.965 "nvme_iov_md": false 00:17:03.965 }, 00:17:03.965 "memory_domains": [ 00:17:03.965 { 00:17:03.965 "dma_device_id": "system", 00:17:03.965 "dma_device_type": 1 00:17:03.965 }, 00:17:03.965 { 00:17:03.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.965 "dma_device_type": 2 00:17:03.965 }, 00:17:03.965 { 00:17:03.965 "dma_device_id": "system", 00:17:03.965 "dma_device_type": 1 00:17:03.965 }, 00:17:03.965 { 00:17:03.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.965 "dma_device_type": 2 00:17:03.965 }, 00:17:03.965 { 00:17:03.965 "dma_device_id": "system", 00:17:03.965 "dma_device_type": 1 00:17:03.965 }, 00:17:03.965 { 00:17:03.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.965 "dma_device_type": 2 00:17:03.965 } 00:17:03.965 ], 00:17:03.965 "driver_specific": { 00:17:03.965 "raid": { 00:17:03.965 "uuid": "77e50346-d739-4732-ba51-1c7a90d0f5bb", 00:17:03.965 "strip_size_kb": 64, 00:17:03.965 "state": "online", 00:17:03.965 "raid_level": "raid0", 00:17:03.965 "superblock": true, 00:17:03.965 "num_base_bdevs": 3, 00:17:03.965 "num_base_bdevs_discovered": 3, 00:17:03.965 "num_base_bdevs_operational": 3, 00:17:03.965 "base_bdevs_list": [ 00:17:03.965 { 00:17:03.965 "name": "pt1", 00:17:03.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.965 "is_configured": true, 00:17:03.965 "data_offset": 2048, 00:17:03.965 "data_size": 63488 00:17:03.965 }, 00:17:03.965 { 00:17:03.965 "name": "pt2", 00:17:03.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.965 "is_configured": true, 00:17:03.965 "data_offset": 2048, 00:17:03.965 "data_size": 63488 00:17:03.965 }, 00:17:03.965 { 00:17:03.965 "name": "pt3", 00:17:03.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.965 "is_configured": true, 00:17:03.965 "data_offset": 2048, 00:17:03.965 "data_size": 63488 00:17:03.965 } 
00:17:03.965 ] 00:17:03.965 } 00:17:03.965 } 00:17:03.965 }' 00:17:03.965 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.965 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:03.965 pt2 00:17:03.965 pt3' 00:17:03.965 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:03.965 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:03.965 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:04.224 "name": "pt1", 00:17:04.224 "aliases": [ 00:17:04.224 "00000000-0000-0000-0000-000000000001" 00:17:04.224 ], 00:17:04.224 "product_name": "passthru", 00:17:04.224 "block_size": 512, 00:17:04.224 "num_blocks": 65536, 00:17:04.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.224 "assigned_rate_limits": { 00:17:04.224 "rw_ios_per_sec": 0, 00:17:04.224 "rw_mbytes_per_sec": 0, 00:17:04.224 "r_mbytes_per_sec": 0, 00:17:04.224 "w_mbytes_per_sec": 0 00:17:04.224 }, 00:17:04.224 "claimed": true, 00:17:04.224 "claim_type": "exclusive_write", 00:17:04.224 "zoned": false, 00:17:04.224 "supported_io_types": { 00:17:04.224 "read": true, 00:17:04.224 "write": true, 00:17:04.224 "unmap": true, 00:17:04.224 "flush": true, 00:17:04.224 "reset": true, 00:17:04.224 "nvme_admin": false, 00:17:04.224 "nvme_io": false, 00:17:04.224 "nvme_io_md": false, 00:17:04.224 "write_zeroes": true, 00:17:04.224 "zcopy": true, 00:17:04.224 "get_zone_info": false, 00:17:04.224 "zone_management": false, 00:17:04.224 "zone_append": false, 00:17:04.224 "compare": false, 00:17:04.224 "compare_and_write": false, 00:17:04.224 "abort": true, 00:17:04.224 "seek_hole": false, 00:17:04.224 "seek_data": false, 00:17:04.224 "copy": true, 00:17:04.224 "nvme_iov_md": false 00:17:04.224 }, 00:17:04.224 "memory_domains": [ 00:17:04.224 { 00:17:04.224 "dma_device_id": "system", 00:17:04.224 "dma_device_type": 1 00:17:04.224 }, 00:17:04.224 { 00:17:04.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.224 "dma_device_type": 2 00:17:04.224 } 00:17:04.224 ], 00:17:04.224 "driver_specific": { 00:17:04.224 "passthru": { 00:17:04.224 "name": "pt1", 00:17:04.224 "base_bdev_name": "malloc1" 00:17:04.224 } 00:17:04.224 } 00:17:04.224 }' 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:04.224 00:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:04.484 "name": "pt2", 00:17:04.484 "aliases": [ 00:17:04.484 "00000000-0000-0000-0000-000000000002" 00:17:04.484 ], 00:17:04.484 "product_name": "passthru", 00:17:04.484 "block_size": 512, 00:17:04.484 "num_blocks": 65536, 00:17:04.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.484 "assigned_rate_limits": { 00:17:04.484 "rw_ios_per_sec": 0, 00:17:04.484 "rw_mbytes_per_sec": 0, 00:17:04.484 "r_mbytes_per_sec": 0, 00:17:04.484 "w_mbytes_per_sec": 0 00:17:04.484 }, 00:17:04.484 "claimed": true, 00:17:04.484 "claim_type": "exclusive_write", 00:17:04.484 "zoned": false, 00:17:04.484 "supported_io_types": { 00:17:04.484 "read": true, 00:17:04.484 "write": true, 00:17:04.484 "unmap": true, 00:17:04.484 "flush": true, 00:17:04.484 "reset": true, 00:17:04.484 "nvme_admin": false, 00:17:04.484 "nvme_io": false, 00:17:04.484 "nvme_io_md": false, 00:17:04.484 "write_zeroes": true, 00:17:04.484 "zcopy": true, 00:17:04.484 "get_zone_info": false, 00:17:04.484 "zone_management": false, 00:17:04.484 "zone_append": false, 00:17:04.484 "compare": false, 00:17:04.484 "compare_and_write": false, 00:17:04.484 "abort": true, 00:17:04.484 "seek_hole": false, 00:17:04.484 "seek_data": false, 00:17:04.484 "copy": true, 00:17:04.484 "nvme_iov_md": false 00:17:04.484 }, 00:17:04.484 "memory_domains": [ 00:17:04.484 { 00:17:04.484 "dma_device_id": "system", 00:17:04.484 "dma_device_type": 1 00:17:04.484 }, 00:17:04.484 { 00:17:04.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.484 "dma_device_type": 2 00:17:04.484 } 00:17:04.484 ], 00:17:04.484 "driver_specific": { 00:17:04.484 "passthru": { 00:17:04.484 "name": "pt2", 00:17:04.484 "base_bdev_name": "malloc2" 00:17:04.484 } 00:17:04.484 } 00:17:04.484 }' 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:04.484 00:01:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:04.484 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:04.744 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:04.744 "name": "pt3", 00:17:04.744 "aliases": [ 00:17:04.744 "00000000-0000-0000-0000-000000000003" 00:17:04.744 ], 00:17:04.744 "product_name": "passthru", 00:17:04.744 "block_size": 512, 00:17:04.744 "num_blocks": 65536, 00:17:04.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:04.744 "assigned_rate_limits": { 00:17:04.744 "rw_ios_per_sec": 0, 00:17:04.744 "rw_mbytes_per_sec": 0, 00:17:04.744 "r_mbytes_per_sec": 0, 00:17:04.744 "w_mbytes_per_sec": 0 00:17:04.744 }, 00:17:04.744 "claimed": true, 00:17:04.744 "claim_type": "exclusive_write", 00:17:04.744 "zoned": false, 00:17:04.744 "supported_io_types": { 00:17:04.744 "read": true, 00:17:04.744 "write": true, 00:17:04.744 "unmap": true, 00:17:04.744 "flush": true, 00:17:04.744 "reset": true, 00:17:04.744 "nvme_admin": false, 00:17:04.744 "nvme_io": false, 00:17:04.744 "nvme_io_md": false, 00:17:04.744 "write_zeroes": true, 00:17:04.744 "zcopy": true, 00:17:04.744 "get_zone_info": false, 00:17:04.744 "zone_management": false, 00:17:04.744 "zone_append": false, 00:17:04.744 "compare": false, 00:17:04.744 "compare_and_write": false, 00:17:04.744 "abort": true, 00:17:04.744 "seek_hole": false, 00:17:04.744 "seek_data": false, 00:17:04.744 "copy": true, 00:17:04.744 "nvme_iov_md": false 00:17:04.744 }, 00:17:04.744 "memory_domains": [ 00:17:04.744 { 00:17:04.744 "dma_device_id": "system", 00:17:04.744 "dma_device_type": 1 00:17:04.744 }, 00:17:04.744 { 00:17:04.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.744 "dma_device_type": 2 00:17:04.744 } 00:17:04.744 ], 00:17:04.744 "driver_specific": { 00:17:04.744 "passthru": { 00:17:04.744 "name": "pt3", 00:17:04.744 "base_bdev_name": "malloc3" 00:17:04.744 } 00:17:04.744 } 00:17:04.744 }' 00:17:04.744 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:05.003 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:17:05.003 [2024-07-25 00:01:00.866107] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 77e50346-d739-4732-ba51-1c7a90d0f5bb '!=' 77e50346-d739-4732-ba51-1c7a90d0f5bb ']' 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 81834 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81834 ']' 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81834 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81834 00:17:05.262 killing process with pid 81834 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81834' 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81834 00:17:05.262 00:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81834 00:17:05.262 [2024-07-25 00:01:00.915421] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.262 [2024-07-25 00:01:00.915535] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.262 [2024-07-25 00:01:00.915596] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.262 [2024-07-25 00:01:00.915615] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline 00:17:05.262 [2024-07-25 00:01:01.116080] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.640 00:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:17:06.640 00:17:06.640 real 0m11.810s 00:17:06.640 user 0m19.790s 00:17:06.640 sys 0m1.872s 00:17:06.640 00:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:06.640 ************************************ 00:17:06.640 END TEST raid_superblock_test 00:17:06.640 ************************************ 00:17:06.640 00:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.641 00:01:02 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:17:06.641 00:01:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:06.641 00:01:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:06.641 00:01:02 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.641 ************************************ 00:17:06.641 START TEST raid_read_error_test 00:17:06.641 ************************************ 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.HH0zoWusNA 00:17:06.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
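The read-error test that starts here layers an error-injection bdev under each RAID member: every base is a malloc bdev wrapped first by bdev_error (which exposes an EE_-prefixed device) and then by a passthru bdev, so failures can be injected beneath the RAID while bdevperf drives traffic; bdevperf itself is launched with -z, i.e. it stays idle until the perform_tests RPC fires. A condensed sketch of one leg of the stack the trace below builds, with the socket path and names taken from this run:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc            # exposes EE_BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # with BaseBdev1..BaseBdev3 in place:
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
  # and once bdevperf is running:
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure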
00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=82258 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 82258 /var/tmp/spdk-raid.sock 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 82258 ']' 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.641 00:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.641 [2024-07-25 00:01:02.283585] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:17:06.641 [2024-07-25 00:01:02.284056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82258 ] 00:17:06.641 [2024-07-25 00:01:02.448350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.900 [2024-07-25 00:01:02.615804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.159 [2024-07-25 00:01:02.781703] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.418 00:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.418 00:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:07.418 00:01:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:07.418 00:01:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:07.676 BaseBdev1_malloc 00:17:07.676 00:01:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:07.935 true 00:17:07.935 00:01:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:08.193 [2024-07-25 00:01:03.869251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:08.193 [2024-07-25 00:01:03.869352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.193 [2024-07-25 00:01:03.869402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:17:08.193 [2024-07-25 00:01:03.869419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.193 [2024-07-25 00:01:03.872155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.193 
[2024-07-25 00:01:03.872235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:08.193 BaseBdev1 00:17:08.193 00:01:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:08.193 00:01:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:08.451 BaseBdev2_malloc 00:17:08.451 00:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:08.708 true 00:17:08.708 00:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:08.966 [2024-07-25 00:01:04.596509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:08.966 [2024-07-25 00:01:04.596611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.966 [2024-07-25 00:01:04.596642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:17:08.966 [2024-07-25 00:01:04.596660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.966 [2024-07-25 00:01:04.599079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.966 [2024-07-25 00:01:04.599153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:08.966 BaseBdev2 00:17:08.966 00:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:08.966 00:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:08.966 BaseBdev3_malloc 00:17:09.225 00:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:09.225 true 00:17:09.225 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:09.483 [2024-07-25 00:01:05.227811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:09.483 [2024-07-25 00:01:05.228123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.483 [2024-07-25 00:01:05.228165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:17:09.483 [2024-07-25 00:01:05.228184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.483 [2024-07-25 00:01:05.230797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.483 [2024-07-25 00:01:05.230872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:09.483 BaseBdev3 00:17:09.483 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:09.742 [2024-07-25 00:01:05.443969] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:17:09.742 [2024-07-25 00:01:05.446343] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.742 [2024-07-25 00:01:05.446432] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:09.742 [2024-07-25 00:01:05.446687] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:17:09.742 [2024-07-25 00:01:05.446704] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:09.742 [2024-07-25 00:01:05.446907] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:17:09.742 [2024-07-25 00:01:05.447395] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:17:09.742 [2024-07-25 00:01:05.447612] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:17:09.742 [2024-07-25 00:01:05.447907] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.742 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.000 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.000 "name": "raid_bdev1", 00:17:10.000 "uuid": "20fc1a84-48df-4fb4-ade2-98cf08467a31", 00:17:10.000 "strip_size_kb": 64, 00:17:10.000 "state": "online", 00:17:10.001 "raid_level": "raid0", 00:17:10.001 "superblock": true, 00:17:10.001 "num_base_bdevs": 3, 00:17:10.001 "num_base_bdevs_discovered": 3, 00:17:10.001 "num_base_bdevs_operational": 3, 00:17:10.001 "base_bdevs_list": [ 00:17:10.001 { 00:17:10.001 "name": "BaseBdev1", 00:17:10.001 "uuid": "9a798dda-1447-52a3-ab2a-94d496402455", 00:17:10.001 "is_configured": true, 00:17:10.001 "data_offset": 2048, 00:17:10.001 "data_size": 63488 00:17:10.001 }, 00:17:10.001 { 00:17:10.001 "name": "BaseBdev2", 00:17:10.001 "uuid": "b24eb481-e53e-5051-aa51-1546bcceeaf7", 00:17:10.001 "is_configured": true, 00:17:10.001 "data_offset": 2048, 00:17:10.001 "data_size": 63488 00:17:10.001 }, 00:17:10.001 { 00:17:10.001 "name": "BaseBdev3", 00:17:10.001 "uuid": "ad7fe839-8299-5802-8bcf-d5d1c96ddcbb", 00:17:10.001 "is_configured": true, 00:17:10.001 
"data_offset": 2048, 00:17:10.001 "data_size": 63488 00:17:10.001 } 00:17:10.001 ] 00:17:10.001 }' 00:17:10.001 00:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.001 00:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.259 00:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:17:10.259 00:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:10.517 [2024-07-25 00:01:06.141281] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.454 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.711 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.711 "name": "raid_bdev1", 00:17:11.711 "uuid": "20fc1a84-48df-4fb4-ade2-98cf08467a31", 00:17:11.711 "strip_size_kb": 64, 00:17:11.711 "state": "online", 00:17:11.711 "raid_level": "raid0", 00:17:11.711 "superblock": true, 00:17:11.711 "num_base_bdevs": 3, 00:17:11.711 "num_base_bdevs_discovered": 3, 00:17:11.711 "num_base_bdevs_operational": 3, 00:17:11.711 "base_bdevs_list": [ 00:17:11.711 { 00:17:11.711 "name": "BaseBdev1", 00:17:11.711 "uuid": "9a798dda-1447-52a3-ab2a-94d496402455", 00:17:11.711 "is_configured": true, 00:17:11.711 "data_offset": 2048, 00:17:11.711 "data_size": 63488 00:17:11.711 }, 00:17:11.711 { 00:17:11.711 "name": "BaseBdev2", 00:17:11.711 "uuid": "b24eb481-e53e-5051-aa51-1546bcceeaf7", 00:17:11.711 "is_configured": true, 00:17:11.711 "data_offset": 2048, 
00:17:11.711 "data_size": 63488 00:17:11.711 }, 00:17:11.711 { 00:17:11.711 "name": "BaseBdev3", 00:17:11.711 "uuid": "ad7fe839-8299-5802-8bcf-d5d1c96ddcbb", 00:17:11.711 "is_configured": true, 00:17:11.711 "data_offset": 2048, 00:17:11.711 "data_size": 63488 00:17:11.711 } 00:17:11.711 ] 00:17:11.711 }' 00:17:11.711 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.711 00:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.276 00:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:12.276 [2024-07-25 00:01:08.144714] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.276 [2024-07-25 00:01:08.145053] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.536 [2024-07-25 00:01:08.148456] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.536 [2024-07-25 00:01:08.148670] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.536 0 00:17:12.536 [2024-07-25 00:01:08.148782] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.536 [2024-07-25 00:01:08.148804] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:17:12.536 00:01:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 82258 00:17:12.536 00:01:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 82258 ']' 00:17:12.536 00:01:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 82258 00:17:12.536 00:01:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:17:12.536 00:01:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.536 00:01:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82258 00:17:12.536 killing process with pid 82258 00:17:12.536 00:01:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:12.536 00:01:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:12.536 00:01:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82258' 00:17:12.536 00:01:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 82258 00:17:12.536 [2024-07-25 00:01:08.196596] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:12.536 00:01:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 82258 00:17:12.536 [2024-07-25 00:01:08.350130] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.911 00:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.HH0zoWusNA 00:17:13.911 00:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:17:13.911 00:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:17:13.911 ************************************ 00:17:13.911 END TEST raid_read_error_test 00:17:13.911 ************************************ 00:17:13.911 00:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.50 00:17:13.911 00:01:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:17:13.911 00:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:13.911 00:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:13.911 00:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.50 != \0\.\0\0 ]] 00:17:13.911 00:17:13.911 real 0m7.244s 00:17:13.911 user 0m10.682s 00:17:13.911 sys 0m0.935s 00:17:13.911 00:01:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:13.911 00:01:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.911 00:01:09 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:17:13.911 00:01:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:13.911 00:01:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:13.911 00:01:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.911 ************************************ 00:17:13.911 START TEST raid_write_error_test 00:17:13.911 ************************************ 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:17:13.911 00:01:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.KklSe0lShL 00:17:13.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=82443 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 82443 /var/tmp/spdk-raid.sock 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82443 ']' 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.911 00:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.911 [2024-07-25 00:01:09.581460] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
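Note: the fixture that the next stretch of the trace builds can be reproduced by hand against the same RPC socket. Each base device is a malloc bdev wrapped first in a bdev_error device (which registers an EE_<name> bdev and accepts injected failures) and then in a passthru vbdev, so the raid consumes a stable BaseBdevN name. A minimal sketch of that RPC sequence, assuming the rpc.py path, socket, and sizes shown in this log (the loop, shell variables, and comments are added here for readability; everything else is taken verbatim from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3; do
        # 32 MiB malloc backing device with 512-byte blocks => the 65536 blocks reported below
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        # error-injection wrapper; registers EE_BaseBdev${i}_malloc
        "$rpc" -s "$sock" bdev_error_create "BaseBdev${i}_malloc"
        # passthru vbdev on top, giving the raid a stable name
        "$rpc" -s "$sock" bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    # raid0 with a 64 KiB strip and an on-disk superblock (-s), as at bdev_raid.sh@835
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # inject 'failure' errors on write I/O to the first base device, as at bdev_raid.sh@843
    "$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc write failure

Because raid0 has no redundancy, the injected write failure is expected to surface in the bdevperf log, which is why the test later asserts fail_per_s != 0.00.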
00:17:13.911 [2024-07-25 00:01:09.581899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82443 ] 00:17:13.911 [2024-07-25 00:01:09.754280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.169 [2024-07-25 00:01:09.929789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.428 [2024-07-25 00:01:10.106803] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.686 00:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.686 00:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:14.686 00:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:14.686 00:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:14.945 BaseBdev1_malloc 00:17:14.945 00:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:15.203 true 00:17:15.203 00:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:15.461 [2024-07-25 00:01:11.248451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:15.461 [2024-07-25 00:01:11.248549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.461 [2024-07-25 00:01:11.248584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:17:15.461 [2024-07-25 00:01:11.248602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.461 [2024-07-25 00:01:11.251208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.461 [2024-07-25 00:01:11.251262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:15.461 BaseBdev1 00:17:15.461 00:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:15.461 00:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:15.719 BaseBdev2_malloc 00:17:15.719 00:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:15.976 true 00:17:15.976 00:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:16.234 [2024-07-25 00:01:11.966299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:16.234 [2024-07-25 00:01:11.966397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.234 [2024-07-25 00:01:11.966429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:17:16.234 [2024-07-25 
00:01:11.966449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.234 [2024-07-25 00:01:11.969171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.234 [2024-07-25 00:01:11.969436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:16.234 BaseBdev2 00:17:16.234 00:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:16.234 00:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:16.492 BaseBdev3_malloc 00:17:16.492 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:16.751 true 00:17:16.751 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:17.009 [2024-07-25 00:01:12.681601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:17.009 [2024-07-25 00:01:12.681984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.009 [2024-07-25 00:01:12.682068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:17:17.009 [2024-07-25 00:01:12.682278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.009 [2024-07-25 00:01:12.684770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.009 [2024-07-25 00:01:12.685009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:17.009 BaseBdev3 00:17:17.009 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:17.268 [2024-07-25 00:01:12.941897] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:17.268 [2024-07-25 00:01:12.944043] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:17.268 [2024-07-25 00:01:12.944149] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:17.268 [2024-07-25 00:01:12.944415] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:17:17.268 [2024-07-25 00:01:12.944432] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:17.268 [2024-07-25 00:01:12.944553] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:17:17.268 [2024-07-25 00:01:12.944949] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:17:17.268 [2024-07-25 00:01:12.944972] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:17:17.268 [2024-07-25 00:01:12.945137] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.268 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:17.268 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:17.268 00:01:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:17.268 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:17.268 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:17.268 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:17.268 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.268 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.268 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.268 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.268 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.268 00:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.526 00:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.526 "name": "raid_bdev1", 00:17:17.526 "uuid": "8d7a0258-f2da-483a-9d90-4b3168f8fa0b", 00:17:17.526 "strip_size_kb": 64, 00:17:17.526 "state": "online", 00:17:17.526 "raid_level": "raid0", 00:17:17.526 "superblock": true, 00:17:17.526 "num_base_bdevs": 3, 00:17:17.526 "num_base_bdevs_discovered": 3, 00:17:17.526 "num_base_bdevs_operational": 3, 00:17:17.526 "base_bdevs_list": [ 00:17:17.526 { 00:17:17.526 "name": "BaseBdev1", 00:17:17.526 "uuid": "42ad292c-3b3c-5f76-a3b3-5cafe3996029", 00:17:17.526 "is_configured": true, 00:17:17.526 "data_offset": 2048, 00:17:17.526 "data_size": 63488 00:17:17.526 }, 00:17:17.526 { 00:17:17.526 "name": "BaseBdev2", 00:17:17.526 "uuid": "ea282958-c278-530a-a25b-47199044ddc3", 00:17:17.526 "is_configured": true, 00:17:17.526 "data_offset": 2048, 00:17:17.526 "data_size": 63488 00:17:17.526 }, 00:17:17.526 { 00:17:17.526 "name": "BaseBdev3", 00:17:17.526 "uuid": "e8796e90-cbb1-5198-a743-0d0cc4ef1ec8", 00:17:17.526 "is_configured": true, 00:17:17.526 "data_offset": 2048, 00:17:17.526 "data_size": 63488 00:17:17.526 } 00:17:17.526 ] 00:17:17.526 }' 00:17:17.526 00:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.526 00:01:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.783 00:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:17:17.783 00:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:17.783 [2024-07-25 00:01:13.635051] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:17:18.716 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:18.974 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:17:18.974 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:18.974 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:17:18.974 00:01:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:18.974 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:18.974 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:18.974 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:18.974 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:18.975 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:18.975 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.975 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.975 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.975 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.975 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.975 00:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.233 00:01:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.233 "name": "raid_bdev1", 00:17:19.233 "uuid": "8d7a0258-f2da-483a-9d90-4b3168f8fa0b", 00:17:19.233 "strip_size_kb": 64, 00:17:19.233 "state": "online", 00:17:19.233 "raid_level": "raid0", 00:17:19.233 "superblock": true, 00:17:19.233 "num_base_bdevs": 3, 00:17:19.233 "num_base_bdevs_discovered": 3, 00:17:19.233 "num_base_bdevs_operational": 3, 00:17:19.233 "base_bdevs_list": [ 00:17:19.233 { 00:17:19.233 "name": "BaseBdev1", 00:17:19.233 "uuid": "42ad292c-3b3c-5f76-a3b3-5cafe3996029", 00:17:19.233 "is_configured": true, 00:17:19.233 "data_offset": 2048, 00:17:19.233 "data_size": 63488 00:17:19.233 }, 00:17:19.233 { 00:17:19.233 "name": "BaseBdev2", 00:17:19.233 "uuid": "ea282958-c278-530a-a25b-47199044ddc3", 00:17:19.233 "is_configured": true, 00:17:19.233 "data_offset": 2048, 00:17:19.233 "data_size": 63488 00:17:19.233 }, 00:17:19.233 { 00:17:19.233 "name": "BaseBdev3", 00:17:19.233 "uuid": "e8796e90-cbb1-5198-a743-0d0cc4ef1ec8", 00:17:19.233 "is_configured": true, 00:17:19.233 "data_offset": 2048, 00:17:19.233 "data_size": 63488 00:17:19.233 } 00:17:19.233 ] 00:17:19.233 }' 00:17:19.233 00:01:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.233 00:01:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.800 00:01:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:19.800 [2024-07-25 00:01:15.633896] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.800 [2024-07-25 00:01:15.634202] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.800 [2024-07-25 00:01:15.637216] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.800 [2024-07-25 00:01:15.637416] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.800 [2024-07-25 00:01:15.637509] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.800 0 00:17:19.800 [2024-07-25 00:01:15.637729] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:17:19.800 00:01:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 82443 00:17:19.800 00:01:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82443 ']' 00:17:19.800 00:01:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82443 00:17:19.800 00:01:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:17:19.800 00:01:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:19.800 00:01:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82443 00:17:20.059 killing process with pid 82443 00:01:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:20.059 00:01:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:20.059 00:01:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82443' 00:17:20.059 00:01:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82443 00:17:20.059 [2024-07-25 00:01:15.688661] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.059 00:01:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82443 00:17:20.059 [2024-07-25 00:01:15.860058] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.434 00:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.KklSe0lShL 00:17:21.434 00:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:17:21.434 00:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:17:21.434 00:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.50 00:17:21.434 00:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:17:21.434 ************************************ 00:17:21.434 END TEST raid_write_error_test 00:17:21.434 ************************************ 00:17:21.434 00:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:21.434 00:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:21.434 00:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.50 != \0\.\0\0 ]] 00:17:21.434 00:17:21.434 real 0m7.453s 00:17:21.434 user 0m11.069s 00:17:21.434 sys 0m0.916s 00:17:21.434 00:01:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:21.434 00:01:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.434 00:01:17 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:17:21.434 00:01:17 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:17:21.434 00:01:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:21.434 00:01:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:21.434 00:01:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.434 ************************************ 00:17:21.434 START TEST raid_state_function_test
00:17:21.434 ************************************ 00:17:21.434 00:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:17:21.434 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=82622 00:17:21.435 Process raid pid: 82622 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 82622' 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # 
waitforlisten 82622 /var/tmp/spdk-raid.sock 00:17:21.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82622 ']' 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:21.435 00:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.435 [2024-07-25 00:01:17.086616] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:17:21.435 [2024-07-25 00:01:17.086840] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.435 [2024-07-25 00:01:17.263009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.693 [2024-07-25 00:01:17.426276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.951 [2024-07-25 00:01:17.589298] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.209 00:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.210 00:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:22.210 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:22.468 [2024-07-25 00:01:18.242559] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:22.468 [2024-07-25 00:01:18.242663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:22.468 [2024-07-25 00:01:18.242678] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.468 [2024-07-25 00:01:18.242693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.468 [2024-07-25 00:01:18.242703] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:22.468 [2024-07-25 00:01:18.242715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:22.468 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:22.468 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:22.468 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:22.468 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:22.468 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:22.468 00:01:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:22.468 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.468 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.468 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.468 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.468 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.468 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.726 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.726 "name": "Existed_Raid", 00:17:22.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.726 "strip_size_kb": 64, 00:17:22.726 "state": "configuring", 00:17:22.726 "raid_level": "concat", 00:17:22.726 "superblock": false, 00:17:22.726 "num_base_bdevs": 3, 00:17:22.726 "num_base_bdevs_discovered": 0, 00:17:22.726 "num_base_bdevs_operational": 3, 00:17:22.726 "base_bdevs_list": [ 00:17:22.726 { 00:17:22.726 "name": "BaseBdev1", 00:17:22.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.726 "is_configured": false, 00:17:22.726 "data_offset": 0, 00:17:22.726 "data_size": 0 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "name": "BaseBdev2", 00:17:22.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.726 "is_configured": false, 00:17:22.726 "data_offset": 0, 00:17:22.726 "data_size": 0 00:17:22.726 }, 00:17:22.726 { 00:17:22.726 "name": "BaseBdev3", 00:17:22.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.726 "is_configured": false, 00:17:22.726 "data_offset": 0, 00:17:22.726 "data_size": 0 00:17:22.726 } 00:17:22.726 ] 00:17:22.726 }' 00:17:22.726 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.726 00:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 00:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:23.294 [2024-07-25 00:01:19.062641] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:23.294 [2024-07-25 00:01:19.062696] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:23.294 00:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:23.552 [2024-07-25 00:01:19.322666] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.552 [2024-07-25 00:01:19.322748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.552 [2024-07-25 00:01:19.322770] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.552 [2024-07-25 00:01:19.322788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.552 [2024-07-25 00:01:19.322797] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:23.552 [2024-07-25 00:01:19.322809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:23.552 00:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:23.811 [2024-07-25 00:01:19.603484] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.811 BaseBdev1 00:17:23.811 00:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:23.811 00:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:23.811 00:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:23.811 00:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:23.811 00:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:23.811 00:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:23.811 00:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.070 00:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.329 [ 00:17:24.329 { 00:17:24.329 "name": "BaseBdev1", 00:17:24.329 "aliases": [ 00:17:24.329 "f715eceb-52e8-4849-bafa-ae3bd5f38f11" 00:17:24.329 ], 00:17:24.329 "product_name": "Malloc disk", 00:17:24.329 "block_size": 512, 00:17:24.329 "num_blocks": 65536, 00:17:24.329 "uuid": "f715eceb-52e8-4849-bafa-ae3bd5f38f11", 00:17:24.329 "assigned_rate_limits": { 00:17:24.329 "rw_ios_per_sec": 0, 00:17:24.329 "rw_mbytes_per_sec": 0, 00:17:24.329 "r_mbytes_per_sec": 0, 00:17:24.329 "w_mbytes_per_sec": 0 00:17:24.329 }, 00:17:24.329 "claimed": true, 00:17:24.329 "claim_type": "exclusive_write", 00:17:24.329 "zoned": false, 00:17:24.329 "supported_io_types": { 00:17:24.329 "read": true, 00:17:24.329 "write": true, 00:17:24.329 "unmap": true, 00:17:24.329 "flush": true, 00:17:24.329 "reset": true, 00:17:24.329 "nvme_admin": false, 00:17:24.330 "nvme_io": false, 00:17:24.330 "nvme_io_md": false, 00:17:24.330 "write_zeroes": true, 00:17:24.330 "zcopy": true, 00:17:24.330 "get_zone_info": false, 00:17:24.330 "zone_management": false, 00:17:24.330 "zone_append": false, 00:17:24.330 "compare": false, 00:17:24.330 "compare_and_write": false, 00:17:24.330 "abort": true, 00:17:24.330 "seek_hole": false, 00:17:24.330 "seek_data": false, 00:17:24.330 "copy": true, 00:17:24.330 "nvme_iov_md": false 00:17:24.330 }, 00:17:24.330 "memory_domains": [ 00:17:24.330 { 00:17:24.330 "dma_device_id": "system", 00:17:24.330 "dma_device_type": 1 00:17:24.330 }, 00:17:24.330 { 00:17:24.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.330 "dma_device_type": 2 00:17:24.330 } 00:17:24.330 ], 00:17:24.330 "driver_specific": {} 00:17:24.330 } 00:17:24.330 ] 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:24.330 00:01:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.330 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.589 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.589 "name": "Existed_Raid", 00:17:24.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.589 "strip_size_kb": 64, 00:17:24.589 "state": "configuring", 00:17:24.589 "raid_level": "concat", 00:17:24.589 "superblock": false, 00:17:24.589 "num_base_bdevs": 3, 00:17:24.589 "num_base_bdevs_discovered": 1, 00:17:24.589 "num_base_bdevs_operational": 3, 00:17:24.589 "base_bdevs_list": [ 00:17:24.589 { 00:17:24.589 "name": "BaseBdev1", 00:17:24.589 "uuid": "f715eceb-52e8-4849-bafa-ae3bd5f38f11", 00:17:24.589 "is_configured": true, 00:17:24.589 "data_offset": 0, 00:17:24.589 "data_size": 65536 00:17:24.589 }, 00:17:24.589 { 00:17:24.589 "name": "BaseBdev2", 00:17:24.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.589 "is_configured": false, 00:17:24.589 "data_offset": 0, 00:17:24.589 "data_size": 0 00:17:24.589 }, 00:17:24.589 { 00:17:24.589 "name": "BaseBdev3", 00:17:24.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.589 "is_configured": false, 00:17:24.589 "data_offset": 0, 00:17:24.589 "data_size": 0 00:17:24.589 } 00:17:24.589 ] 00:17:24.589 }' 00:17:24.589 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.589 00:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.848 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:25.107 [2024-07-25 00:01:20.896209] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:25.107 [2024-07-25 00:01:20.896510] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:17:25.107 00:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:25.366 [2024-07-25 00:01:21.164321] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.366 
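Note: each verify_raid_bdev_state call in this trace boils down to one RPC plus a jq filter: dump every raid bdev, select the one under test, and compare its .state and member counts. A standalone sketch of that probe under the same socket (the check_raid_state wrapper and its name are added here for illustration; the RPC and jq expression are the ones traced at bdev_raid.sh@126):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    check_raid_state() {
        local name=$1 expected=$2 info
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq ".[] | select(.name == \"$name\")")
        # the volume reports "configuring" until num_base_bdevs_discovered
        # reaches num_base_bdevs_operational, then flips to "online"
        [[ $(jq -r .state <<<"$info") == "$expected" ]]
    }
    check_raid_state Existed_Raid configuring   # true at this point: 1 of 3 base bdevs discovered

The remainder of the test repeats this probe after each bdev_malloc_create, expecting "configuring" with 2 of 3 members discovered and "online" once all three exist.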
[2024-07-25 00:01:21.166556] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.366 [2024-07-25 00:01:21.166779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.366 [2024-07-25 00:01:21.166936] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:25.366 [2024-07-25 00:01:21.166970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.366 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.625 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.625 "name": "Existed_Raid", 00:17:25.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.625 "strip_size_kb": 64, 00:17:25.625 "state": "configuring", 00:17:25.625 "raid_level": "concat", 00:17:25.625 "superblock": false, 00:17:25.625 "num_base_bdevs": 3, 00:17:25.625 "num_base_bdevs_discovered": 1, 00:17:25.625 "num_base_bdevs_operational": 3, 00:17:25.625 "base_bdevs_list": [ 00:17:25.625 { 00:17:25.625 "name": "BaseBdev1", 00:17:25.625 "uuid": "f715eceb-52e8-4849-bafa-ae3bd5f38f11", 00:17:25.625 "is_configured": true, 00:17:25.625 "data_offset": 0, 00:17:25.625 "data_size": 65536 00:17:25.625 }, 00:17:25.625 { 00:17:25.625 "name": "BaseBdev2", 00:17:25.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.625 "is_configured": false, 00:17:25.625 "data_offset": 0, 00:17:25.625 "data_size": 0 00:17:25.625 }, 00:17:25.625 { 00:17:25.625 "name": "BaseBdev3", 00:17:25.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.625 "is_configured": false, 00:17:25.625 "data_offset": 0, 00:17:25.625 "data_size": 0 00:17:25.625 } 00:17:25.625 ] 00:17:25.625 }' 00:17:25.625 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.625 00:01:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.884 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:26.143 [2024-07-25 00:01:21.895483] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.143 BaseBdev2 00:17:26.143 00:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:26.143 00:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:26.143 00:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:26.143 00:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:26.143 00:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:26.143 00:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:26.143 00:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:26.402 00:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:26.661 [ 00:17:26.661 { 00:17:26.661 "name": "BaseBdev2", 00:17:26.661 "aliases": [ 00:17:26.661 "7c185825-142a-4f20-afbc-b701d1edd416" 00:17:26.661 ], 00:17:26.661 "product_name": "Malloc disk", 00:17:26.661 "block_size": 512, 00:17:26.661 "num_blocks": 65536, 00:17:26.661 "uuid": "7c185825-142a-4f20-afbc-b701d1edd416", 00:17:26.661 "assigned_rate_limits": { 00:17:26.661 "rw_ios_per_sec": 0, 00:17:26.661 "rw_mbytes_per_sec": 0, 00:17:26.661 "r_mbytes_per_sec": 0, 00:17:26.661 "w_mbytes_per_sec": 0 00:17:26.661 }, 00:17:26.661 "claimed": true, 00:17:26.661 "claim_type": "exclusive_write", 00:17:26.661 "zoned": false, 00:17:26.661 "supported_io_types": { 00:17:26.661 "read": true, 00:17:26.661 "write": true, 00:17:26.661 "unmap": true, 00:17:26.661 "flush": true, 00:17:26.661 "reset": true, 00:17:26.661 "nvme_admin": false, 00:17:26.661 "nvme_io": false, 00:17:26.661 "nvme_io_md": false, 00:17:26.661 "write_zeroes": true, 00:17:26.661 "zcopy": true, 00:17:26.661 "get_zone_info": false, 00:17:26.661 "zone_management": false, 00:17:26.661 "zone_append": false, 00:17:26.661 "compare": false, 00:17:26.661 "compare_and_write": false, 00:17:26.661 "abort": true, 00:17:26.661 "seek_hole": false, 00:17:26.661 "seek_data": false, 00:17:26.661 "copy": true, 00:17:26.661 "nvme_iov_md": false 00:17:26.662 }, 00:17:26.662 "memory_domains": [ 00:17:26.662 { 00:17:26.662 "dma_device_id": "system", 00:17:26.662 "dma_device_type": 1 00:17:26.662 }, 00:17:26.662 { 00:17:26.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.662 "dma_device_type": 2 00:17:26.662 } 00:17:26.662 ], 00:17:26.662 "driver_specific": {} 00:17:26.662 } 00:17:26.662 ] 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.662 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.921 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.921 "name": "Existed_Raid", 00:17:26.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.921 "strip_size_kb": 64, 00:17:26.921 "state": "configuring", 00:17:26.921 "raid_level": "concat", 00:17:26.921 "superblock": false, 00:17:26.921 "num_base_bdevs": 3, 00:17:26.921 "num_base_bdevs_discovered": 2, 00:17:26.921 "num_base_bdevs_operational": 3, 00:17:26.921 "base_bdevs_list": [ 00:17:26.921 { 00:17:26.921 "name": "BaseBdev1", 00:17:26.921 "uuid": "f715eceb-52e8-4849-bafa-ae3bd5f38f11", 00:17:26.921 "is_configured": true, 00:17:26.921 "data_offset": 0, 00:17:26.921 "data_size": 65536 00:17:26.921 }, 00:17:26.921 { 00:17:26.921 "name": "BaseBdev2", 00:17:26.921 "uuid": "7c185825-142a-4f20-afbc-b701d1edd416", 00:17:26.921 "is_configured": true, 00:17:26.921 "data_offset": 0, 00:17:26.921 "data_size": 65536 00:17:26.921 }, 00:17:26.921 { 00:17:26.921 "name": "BaseBdev3", 00:17:26.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.921 "is_configured": false, 00:17:26.921 "data_offset": 0, 00:17:26.921 "data_size": 0 00:17:26.921 } 00:17:26.921 ] 00:17:26.921 }' 00:17:26.921 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.921 00:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.180 00:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:27.439 [2024-07-25 00:01:23.117861] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:27.439 [2024-07-25 00:01:23.118156] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:17:27.439 [2024-07-25 00:01:23.118306] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:27.439 [2024-07-25 00:01:23.118499] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:17:27.439 [2024-07-25 00:01:23.119077] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:17:27.439 [2024-07-25 00:01:23.119341] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:17:27.439 [2024-07-25 00:01:23.119953] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.439 BaseBdev3 00:17:27.439 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:27.439 00:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:27.439 00:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:27.439 00:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:27.439 00:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:27.439 00:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:27.439 00:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.697 00:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:27.956 [ 00:17:27.956 { 00:17:27.956 "name": "BaseBdev3", 00:17:27.956 "aliases": [ 00:17:27.956 "fedf95df-c3b3-4bf6-b16f-f7686897db43" 00:17:27.956 ], 00:17:27.956 "product_name": "Malloc disk", 00:17:27.956 "block_size": 512, 00:17:27.956 "num_blocks": 65536, 00:17:27.956 "uuid": "fedf95df-c3b3-4bf6-b16f-f7686897db43", 00:17:27.956 "assigned_rate_limits": { 00:17:27.956 "rw_ios_per_sec": 0, 00:17:27.956 "rw_mbytes_per_sec": 0, 00:17:27.956 "r_mbytes_per_sec": 0, 00:17:27.956 "w_mbytes_per_sec": 0 00:17:27.956 }, 00:17:27.956 "claimed": true, 00:17:27.956 "claim_type": "exclusive_write", 00:17:27.956 "zoned": false, 00:17:27.956 "supported_io_types": { 00:17:27.956 "read": true, 00:17:27.956 "write": true, 00:17:27.956 "unmap": true, 00:17:27.956 "flush": true, 00:17:27.956 "reset": true, 00:17:27.956 "nvme_admin": false, 00:17:27.956 "nvme_io": false, 00:17:27.956 "nvme_io_md": false, 00:17:27.956 "write_zeroes": true, 00:17:27.956 "zcopy": true, 00:17:27.956 "get_zone_info": false, 00:17:27.956 "zone_management": false, 00:17:27.956 "zone_append": false, 00:17:27.956 "compare": false, 00:17:27.956 "compare_and_write": false, 00:17:27.956 "abort": true, 00:17:27.956 "seek_hole": false, 00:17:27.956 "seek_data": false, 00:17:27.956 "copy": true, 00:17:27.956 "nvme_iov_md": false 00:17:27.956 }, 00:17:27.956 "memory_domains": [ 00:17:27.956 { 00:17:27.956 "dma_device_id": "system", 00:17:27.956 "dma_device_type": 1 00:17:27.956 }, 00:17:27.956 { 00:17:27.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.956 "dma_device_type": 2 00:17:27.956 } 00:17:27.956 ], 00:17:27.956 "driver_specific": {} 00:17:27.956 } 00:17:27.956 ] 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 
3 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.956 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.214 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.214 "name": "Existed_Raid", 00:17:28.214 "uuid": "44a9c350-0061-4271-8d11-d24d4b2eff97", 00:17:28.214 "strip_size_kb": 64, 00:17:28.214 "state": "online", 00:17:28.214 "raid_level": "concat", 00:17:28.214 "superblock": false, 00:17:28.214 "num_base_bdevs": 3, 00:17:28.214 "num_base_bdevs_discovered": 3, 00:17:28.214 "num_base_bdevs_operational": 3, 00:17:28.214 "base_bdevs_list": [ 00:17:28.214 { 00:17:28.214 "name": "BaseBdev1", 00:17:28.214 "uuid": "f715eceb-52e8-4849-bafa-ae3bd5f38f11", 00:17:28.214 "is_configured": true, 00:17:28.214 "data_offset": 0, 00:17:28.214 "data_size": 65536 00:17:28.214 }, 00:17:28.214 { 00:17:28.214 "name": "BaseBdev2", 00:17:28.214 "uuid": "7c185825-142a-4f20-afbc-b701d1edd416", 00:17:28.214 "is_configured": true, 00:17:28.214 "data_offset": 0, 00:17:28.214 "data_size": 65536 00:17:28.214 }, 00:17:28.214 { 00:17:28.214 "name": "BaseBdev3", 00:17:28.214 "uuid": "fedf95df-c3b3-4bf6-b16f-f7686897db43", 00:17:28.214 "is_configured": true, 00:17:28.214 "data_offset": 0, 00:17:28.214 "data_size": 65536 00:17:28.214 } 00:17:28.214 ] 00:17:28.214 }' 00:17:28.214 00:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.214 00:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.472 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:28.472 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:28.472 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:28.472 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:28.472 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:28.472 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:28.472 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:28.472 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:28.731 [2024-07-25 00:01:24.378707] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.731 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:28.731 "name": "Existed_Raid", 00:17:28.731 "aliases": [ 00:17:28.731 "44a9c350-0061-4271-8d11-d24d4b2eff97" 00:17:28.731 ], 00:17:28.731 "product_name": "Raid Volume", 00:17:28.731 "block_size": 512, 00:17:28.731 "num_blocks": 196608, 00:17:28.731 "uuid": "44a9c350-0061-4271-8d11-d24d4b2eff97", 00:17:28.731 "assigned_rate_limits": { 00:17:28.731 "rw_ios_per_sec": 0, 00:17:28.731 "rw_mbytes_per_sec": 0, 00:17:28.731 "r_mbytes_per_sec": 0, 00:17:28.731 "w_mbytes_per_sec": 0 00:17:28.731 }, 00:17:28.731 "claimed": false, 00:17:28.731 "zoned": false, 00:17:28.731 "supported_io_types": { 00:17:28.731 "read": true, 00:17:28.731 "write": true, 00:17:28.731 "unmap": true, 00:17:28.731 "flush": true, 00:17:28.731 "reset": true, 00:17:28.731 "nvme_admin": false, 00:17:28.731 "nvme_io": false, 00:17:28.731 "nvme_io_md": false, 00:17:28.731 "write_zeroes": true, 00:17:28.731 "zcopy": false, 00:17:28.731 "get_zone_info": false, 00:17:28.731 "zone_management": false, 00:17:28.731 "zone_append": false, 00:17:28.731 "compare": false, 00:17:28.731 "compare_and_write": false, 00:17:28.731 "abort": false, 00:17:28.731 "seek_hole": false, 00:17:28.731 "seek_data": false, 00:17:28.731 "copy": false, 00:17:28.731 "nvme_iov_md": false 00:17:28.731 }, 00:17:28.731 "memory_domains": [ 00:17:28.731 { 00:17:28.731 "dma_device_id": "system", 00:17:28.731 "dma_device_type": 1 00:17:28.731 }, 00:17:28.731 { 00:17:28.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.731 "dma_device_type": 2 00:17:28.731 }, 00:17:28.731 { 00:17:28.731 "dma_device_id": "system", 00:17:28.731 "dma_device_type": 1 00:17:28.731 }, 00:17:28.731 { 00:17:28.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.731 "dma_device_type": 2 00:17:28.731 }, 00:17:28.731 { 00:17:28.731 "dma_device_id": "system", 00:17:28.731 "dma_device_type": 1 00:17:28.731 }, 00:17:28.731 { 00:17:28.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.731 "dma_device_type": 2 00:17:28.731 } 00:17:28.731 ], 00:17:28.731 "driver_specific": { 00:17:28.731 "raid": { 00:17:28.731 "uuid": "44a9c350-0061-4271-8d11-d24d4b2eff97", 00:17:28.731 "strip_size_kb": 64, 00:17:28.731 "state": "online", 00:17:28.731 "raid_level": "concat", 00:17:28.731 "superblock": false, 00:17:28.731 "num_base_bdevs": 3, 00:17:28.731 "num_base_bdevs_discovered": 3, 00:17:28.731 "num_base_bdevs_operational": 3, 00:17:28.731 "base_bdevs_list": [ 00:17:28.731 { 00:17:28.731 "name": "BaseBdev1", 00:17:28.731 "uuid": "f715eceb-52e8-4849-bafa-ae3bd5f38f11", 00:17:28.731 "is_configured": true, 00:17:28.731 "data_offset": 0, 00:17:28.731 "data_size": 65536 00:17:28.731 }, 00:17:28.731 { 00:17:28.731 "name": "BaseBdev2", 00:17:28.731 "uuid": "7c185825-142a-4f20-afbc-b701d1edd416", 00:17:28.731 "is_configured": true, 00:17:28.731 "data_offset": 0, 00:17:28.731 "data_size": 65536 00:17:28.731 }, 00:17:28.731 { 00:17:28.731 "name": "BaseBdev3", 00:17:28.731 "uuid": "fedf95df-c3b3-4bf6-b16f-f7686897db43", 00:17:28.731 "is_configured": true, 00:17:28.731 "data_offset": 0, 00:17:28.731 "data_size": 65536 00:17:28.731 } 00:17:28.731 ] 00:17:28.731 } 00:17:28.731 } 00:17:28.731 }' 00:17:28.731 00:01:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:28.731 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:28.731 BaseBdev2 00:17:28.731 BaseBdev3' 00:17:28.731 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:28.731 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:28.731 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:28.990 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:28.991 "name": "BaseBdev1", 00:17:28.991 "aliases": [ 00:17:28.991 "f715eceb-52e8-4849-bafa-ae3bd5f38f11" 00:17:28.991 ], 00:17:28.991 "product_name": "Malloc disk", 00:17:28.991 "block_size": 512, 00:17:28.991 "num_blocks": 65536, 00:17:28.991 "uuid": "f715eceb-52e8-4849-bafa-ae3bd5f38f11", 00:17:28.991 "assigned_rate_limits": { 00:17:28.991 "rw_ios_per_sec": 0, 00:17:28.991 "rw_mbytes_per_sec": 0, 00:17:28.991 "r_mbytes_per_sec": 0, 00:17:28.991 "w_mbytes_per_sec": 0 00:17:28.991 }, 00:17:28.991 "claimed": true, 00:17:28.991 "claim_type": "exclusive_write", 00:17:28.991 "zoned": false, 00:17:28.991 "supported_io_types": { 00:17:28.991 "read": true, 00:17:28.991 "write": true, 00:17:28.991 "unmap": true, 00:17:28.991 "flush": true, 00:17:28.991 "reset": true, 00:17:28.991 "nvme_admin": false, 00:17:28.991 "nvme_io": false, 00:17:28.991 "nvme_io_md": false, 00:17:28.991 "write_zeroes": true, 00:17:28.991 "zcopy": true, 00:17:28.991 "get_zone_info": false, 00:17:28.991 "zone_management": false, 00:17:28.991 "zone_append": false, 00:17:28.991 "compare": false, 00:17:28.991 "compare_and_write": false, 00:17:28.991 "abort": true, 00:17:28.991 "seek_hole": false, 00:17:28.991 "seek_data": false, 00:17:28.991 "copy": true, 00:17:28.991 "nvme_iov_md": false 00:17:28.991 }, 00:17:28.991 "memory_domains": [ 00:17:28.991 { 00:17:28.991 "dma_device_id": "system", 00:17:28.991 "dma_device_type": 1 00:17:28.991 }, 00:17:28.991 { 00:17:28.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.991 "dma_device_type": 2 00:17:28.991 } 00:17:28.991 ], 00:17:28.991 "driver_specific": {} 00:17:28.991 }' 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:28.991 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:29.250 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:29.250 "name": "BaseBdev2", 00:17:29.250 "aliases": [ 00:17:29.250 "7c185825-142a-4f20-afbc-b701d1edd416" 00:17:29.250 ], 00:17:29.250 "product_name": "Malloc disk", 00:17:29.250 "block_size": 512, 00:17:29.250 "num_blocks": 65536, 00:17:29.250 "uuid": "7c185825-142a-4f20-afbc-b701d1edd416", 00:17:29.250 "assigned_rate_limits": { 00:17:29.250 "rw_ios_per_sec": 0, 00:17:29.250 "rw_mbytes_per_sec": 0, 00:17:29.250 "r_mbytes_per_sec": 0, 00:17:29.250 "w_mbytes_per_sec": 0 00:17:29.250 }, 00:17:29.250 "claimed": true, 00:17:29.250 "claim_type": "exclusive_write", 00:17:29.250 "zoned": false, 00:17:29.250 "supported_io_types": { 00:17:29.250 "read": true, 00:17:29.250 "write": true, 00:17:29.250 "unmap": true, 00:17:29.250 "flush": true, 00:17:29.250 "reset": true, 00:17:29.250 "nvme_admin": false, 00:17:29.250 "nvme_io": false, 00:17:29.250 "nvme_io_md": false, 00:17:29.250 "write_zeroes": true, 00:17:29.250 "zcopy": true, 00:17:29.250 "get_zone_info": false, 00:17:29.250 "zone_management": false, 00:17:29.250 "zone_append": false, 00:17:29.250 "compare": false, 00:17:29.250 "compare_and_write": false, 00:17:29.250 "abort": true, 00:17:29.250 "seek_hole": false, 00:17:29.250 "seek_data": false, 00:17:29.250 "copy": true, 00:17:29.250 "nvme_iov_md": false 00:17:29.250 }, 00:17:29.250 "memory_domains": [ 00:17:29.250 { 00:17:29.250 "dma_device_id": "system", 00:17:29.250 "dma_device_type": 1 00:17:29.250 }, 00:17:29.250 { 00:17:29.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.250 "dma_device_type": 2 00:17:29.250 } 00:17:29.250 ], 00:17:29.250 "driver_specific": {} 00:17:29.250 }' 00:17:29.250 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.250 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.250 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:29.250 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.250 00:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.250 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:29.250 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.250 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.250 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:29.250 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.250 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.250 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:29.250 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:17:29.250 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:29.250 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:29.509 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:29.509 "name": "BaseBdev3", 00:17:29.509 "aliases": [ 00:17:29.509 "fedf95df-c3b3-4bf6-b16f-f7686897db43" 00:17:29.509 ], 00:17:29.509 "product_name": "Malloc disk", 00:17:29.509 "block_size": 512, 00:17:29.509 "num_blocks": 65536, 00:17:29.509 "uuid": "fedf95df-c3b3-4bf6-b16f-f7686897db43", 00:17:29.509 "assigned_rate_limits": { 00:17:29.509 "rw_ios_per_sec": 0, 00:17:29.509 "rw_mbytes_per_sec": 0, 00:17:29.509 "r_mbytes_per_sec": 0, 00:17:29.509 "w_mbytes_per_sec": 0 00:17:29.509 }, 00:17:29.509 "claimed": true, 00:17:29.509 "claim_type": "exclusive_write", 00:17:29.509 "zoned": false, 00:17:29.509 "supported_io_types": { 00:17:29.509 "read": true, 00:17:29.509 "write": true, 00:17:29.509 "unmap": true, 00:17:29.509 "flush": true, 00:17:29.509 "reset": true, 00:17:29.509 "nvme_admin": false, 00:17:29.509 "nvme_io": false, 00:17:29.509 "nvme_io_md": false, 00:17:29.509 "write_zeroes": true, 00:17:29.509 "zcopy": true, 00:17:29.509 "get_zone_info": false, 00:17:29.509 "zone_management": false, 00:17:29.509 "zone_append": false, 00:17:29.509 "compare": false, 00:17:29.510 "compare_and_write": false, 00:17:29.510 "abort": true, 00:17:29.510 "seek_hole": false, 00:17:29.510 "seek_data": false, 00:17:29.510 "copy": true, 00:17:29.510 "nvme_iov_md": false 00:17:29.510 }, 00:17:29.510 "memory_domains": [ 00:17:29.510 { 00:17:29.510 "dma_device_id": "system", 00:17:29.510 "dma_device_type": 1 00:17:29.510 }, 00:17:29.510 { 00:17:29.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.510 "dma_device_type": 2 00:17:29.510 } 00:17:29.510 ], 00:17:29.510 "driver_specific": {} 00:17:29.510 }' 00:17:29.510 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.510 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.510 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:29.510 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.510 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.510 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:29.510 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.510 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.510 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:29.510 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.768 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.768 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:29.768 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:29.768 [2024-07-25 00:01:25.630764] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:17:29.768 [2024-07-25 00:01:25.630807] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.768 [2024-07-25 00:01:25.630921] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.027 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:30.027 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:30.027 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:30.027 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.028 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.287 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.287 "name": "Existed_Raid", 00:17:30.287 "uuid": "44a9c350-0061-4271-8d11-d24d4b2eff97", 00:17:30.287 "strip_size_kb": 64, 00:17:30.287 "state": "offline", 00:17:30.287 "raid_level": "concat", 00:17:30.287 "superblock": false, 00:17:30.287 "num_base_bdevs": 3, 00:17:30.287 "num_base_bdevs_discovered": 2, 00:17:30.287 "num_base_bdevs_operational": 2, 00:17:30.287 "base_bdevs_list": [ 00:17:30.287 { 00:17:30.287 "name": null, 00:17:30.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.287 "is_configured": false, 00:17:30.287 "data_offset": 0, 00:17:30.287 "data_size": 65536 00:17:30.287 }, 00:17:30.287 { 00:17:30.287 "name": "BaseBdev2", 00:17:30.287 "uuid": "7c185825-142a-4f20-afbc-b701d1edd416", 00:17:30.287 "is_configured": true, 00:17:30.287 "data_offset": 0, 00:17:30.287 "data_size": 65536 00:17:30.287 }, 00:17:30.287 { 00:17:30.287 "name": "BaseBdev3", 00:17:30.287 "uuid": "fedf95df-c3b3-4bf6-b16f-f7686897db43", 00:17:30.287 "is_configured": true, 00:17:30.287 "data_offset": 0, 00:17:30.287 "data_size": 65536 00:17:30.287 } 00:17:30.287 ] 00:17:30.287 }' 00:17:30.287 00:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
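The transition just verified — deleting one leg of a concat raid drops it from online to offline — reduces to a single RPC plus a jq filter. A minimal sketch, illustrative rather than part of the captured run, assuming the test target is still serving /var/tmp/spdk-raid.sock:

# Hypothetical standalone check mirroring the verify_raid_bdev_state call above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "Existed_Raid")')
state=$(jq -r '.state' <<<"$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")

# concat carries no redundancy, so losing a base bdev must take the
# array offline and leave only two of three base bdevs discovered.
[[ "$state" == offline && "$discovered" -eq 2 ]]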
00:17:30.287 00:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.546 00:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:30.546 00:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:30.546 00:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.546 00:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:30.805 00:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:30.805 00:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.805 00:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:31.064 [2024-07-25 00:01:26.732270] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:31.064 00:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:31.064 00:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:31.064 00:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.064 00:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:31.323 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:31.323 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:31.323 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:31.583 [2024-07-25 00:01:27.320849] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:31.583 [2024-07-25 00:01:27.320927] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:17:31.583 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:31.583 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:31.583 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.583 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:31.842 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:31.842 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:31.842 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:31.842 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:31.842 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:31.842 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:32.101 
BaseBdev2 00:17:32.101 00:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:32.101 00:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:32.101 00:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:32.101 00:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:32.101 00:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:32.101 00:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:32.101 00:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:32.360 00:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:32.619 [ 00:17:32.619 { 00:17:32.619 "name": "BaseBdev2", 00:17:32.619 "aliases": [ 00:17:32.619 "b8079748-6bb5-438a-a0f7-a80665ee0ab0" 00:17:32.619 ], 00:17:32.619 "product_name": "Malloc disk", 00:17:32.619 "block_size": 512, 00:17:32.619 "num_blocks": 65536, 00:17:32.619 "uuid": "b8079748-6bb5-438a-a0f7-a80665ee0ab0", 00:17:32.619 "assigned_rate_limits": { 00:17:32.619 "rw_ios_per_sec": 0, 00:17:32.619 "rw_mbytes_per_sec": 0, 00:17:32.619 "r_mbytes_per_sec": 0, 00:17:32.619 "w_mbytes_per_sec": 0 00:17:32.619 }, 00:17:32.619 "claimed": false, 00:17:32.619 "zoned": false, 00:17:32.619 "supported_io_types": { 00:17:32.619 "read": true, 00:17:32.619 "write": true, 00:17:32.619 "unmap": true, 00:17:32.619 "flush": true, 00:17:32.619 "reset": true, 00:17:32.619 "nvme_admin": false, 00:17:32.619 "nvme_io": false, 00:17:32.619 "nvme_io_md": false, 00:17:32.619 "write_zeroes": true, 00:17:32.619 "zcopy": true, 00:17:32.619 "get_zone_info": false, 00:17:32.619 "zone_management": false, 00:17:32.619 "zone_append": false, 00:17:32.619 "compare": false, 00:17:32.619 "compare_and_write": false, 00:17:32.619 "abort": true, 00:17:32.619 "seek_hole": false, 00:17:32.619 "seek_data": false, 00:17:32.619 "copy": true, 00:17:32.620 "nvme_iov_md": false 00:17:32.620 }, 00:17:32.620 "memory_domains": [ 00:17:32.620 { 00:17:32.620 "dma_device_id": "system", 00:17:32.620 "dma_device_type": 1 00:17:32.620 }, 00:17:32.620 { 00:17:32.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.620 "dma_device_type": 2 00:17:32.620 } 00:17:32.620 ], 00:17:32.620 "driver_specific": {} 00:17:32.620 } 00:17:32.620 ] 00:17:32.620 00:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:32.620 00:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:32.620 00:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:32.620 00:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:32.879 BaseBdev3 00:17:32.879 00:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:32.879 00:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:32.879 00:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:17:32.879 00:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:32.879 00:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:32.879 00:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:32.879 00:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.137 00:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:33.397 [ 00:17:33.397 { 00:17:33.397 "name": "BaseBdev3", 00:17:33.397 "aliases": [ 00:17:33.397 "e5529a63-41d1-4578-aa56-389589d69a03" 00:17:33.397 ], 00:17:33.397 "product_name": "Malloc disk", 00:17:33.397 "block_size": 512, 00:17:33.397 "num_blocks": 65536, 00:17:33.397 "uuid": "e5529a63-41d1-4578-aa56-389589d69a03", 00:17:33.397 "assigned_rate_limits": { 00:17:33.397 "rw_ios_per_sec": 0, 00:17:33.397 "rw_mbytes_per_sec": 0, 00:17:33.397 "r_mbytes_per_sec": 0, 00:17:33.397 "w_mbytes_per_sec": 0 00:17:33.397 }, 00:17:33.397 "claimed": false, 00:17:33.397 "zoned": false, 00:17:33.397 "supported_io_types": { 00:17:33.397 "read": true, 00:17:33.397 "write": true, 00:17:33.397 "unmap": true, 00:17:33.397 "flush": true, 00:17:33.397 "reset": true, 00:17:33.397 "nvme_admin": false, 00:17:33.397 "nvme_io": false, 00:17:33.397 "nvme_io_md": false, 00:17:33.397 "write_zeroes": true, 00:17:33.397 "zcopy": true, 00:17:33.397 "get_zone_info": false, 00:17:33.397 "zone_management": false, 00:17:33.397 "zone_append": false, 00:17:33.397 "compare": false, 00:17:33.397 "compare_and_write": false, 00:17:33.397 "abort": true, 00:17:33.397 "seek_hole": false, 00:17:33.397 "seek_data": false, 00:17:33.397 "copy": true, 00:17:33.397 "nvme_iov_md": false 00:17:33.397 }, 00:17:33.397 "memory_domains": [ 00:17:33.397 { 00:17:33.397 "dma_device_id": "system", 00:17:33.397 "dma_device_type": 1 00:17:33.397 }, 00:17:33.397 { 00:17:33.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.397 "dma_device_type": 2 00:17:33.397 } 00:17:33.397 ], 00:17:33.397 "driver_specific": {} 00:17:33.397 } 00:17:33.397 ] 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:33.397 [2024-07-25 00:01:29.233509] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.397 [2024-07-25 00:01:29.233569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.397 [2024-07-25 00:01:29.233617] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:33.397 [2024-07-25 00:01:29.235748] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
3 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.397 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.656 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.656 "name": "Existed_Raid", 00:17:33.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.656 "strip_size_kb": 64, 00:17:33.656 "state": "configuring", 00:17:33.656 "raid_level": "concat", 00:17:33.656 "superblock": false, 00:17:33.656 "num_base_bdevs": 3, 00:17:33.656 "num_base_bdevs_discovered": 2, 00:17:33.656 "num_base_bdevs_operational": 3, 00:17:33.656 "base_bdevs_list": [ 00:17:33.656 { 00:17:33.656 "name": "BaseBdev1", 00:17:33.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.656 "is_configured": false, 00:17:33.656 "data_offset": 0, 00:17:33.656 "data_size": 0 00:17:33.656 }, 00:17:33.656 { 00:17:33.656 "name": "BaseBdev2", 00:17:33.656 "uuid": "b8079748-6bb5-438a-a0f7-a80665ee0ab0", 00:17:33.656 "is_configured": true, 00:17:33.656 "data_offset": 0, 00:17:33.656 "data_size": 65536 00:17:33.656 }, 00:17:33.656 { 00:17:33.656 "name": "BaseBdev3", 00:17:33.656 "uuid": "e5529a63-41d1-4578-aa56-389589d69a03", 00:17:33.656 "is_configured": true, 00:17:33.656 "data_offset": 0, 00:17:33.656 "data_size": 65536 00:17:33.656 } 00:17:33.656 ] 00:17:33.656 }' 00:17:33.657 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.657 00:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.915 00:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:34.174 [2024-07-25 00:01:30.033682] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:34.433 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:34.433 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.433 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.433 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:34.433 
00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:34.433 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:34.433 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.433 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.433 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.433 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.433 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.433 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.692 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.692 "name": "Existed_Raid", 00:17:34.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.692 "strip_size_kb": 64, 00:17:34.692 "state": "configuring", 00:17:34.692 "raid_level": "concat", 00:17:34.692 "superblock": false, 00:17:34.692 "num_base_bdevs": 3, 00:17:34.692 "num_base_bdevs_discovered": 1, 00:17:34.692 "num_base_bdevs_operational": 3, 00:17:34.692 "base_bdevs_list": [ 00:17:34.692 { 00:17:34.692 "name": "BaseBdev1", 00:17:34.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.692 "is_configured": false, 00:17:34.692 "data_offset": 0, 00:17:34.692 "data_size": 0 00:17:34.692 }, 00:17:34.692 { 00:17:34.692 "name": null, 00:17:34.692 "uuid": "b8079748-6bb5-438a-a0f7-a80665ee0ab0", 00:17:34.692 "is_configured": false, 00:17:34.692 "data_offset": 0, 00:17:34.692 "data_size": 65536 00:17:34.692 }, 00:17:34.692 { 00:17:34.692 "name": "BaseBdev3", 00:17:34.692 "uuid": "e5529a63-41d1-4578-aa56-389589d69a03", 00:17:34.692 "is_configured": true, 00:17:34.692 "data_offset": 0, 00:17:34.692 "data_size": 65536 00:17:34.692 } 00:17:34.692 ] 00:17:34.692 }' 00:17:34.692 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.692 00:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.950 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.950 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:34.950 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:34.950 00:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:35.209 [2024-07-25 00:01:31.071147] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.209 BaseBdev1 00:17:35.468 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:35.469 00:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:35.469 00:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:35.469 00:01:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local i 00:17:35.469 00:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:35.469 00:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:35.469 00:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:35.469 00:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.728 [ 00:17:35.728 { 00:17:35.728 "name": "BaseBdev1", 00:17:35.728 "aliases": [ 00:17:35.728 "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4" 00:17:35.728 ], 00:17:35.728 "product_name": "Malloc disk", 00:17:35.728 "block_size": 512, 00:17:35.728 "num_blocks": 65536, 00:17:35.728 "uuid": "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4", 00:17:35.728 "assigned_rate_limits": { 00:17:35.728 "rw_ios_per_sec": 0, 00:17:35.728 "rw_mbytes_per_sec": 0, 00:17:35.728 "r_mbytes_per_sec": 0, 00:17:35.728 "w_mbytes_per_sec": 0 00:17:35.728 }, 00:17:35.728 "claimed": true, 00:17:35.728 "claim_type": "exclusive_write", 00:17:35.728 "zoned": false, 00:17:35.728 "supported_io_types": { 00:17:35.728 "read": true, 00:17:35.728 "write": true, 00:17:35.728 "unmap": true, 00:17:35.728 "flush": true, 00:17:35.728 "reset": true, 00:17:35.728 "nvme_admin": false, 00:17:35.728 "nvme_io": false, 00:17:35.728 "nvme_io_md": false, 00:17:35.728 "write_zeroes": true, 00:17:35.728 "zcopy": true, 00:17:35.728 "get_zone_info": false, 00:17:35.728 "zone_management": false, 00:17:35.728 "zone_append": false, 00:17:35.728 "compare": false, 00:17:35.728 "compare_and_write": false, 00:17:35.728 "abort": true, 00:17:35.728 "seek_hole": false, 00:17:35.728 "seek_data": false, 00:17:35.728 "copy": true, 00:17:35.728 "nvme_iov_md": false 00:17:35.728 }, 00:17:35.728 "memory_domains": [ 00:17:35.728 { 00:17:35.728 "dma_device_id": "system", 00:17:35.728 "dma_device_type": 1 00:17:35.728 }, 00:17:35.728 { 00:17:35.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.728 "dma_device_type": 2 00:17:35.728 } 00:17:35.728 ], 00:17:35.728 "driver_specific": {} 00:17:35.728 } 00:17:35.728 ] 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.728 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.986 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.986 "name": "Existed_Raid", 00:17:35.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.986 "strip_size_kb": 64, 00:17:35.986 "state": "configuring", 00:17:35.986 "raid_level": "concat", 00:17:35.986 "superblock": false, 00:17:35.986 "num_base_bdevs": 3, 00:17:35.986 "num_base_bdevs_discovered": 2, 00:17:35.986 "num_base_bdevs_operational": 3, 00:17:35.986 "base_bdevs_list": [ 00:17:35.986 { 00:17:35.986 "name": "BaseBdev1", 00:17:35.986 "uuid": "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4", 00:17:35.987 "is_configured": true, 00:17:35.987 "data_offset": 0, 00:17:35.987 "data_size": 65536 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "name": null, 00:17:35.987 "uuid": "b8079748-6bb5-438a-a0f7-a80665ee0ab0", 00:17:35.987 "is_configured": false, 00:17:35.987 "data_offset": 0, 00:17:35.987 "data_size": 65536 00:17:35.987 }, 00:17:35.987 { 00:17:35.987 "name": "BaseBdev3", 00:17:35.987 "uuid": "e5529a63-41d1-4578-aa56-389589d69a03", 00:17:35.987 "is_configured": true, 00:17:35.987 "data_offset": 0, 00:17:35.987 "data_size": 65536 00:17:35.987 } 00:17:35.987 ] 00:17:35.987 }' 00:17:35.987 00:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.987 00:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.246 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.246 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:36.505 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:36.505 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:36.764 [2024-07-25 00:01:32.535723] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
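The waitforbdev helper traced above amounts to two RPCs after the create call; a sketch under the same socket assumption, not the captured commands themselves:

# Illustrative recreation of the create-and-wait step.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# 32 MiB at a 512-byte block size -> the 65536-block Malloc disk seen in the dumps.
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1

# Let pending examine callbacks finish, then poll for the bdev with a
# 2000 ms timeout; a non-zero exit here means the bdev never appeared.
"$rpc" -s "$sock" bdev_wait_for_examine
"$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null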
00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.764 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.024 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.024 "name": "Existed_Raid", 00:17:37.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.024 "strip_size_kb": 64, 00:17:37.024 "state": "configuring", 00:17:37.024 "raid_level": "concat", 00:17:37.024 "superblock": false, 00:17:37.024 "num_base_bdevs": 3, 00:17:37.024 "num_base_bdevs_discovered": 1, 00:17:37.024 "num_base_bdevs_operational": 3, 00:17:37.024 "base_bdevs_list": [ 00:17:37.024 { 00:17:37.024 "name": "BaseBdev1", 00:17:37.024 "uuid": "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4", 00:17:37.024 "is_configured": true, 00:17:37.024 "data_offset": 0, 00:17:37.024 "data_size": 65536 00:17:37.024 }, 00:17:37.024 { 00:17:37.024 "name": null, 00:17:37.024 "uuid": "b8079748-6bb5-438a-a0f7-a80665ee0ab0", 00:17:37.024 "is_configured": false, 00:17:37.024 "data_offset": 0, 00:17:37.024 "data_size": 65536 00:17:37.024 }, 00:17:37.024 { 00:17:37.024 "name": null, 00:17:37.024 "uuid": "e5529a63-41d1-4578-aa56-389589d69a03", 00:17:37.024 "is_configured": false, 00:17:37.024 "data_offset": 0, 00:17:37.024 "data_size": 65536 00:17:37.024 } 00:17:37.024 ] 00:17:37.024 }' 00:17:37.024 00:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.024 00:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.309 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:37.309 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.574 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:37.574 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:37.833 [2024-07-25 00:01:33.484045] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:37.833 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:37.833 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.833 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:37.833 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:37.833 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.833 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:37.833 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.833 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.833 00:01:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.833 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.833 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.833 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.090 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.090 "name": "Existed_Raid", 00:17:38.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.090 "strip_size_kb": 64, 00:17:38.090 "state": "configuring", 00:17:38.090 "raid_level": "concat", 00:17:38.090 "superblock": false, 00:17:38.090 "num_base_bdevs": 3, 00:17:38.090 "num_base_bdevs_discovered": 2, 00:17:38.090 "num_base_bdevs_operational": 3, 00:17:38.090 "base_bdevs_list": [ 00:17:38.090 { 00:17:38.090 "name": "BaseBdev1", 00:17:38.090 "uuid": "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4", 00:17:38.090 "is_configured": true, 00:17:38.090 "data_offset": 0, 00:17:38.090 "data_size": 65536 00:17:38.090 }, 00:17:38.090 { 00:17:38.090 "name": null, 00:17:38.090 "uuid": "b8079748-6bb5-438a-a0f7-a80665ee0ab0", 00:17:38.090 "is_configured": false, 00:17:38.090 "data_offset": 0, 00:17:38.090 "data_size": 65536 00:17:38.090 }, 00:17:38.090 { 00:17:38.090 "name": "BaseBdev3", 00:17:38.090 "uuid": "e5529a63-41d1-4578-aa56-389589d69a03", 00:17:38.090 "is_configured": true, 00:17:38.090 "data_offset": 0, 00:17:38.090 "data_size": 65536 00:17:38.090 } 00:17:38.090 ] 00:17:38.090 }' 00:17:38.091 00:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.091 00:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.348 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:38.348 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.607 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:38.607 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:38.865 [2024-07-25 00:01:34.520439] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.865 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:38.865 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.865 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:38.865 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:38.865 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.865 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:38.865 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.865 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:17:38.865 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.865 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.866 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.866 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.124 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.124 "name": "Existed_Raid", 00:17:39.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.124 "strip_size_kb": 64, 00:17:39.124 "state": "configuring", 00:17:39.124 "raid_level": "concat", 00:17:39.124 "superblock": false, 00:17:39.124 "num_base_bdevs": 3, 00:17:39.124 "num_base_bdevs_discovered": 1, 00:17:39.124 "num_base_bdevs_operational": 3, 00:17:39.124 "base_bdevs_list": [ 00:17:39.124 { 00:17:39.124 "name": null, 00:17:39.124 "uuid": "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4", 00:17:39.124 "is_configured": false, 00:17:39.125 "data_offset": 0, 00:17:39.125 "data_size": 65536 00:17:39.125 }, 00:17:39.125 { 00:17:39.125 "name": null, 00:17:39.125 "uuid": "b8079748-6bb5-438a-a0f7-a80665ee0ab0", 00:17:39.125 "is_configured": false, 00:17:39.125 "data_offset": 0, 00:17:39.125 "data_size": 65536 00:17:39.125 }, 00:17:39.125 { 00:17:39.125 "name": "BaseBdev3", 00:17:39.125 "uuid": "e5529a63-41d1-4578-aa56-389589d69a03", 00:17:39.125 "is_configured": true, 00:17:39.125 "data_offset": 0, 00:17:39.125 "data_size": 65536 00:17:39.125 } 00:17:39.125 ] 00:17:39.125 }' 00:17:39.125 00:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.125 00:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.384 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.384 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:39.642 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:39.642 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:39.901 [2024-07-25 00:01:35.651408] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.901 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:39.901 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:39.901 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:39.901 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:39.901 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:39.901 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:39.901 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
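The remove/re-add cycle the last few steps walk through can be replayed in four RPCs. Again a sketch rather than the captured run, with the slot index (1 for the BaseBdev2 position) taken from the base_bdevs_list dumps above:

# Illustrative round trip: drop a base bdev from a configuring raid,
# confirm its slot reads unconfigured, then hand the bdev back.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev2
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq '.[0].base_bdevs_list[1].is_configured'    # expected: false

"$rpc" -s "$sock" bdev_raid_add_base_bdev Existed_Raid BaseBdev2
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq '.[0].base_bdevs_list[1].is_configured'    # expected: true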
00:17:39.901 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.901 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.902 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.902 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.902 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.160 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.160 "name": "Existed_Raid", 00:17:40.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.160 "strip_size_kb": 64, 00:17:40.160 "state": "configuring", 00:17:40.160 "raid_level": "concat", 00:17:40.160 "superblock": false, 00:17:40.160 "num_base_bdevs": 3, 00:17:40.160 "num_base_bdevs_discovered": 2, 00:17:40.160 "num_base_bdevs_operational": 3, 00:17:40.160 "base_bdevs_list": [ 00:17:40.160 { 00:17:40.160 "name": null, 00:17:40.160 "uuid": "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4", 00:17:40.160 "is_configured": false, 00:17:40.160 "data_offset": 0, 00:17:40.160 "data_size": 65536 00:17:40.160 }, 00:17:40.160 { 00:17:40.160 "name": "BaseBdev2", 00:17:40.160 "uuid": "b8079748-6bb5-438a-a0f7-a80665ee0ab0", 00:17:40.160 "is_configured": true, 00:17:40.160 "data_offset": 0, 00:17:40.160 "data_size": 65536 00:17:40.160 }, 00:17:40.160 { 00:17:40.160 "name": "BaseBdev3", 00:17:40.160 "uuid": "e5529a63-41d1-4578-aa56-389589d69a03", 00:17:40.160 "is_configured": true, 00:17:40.160 "data_offset": 0, 00:17:40.160 "data_size": 65536 00:17:40.160 } 00:17:40.160 ] 00:17:40.160 }' 00:17:40.160 00:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.160 00:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.418 00:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.418 00:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:40.676 00:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:40.676 00:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.676 00:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:40.934 00:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c6c8e6c6-5b85-471d-b4f8-1e1562b15af4 00:17:41.193 [2024-07-25 00:01:37.057146] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:41.193 [2024-07-25 00:01:37.057223] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:17:41.193 [2024-07-25 00:01:37.057237] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:41.193 [2024-07-25 00:01:37.057338] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 00:17:41.193 [2024-07-25 
00:01:37.057733] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:17:41.193 [2024-07-25 00:01:37.057777] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008a80 00:17:41.193 [2024-07-25 00:01:37.058091] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.193 NewBaseBdev 00:17:41.452 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:41.452 00:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:41.452 00:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:41.452 00:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:41.452 00:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:41.452 00:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:41.452 00:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:41.452 00:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:41.711 [ 00:17:41.711 { 00:17:41.711 "name": "NewBaseBdev", 00:17:41.711 "aliases": [ 00:17:41.711 "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4" 00:17:41.711 ], 00:17:41.711 "product_name": "Malloc disk", 00:17:41.711 "block_size": 512, 00:17:41.711 "num_blocks": 65536, 00:17:41.711 "uuid": "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4", 00:17:41.711 "assigned_rate_limits": { 00:17:41.711 "rw_ios_per_sec": 0, 00:17:41.711 "rw_mbytes_per_sec": 0, 00:17:41.711 "r_mbytes_per_sec": 0, 00:17:41.711 "w_mbytes_per_sec": 0 00:17:41.711 }, 00:17:41.711 "claimed": true, 00:17:41.711 "claim_type": "exclusive_write", 00:17:41.711 "zoned": false, 00:17:41.711 "supported_io_types": { 00:17:41.711 "read": true, 00:17:41.711 "write": true, 00:17:41.711 "unmap": true, 00:17:41.711 "flush": true, 00:17:41.711 "reset": true, 00:17:41.711 "nvme_admin": false, 00:17:41.711 "nvme_io": false, 00:17:41.711 "nvme_io_md": false, 00:17:41.711 "write_zeroes": true, 00:17:41.711 "zcopy": true, 00:17:41.711 "get_zone_info": false, 00:17:41.711 "zone_management": false, 00:17:41.711 "zone_append": false, 00:17:41.711 "compare": false, 00:17:41.711 "compare_and_write": false, 00:17:41.711 "abort": true, 00:17:41.711 "seek_hole": false, 00:17:41.711 "seek_data": false, 00:17:41.711 "copy": true, 00:17:41.711 "nvme_iov_md": false 00:17:41.711 }, 00:17:41.711 "memory_domains": [ 00:17:41.711 { 00:17:41.711 "dma_device_id": "system", 00:17:41.711 "dma_device_type": 1 00:17:41.711 }, 00:17:41.711 { 00:17:41.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.711 "dma_device_type": 2 00:17:41.711 } 00:17:41.711 ], 00:17:41.711 "driver_specific": {} 00:17:41.711 } 00:17:41.711 ] 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.711 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.969 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.969 "name": "Existed_Raid", 00:17:41.969 "uuid": "ae619362-123d-4ae5-986d-e59cf0e9267d", 00:17:41.969 "strip_size_kb": 64, 00:17:41.969 "state": "online", 00:17:41.969 "raid_level": "concat", 00:17:41.969 "superblock": false, 00:17:41.969 "num_base_bdevs": 3, 00:17:41.969 "num_base_bdevs_discovered": 3, 00:17:41.969 "num_base_bdevs_operational": 3, 00:17:41.969 "base_bdevs_list": [ 00:17:41.969 { 00:17:41.969 "name": "NewBaseBdev", 00:17:41.969 "uuid": "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4", 00:17:41.969 "is_configured": true, 00:17:41.969 "data_offset": 0, 00:17:41.969 "data_size": 65536 00:17:41.969 }, 00:17:41.969 { 00:17:41.969 "name": "BaseBdev2", 00:17:41.969 "uuid": "b8079748-6bb5-438a-a0f7-a80665ee0ab0", 00:17:41.969 "is_configured": true, 00:17:41.969 "data_offset": 0, 00:17:41.969 "data_size": 65536 00:17:41.969 }, 00:17:41.969 { 00:17:41.969 "name": "BaseBdev3", 00:17:41.969 "uuid": "e5529a63-41d1-4578-aa56-389589d69a03", 00:17:41.969 "is_configured": true, 00:17:41.969 "data_offset": 0, 00:17:41.969 "data_size": 65536 00:17:41.969 } 00:17:41.969 ] 00:17:41.969 }' 00:17:41.969 00:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.969 00:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.228 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:42.228 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:42.228 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:42.228 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:42.228 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:42.228 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:42.228 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:42.228 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:42.486 [2024-07-25 00:01:38.229868] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.486 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:42.486 "name": "Existed_Raid", 00:17:42.486 "aliases": [ 00:17:42.486 "ae619362-123d-4ae5-986d-e59cf0e9267d" 00:17:42.486 ], 00:17:42.486 "product_name": "Raid Volume", 00:17:42.486 "block_size": 512, 00:17:42.486 "num_blocks": 196608, 00:17:42.486 "uuid": "ae619362-123d-4ae5-986d-e59cf0e9267d", 00:17:42.486 "assigned_rate_limits": { 00:17:42.486 "rw_ios_per_sec": 0, 00:17:42.486 "rw_mbytes_per_sec": 0, 00:17:42.486 "r_mbytes_per_sec": 0, 00:17:42.486 "w_mbytes_per_sec": 0 00:17:42.486 }, 00:17:42.486 "claimed": false, 00:17:42.486 "zoned": false, 00:17:42.486 "supported_io_types": { 00:17:42.486 "read": true, 00:17:42.486 "write": true, 00:17:42.486 "unmap": true, 00:17:42.486 "flush": true, 00:17:42.486 "reset": true, 00:17:42.486 "nvme_admin": false, 00:17:42.486 "nvme_io": false, 00:17:42.487 "nvme_io_md": false, 00:17:42.487 "write_zeroes": true, 00:17:42.487 "zcopy": false, 00:17:42.487 "get_zone_info": false, 00:17:42.487 "zone_management": false, 00:17:42.487 "zone_append": false, 00:17:42.487 "compare": false, 00:17:42.487 "compare_and_write": false, 00:17:42.487 "abort": false, 00:17:42.487 "seek_hole": false, 00:17:42.487 "seek_data": false, 00:17:42.487 "copy": false, 00:17:42.487 "nvme_iov_md": false 00:17:42.487 }, 00:17:42.487 "memory_domains": [ 00:17:42.487 { 00:17:42.487 "dma_device_id": "system", 00:17:42.487 "dma_device_type": 1 00:17:42.487 }, 00:17:42.487 { 00:17:42.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.487 "dma_device_type": 2 00:17:42.487 }, 00:17:42.487 { 00:17:42.487 "dma_device_id": "system", 00:17:42.487 "dma_device_type": 1 00:17:42.487 }, 00:17:42.487 { 00:17:42.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.487 "dma_device_type": 2 00:17:42.487 }, 00:17:42.487 { 00:17:42.487 "dma_device_id": "system", 00:17:42.487 "dma_device_type": 1 00:17:42.487 }, 00:17:42.487 { 00:17:42.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.487 "dma_device_type": 2 00:17:42.487 } 00:17:42.487 ], 00:17:42.487 "driver_specific": { 00:17:42.487 "raid": { 00:17:42.487 "uuid": "ae619362-123d-4ae5-986d-e59cf0e9267d", 00:17:42.487 "strip_size_kb": 64, 00:17:42.487 "state": "online", 00:17:42.487 "raid_level": "concat", 00:17:42.487 "superblock": false, 00:17:42.487 "num_base_bdevs": 3, 00:17:42.487 "num_base_bdevs_discovered": 3, 00:17:42.487 "num_base_bdevs_operational": 3, 00:17:42.487 "base_bdevs_list": [ 00:17:42.487 { 00:17:42.487 "name": "NewBaseBdev", 00:17:42.487 "uuid": "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4", 00:17:42.487 "is_configured": true, 00:17:42.487 "data_offset": 0, 00:17:42.487 "data_size": 65536 00:17:42.487 }, 00:17:42.487 { 00:17:42.487 "name": "BaseBdev2", 00:17:42.487 "uuid": "b8079748-6bb5-438a-a0f7-a80665ee0ab0", 00:17:42.487 "is_configured": true, 00:17:42.487 "data_offset": 0, 00:17:42.487 "data_size": 65536 00:17:42.487 }, 00:17:42.487 { 00:17:42.487 "name": "BaseBdev3", 00:17:42.487 "uuid": "e5529a63-41d1-4578-aa56-389589d69a03", 00:17:42.487 "is_configured": true, 00:17:42.487 "data_offset": 0, 00:17:42.487 "data_size": 65536 00:17:42.487 } 00:17:42.487 ] 00:17:42.487 } 00:17:42.487 } 00:17:42.487 }' 00:17:42.487 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:42.487 00:01:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:42.487 BaseBdev2 00:17:42.487 BaseBdev3' 00:17:42.487 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.487 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:42.487 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.745 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.745 "name": "NewBaseBdev", 00:17:42.745 "aliases": [ 00:17:42.745 "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4" 00:17:42.745 ], 00:17:42.745 "product_name": "Malloc disk", 00:17:42.745 "block_size": 512, 00:17:42.745 "num_blocks": 65536, 00:17:42.745 "uuid": "c6c8e6c6-5b85-471d-b4f8-1e1562b15af4", 00:17:42.745 "assigned_rate_limits": { 00:17:42.745 "rw_ios_per_sec": 0, 00:17:42.745 "rw_mbytes_per_sec": 0, 00:17:42.745 "r_mbytes_per_sec": 0, 00:17:42.745 "w_mbytes_per_sec": 0 00:17:42.745 }, 00:17:42.745 "claimed": true, 00:17:42.745 "claim_type": "exclusive_write", 00:17:42.745 "zoned": false, 00:17:42.745 "supported_io_types": { 00:17:42.745 "read": true, 00:17:42.745 "write": true, 00:17:42.745 "unmap": true, 00:17:42.745 "flush": true, 00:17:42.745 "reset": true, 00:17:42.745 "nvme_admin": false, 00:17:42.745 "nvme_io": false, 00:17:42.745 "nvme_io_md": false, 00:17:42.745 "write_zeroes": true, 00:17:42.745 "zcopy": true, 00:17:42.746 "get_zone_info": false, 00:17:42.746 "zone_management": false, 00:17:42.746 "zone_append": false, 00:17:42.746 "compare": false, 00:17:42.746 "compare_and_write": false, 00:17:42.746 "abort": true, 00:17:42.746 "seek_hole": false, 00:17:42.746 "seek_data": false, 00:17:42.746 "copy": true, 00:17:42.746 "nvme_iov_md": false 00:17:42.746 }, 00:17:42.746 "memory_domains": [ 00:17:42.746 { 00:17:42.746 "dma_device_id": "system", 00:17:42.746 "dma_device_type": 1 00:17:42.746 }, 00:17:42.746 { 00:17:42.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.746 "dma_device_type": 2 00:17:42.746 } 00:17:42.746 ], 00:17:42.746 "driver_specific": {} 00:17:42.746 }' 00:17:42.746 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.746 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.746 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:42.746 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.746 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.746 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:42.746 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.746 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.746 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:43.004 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.004 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.005 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:43.005 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:17:43.005 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:43.005 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:43.263 "name": "BaseBdev2", 00:17:43.263 "aliases": [ 00:17:43.263 "b8079748-6bb5-438a-a0f7-a80665ee0ab0" 00:17:43.263 ], 00:17:43.263 "product_name": "Malloc disk", 00:17:43.263 "block_size": 512, 00:17:43.263 "num_blocks": 65536, 00:17:43.263 "uuid": "b8079748-6bb5-438a-a0f7-a80665ee0ab0", 00:17:43.263 "assigned_rate_limits": { 00:17:43.263 "rw_ios_per_sec": 0, 00:17:43.263 "rw_mbytes_per_sec": 0, 00:17:43.263 "r_mbytes_per_sec": 0, 00:17:43.263 "w_mbytes_per_sec": 0 00:17:43.263 }, 00:17:43.263 "claimed": true, 00:17:43.263 "claim_type": "exclusive_write", 00:17:43.263 "zoned": false, 00:17:43.263 "supported_io_types": { 00:17:43.263 "read": true, 00:17:43.263 "write": true, 00:17:43.263 "unmap": true, 00:17:43.263 "flush": true, 00:17:43.263 "reset": true, 00:17:43.263 "nvme_admin": false, 00:17:43.263 "nvme_io": false, 00:17:43.263 "nvme_io_md": false, 00:17:43.263 "write_zeroes": true, 00:17:43.263 "zcopy": true, 00:17:43.263 "get_zone_info": false, 00:17:43.263 "zone_management": false, 00:17:43.263 "zone_append": false, 00:17:43.263 "compare": false, 00:17:43.263 "compare_and_write": false, 00:17:43.263 "abort": true, 00:17:43.263 "seek_hole": false, 00:17:43.263 "seek_data": false, 00:17:43.263 "copy": true, 00:17:43.263 "nvme_iov_md": false 00:17:43.263 }, 00:17:43.263 "memory_domains": [ 00:17:43.263 { 00:17:43.263 "dma_device_id": "system", 00:17:43.263 "dma_device_type": 1 00:17:43.263 }, 00:17:43.263 { 00:17:43.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.263 "dma_device_type": 2 00:17:43.263 } 00:17:43.263 ], 00:17:43.263 "driver_specific": {} 00:17:43.263 }' 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:17:43.263 00:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:43.522 "name": "BaseBdev3", 00:17:43.522 "aliases": [ 00:17:43.522 "e5529a63-41d1-4578-aa56-389589d69a03" 00:17:43.522 ], 00:17:43.522 "product_name": "Malloc disk", 00:17:43.522 "block_size": 512, 00:17:43.522 "num_blocks": 65536, 00:17:43.522 "uuid": "e5529a63-41d1-4578-aa56-389589d69a03", 00:17:43.522 "assigned_rate_limits": { 00:17:43.522 "rw_ios_per_sec": 0, 00:17:43.522 "rw_mbytes_per_sec": 0, 00:17:43.522 "r_mbytes_per_sec": 0, 00:17:43.522 "w_mbytes_per_sec": 0 00:17:43.522 }, 00:17:43.522 "claimed": true, 00:17:43.522 "claim_type": "exclusive_write", 00:17:43.522 "zoned": false, 00:17:43.522 "supported_io_types": { 00:17:43.522 "read": true, 00:17:43.522 "write": true, 00:17:43.522 "unmap": true, 00:17:43.522 "flush": true, 00:17:43.522 "reset": true, 00:17:43.522 "nvme_admin": false, 00:17:43.522 "nvme_io": false, 00:17:43.522 "nvme_io_md": false, 00:17:43.522 "write_zeroes": true, 00:17:43.522 "zcopy": true, 00:17:43.522 "get_zone_info": false, 00:17:43.522 "zone_management": false, 00:17:43.522 "zone_append": false, 00:17:43.522 "compare": false, 00:17:43.522 "compare_and_write": false, 00:17:43.522 "abort": true, 00:17:43.522 "seek_hole": false, 00:17:43.522 "seek_data": false, 00:17:43.522 "copy": true, 00:17:43.522 "nvme_iov_md": false 00:17:43.522 }, 00:17:43.522 "memory_domains": [ 00:17:43.522 { 00:17:43.522 "dma_device_id": "system", 00:17:43.522 "dma_device_type": 1 00:17:43.522 }, 00:17:43.522 { 00:17:43.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.522 "dma_device_type": 2 00:17:43.522 } 00:17:43.522 ], 00:17:43.522 "driver_specific": {} 00:17:43.522 }' 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:43.522 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:43.781 [2024-07-25 00:01:39.593909] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.781 [2024-07-25 00:01:39.593967] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.781 [2024-07-25 00:01:39.594049] 
bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.781 [2024-07-25 00:01:39.594114] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.781 [2024-07-25 00:01:39.594134] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name Existed_Raid, state offline 00:17:43.781 00:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 82622 00:17:43.781 00:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82622 ']' 00:17:43.781 00:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82622 00:17:43.781 00:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:43.781 00:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.781 00:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82622 00:17:43.781 00:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.781 00:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.781 killing process with pid 82622 00:17:43.781 00:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82622' 00:17:43.781 00:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82622 00:17:43.781 [2024-07-25 00:01:39.646020] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.781 00:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82622 00:17:44.040 [2024-07-25 00:01:39.861141] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:45.418 00:17:45.418 real 0m23.908s 00:17:45.418 user 0m41.695s 00:17:45.418 sys 0m3.810s 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.418 ************************************ 00:17:45.418 END TEST raid_state_function_test 00:17:45.418 ************************************ 00:17:45.418 00:01:40 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:17:45.418 00:01:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:45.418 00:01:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.418 00:01:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.418 ************************************ 00:17:45.418 START TEST raid_state_function_test_sb 00:17:45.418 ************************************ 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local 
raid_bdev 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=83499 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 83499' 00:17:45.418 Process raid pid: 83499 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 83499 /var/tmp/spdk-raid.sock 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83499 ']' 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.418 00:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.418 [2024-07-25 00:01:41.047918] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:17:45.418 [2024-07-25 00:01:41.048107] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.418 [2024-07-25 00:01:41.222268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.707 [2024-07-25 00:01:41.457372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.986 [2024-07-25 00:01:41.624849] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.245 00:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.245 00:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:46.245 00:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:46.504 [2024-07-25 00:01:42.156790] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.504 [2024-07-25 00:01:42.156907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.504 [2024-07-25 00:01:42.156925] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.504 [2024-07-25 00:01:42.156940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.504 [2024-07-25 00:01:42.156951] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:46.504 [2024-07-25 00:01:42.156964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.504 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.763 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.763 "name": "Existed_Raid", 00:17:46.763 "uuid": "c33f5616-53c5-40e1-a951-6e164eeabeed", 00:17:46.763 "strip_size_kb": 64, 00:17:46.763 "state": "configuring", 00:17:46.763 "raid_level": "concat", 00:17:46.763 "superblock": true, 00:17:46.763 "num_base_bdevs": 3, 00:17:46.763 "num_base_bdevs_discovered": 0, 00:17:46.763 "num_base_bdevs_operational": 3, 00:17:46.763 "base_bdevs_list": [ 00:17:46.763 { 00:17:46.763 "name": "BaseBdev1", 00:17:46.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.763 "is_configured": false, 00:17:46.763 "data_offset": 0, 00:17:46.763 "data_size": 0 00:17:46.763 }, 00:17:46.763 { 00:17:46.763 "name": "BaseBdev2", 00:17:46.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.763 "is_configured": false, 00:17:46.763 "data_offset": 0, 00:17:46.763 "data_size": 0 00:17:46.763 }, 00:17:46.763 { 00:17:46.763 "name": "BaseBdev3", 00:17:46.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.763 "is_configured": false, 00:17:46.763 "data_offset": 0, 00:17:46.763 "data_size": 0 00:17:46.763 } 00:17:46.763 ] 00:17:46.763 }' 00:17:46.763 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.763 00:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.022 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:47.282 [2024-07-25 00:01:42.928896] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:47.282 [2024-07-25 00:01:42.928955] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:47.282 00:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:47.282 [2024-07-25 00:01:43.144952] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:47.282 [2024-07-25 00:01:43.145033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:47.282 [2024-07-25 00:01:43.145089] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.282 [2024-07-25 00:01:43.145110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.282 [2024-07-25 00:01:43.145121] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:47.282 [2024-07-25 00:01:43.145135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:47.540 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:47.540 [2024-07-25 00:01:43.395447] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.540 BaseBdev1 00:17:47.798 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:47.798 00:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:47.798 00:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:47.798 00:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:47.798 00:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:47.798 00:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:47.798 00:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:48.057 00:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:48.057 [ 00:17:48.057 { 00:17:48.057 "name": "BaseBdev1", 00:17:48.057 "aliases": [ 00:17:48.057 "a6184982-087a-4a34-a2f6-ff104e4b6af2" 00:17:48.057 ], 00:17:48.057 "product_name": "Malloc disk", 00:17:48.057 "block_size": 512, 00:17:48.057 "num_blocks": 65536, 00:17:48.057 "uuid": "a6184982-087a-4a34-a2f6-ff104e4b6af2", 00:17:48.057 "assigned_rate_limits": { 00:17:48.057 "rw_ios_per_sec": 0, 00:17:48.057 "rw_mbytes_per_sec": 0, 00:17:48.057 "r_mbytes_per_sec": 0, 00:17:48.057 "w_mbytes_per_sec": 0 00:17:48.057 }, 00:17:48.057 "claimed": true, 00:17:48.057 "claim_type": "exclusive_write", 00:17:48.057 "zoned": false, 00:17:48.057 "supported_io_types": { 00:17:48.057 "read": true, 00:17:48.057 "write": true, 00:17:48.057 "unmap": true, 00:17:48.057 "flush": true, 00:17:48.057 "reset": true, 00:17:48.057 "nvme_admin": false, 00:17:48.057 "nvme_io": false, 00:17:48.057 "nvme_io_md": false, 00:17:48.057 "write_zeroes": true, 00:17:48.057 "zcopy": true, 00:17:48.057 "get_zone_info": false, 00:17:48.057 "zone_management": false, 00:17:48.057 "zone_append": false, 00:17:48.057 "compare": false, 00:17:48.057 "compare_and_write": false, 00:17:48.057 "abort": true, 00:17:48.057 "seek_hole": false, 00:17:48.057 "seek_data": false, 00:17:48.057 "copy": true, 00:17:48.057 "nvme_iov_md": false 00:17:48.057 }, 00:17:48.057 "memory_domains": [ 00:17:48.057 { 00:17:48.057 "dma_device_id": "system", 00:17:48.057 "dma_device_type": 1 00:17:48.057 }, 00:17:48.057 { 00:17:48.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.057 "dma_device_type": 2 00:17:48.057 } 00:17:48.057 ], 00:17:48.057 "driver_specific": {} 00:17:48.057 } 00:17:48.057 ] 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.316 00:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.575 00:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.575 "name": "Existed_Raid", 00:17:48.575 "uuid": "92c82cfb-fe0c-4691-a3f3-feb86c95564c", 00:17:48.575 "strip_size_kb": 64, 00:17:48.575 "state": "configuring", 00:17:48.575 "raid_level": "concat", 00:17:48.575 "superblock": true, 00:17:48.575 "num_base_bdevs": 3, 00:17:48.575 "num_base_bdevs_discovered": 1, 00:17:48.575 "num_base_bdevs_operational": 3, 00:17:48.575 "base_bdevs_list": [ 00:17:48.575 { 00:17:48.575 "name": "BaseBdev1", 00:17:48.575 "uuid": "a6184982-087a-4a34-a2f6-ff104e4b6af2", 00:17:48.575 "is_configured": true, 00:17:48.575 "data_offset": 2048, 00:17:48.575 "data_size": 63488 00:17:48.575 }, 00:17:48.575 { 00:17:48.575 "name": "BaseBdev2", 00:17:48.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.575 "is_configured": false, 00:17:48.575 "data_offset": 0, 00:17:48.575 "data_size": 0 00:17:48.575 }, 00:17:48.575 { 00:17:48.575 "name": "BaseBdev3", 00:17:48.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.575 "is_configured": false, 00:17:48.575 "data_offset": 0, 00:17:48.575 "data_size": 0 00:17:48.575 } 00:17:48.575 ] 00:17:48.575 }' 00:17:48.575 00:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.575 00:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.834 00:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:49.093 [2024-07-25 00:01:44.727916] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.093 [2024-07-25 00:01:44.727974] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:17:49.093 00:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:49.352 [2024-07-25 00:01:44.991984] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.352 [2024-07-25 00:01:44.994043] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.352 [2024-07-25 00:01:44.994111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.352 [2024-07-25 00:01:44.994142] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:49.352 [2024-07-25 00:01:44.994157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:49.352 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:49.352 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:49.352 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:49.352 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.352 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:49.352 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:49.352 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:49.352 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:49.352 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.352 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.353 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.353 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.353 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.353 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.612 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.612 "name": "Existed_Raid", 00:17:49.612 "uuid": "45625970-6b55-4f57-8ee6-e7d5856274dc", 00:17:49.612 "strip_size_kb": 64, 00:17:49.612 "state": "configuring", 00:17:49.612 "raid_level": "concat", 00:17:49.612 "superblock": true, 00:17:49.612 "num_base_bdevs": 3, 00:17:49.612 "num_base_bdevs_discovered": 1, 00:17:49.612 "num_base_bdevs_operational": 3, 00:17:49.612 "base_bdevs_list": [ 00:17:49.612 { 00:17:49.612 "name": "BaseBdev1", 00:17:49.612 "uuid": "a6184982-087a-4a34-a2f6-ff104e4b6af2", 00:17:49.612 "is_configured": true, 00:17:49.612 "data_offset": 2048, 00:17:49.612 "data_size": 63488 00:17:49.612 }, 00:17:49.612 { 00:17:49.612 "name": "BaseBdev2", 00:17:49.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.612 "is_configured": false, 00:17:49.612 "data_offset": 0, 00:17:49.612 "data_size": 0 00:17:49.613 }, 00:17:49.613 { 00:17:49.613 "name": "BaseBdev3", 00:17:49.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.613 "is_configured": false, 00:17:49.613 "data_offset": 0, 00:17:49.613 "data_size": 0 00:17:49.613 } 00:17:49.613 ] 00:17:49.613 }' 00:17:49.613 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.613 00:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.872 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:50.131 [2024-07-25 00:01:45.835896] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:50.131 BaseBdev2 00:17:50.131 00:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:50.131 00:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:50.131 00:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:50.131 00:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:50.131 00:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:50.131 00:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:50.131 00:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.389 00:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:50.649 [ 00:17:50.649 { 00:17:50.649 "name": "BaseBdev2", 00:17:50.649 "aliases": [ 00:17:50.649 "2c88f7b0-f5ee-471c-8b84-0656aa18ebc5" 00:17:50.649 ], 00:17:50.649 "product_name": "Malloc disk", 00:17:50.649 "block_size": 512, 00:17:50.649 "num_blocks": 65536, 00:17:50.649 "uuid": "2c88f7b0-f5ee-471c-8b84-0656aa18ebc5", 00:17:50.649 "assigned_rate_limits": { 00:17:50.649 "rw_ios_per_sec": 0, 00:17:50.649 "rw_mbytes_per_sec": 0, 00:17:50.649 "r_mbytes_per_sec": 0, 00:17:50.649 "w_mbytes_per_sec": 0 00:17:50.649 }, 00:17:50.649 "claimed": true, 00:17:50.649 "claim_type": "exclusive_write", 00:17:50.649 "zoned": false, 00:17:50.649 "supported_io_types": { 00:17:50.649 "read": true, 00:17:50.649 "write": true, 00:17:50.649 "unmap": true, 00:17:50.649 "flush": true, 00:17:50.649 "reset": true, 00:17:50.649 "nvme_admin": false, 00:17:50.649 "nvme_io": false, 00:17:50.649 "nvme_io_md": false, 00:17:50.649 "write_zeroes": true, 00:17:50.649 "zcopy": true, 00:17:50.649 "get_zone_info": false, 00:17:50.649 "zone_management": false, 00:17:50.649 "zone_append": false, 00:17:50.649 "compare": false, 00:17:50.649 "compare_and_write": false, 00:17:50.649 "abort": true, 00:17:50.649 "seek_hole": false, 00:17:50.649 "seek_data": false, 00:17:50.649 "copy": true, 00:17:50.649 "nvme_iov_md": false 00:17:50.649 }, 00:17:50.649 "memory_domains": [ 00:17:50.649 { 00:17:50.649 "dma_device_id": "system", 00:17:50.649 "dma_device_type": 1 00:17:50.649 }, 00:17:50.649 { 00:17:50.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.649 "dma_device_type": 2 00:17:50.649 } 00:17:50.649 ], 00:17:50.649 "driver_specific": {} 00:17:50.649 } 00:17:50.649 ] 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.649 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.908 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.908 "name": "Existed_Raid", 00:17:50.908 "uuid": "45625970-6b55-4f57-8ee6-e7d5856274dc", 00:17:50.908 "strip_size_kb": 64, 00:17:50.908 "state": "configuring", 00:17:50.908 "raid_level": "concat", 00:17:50.908 "superblock": true, 00:17:50.908 "num_base_bdevs": 3, 00:17:50.908 "num_base_bdevs_discovered": 2, 00:17:50.908 "num_base_bdevs_operational": 3, 00:17:50.908 "base_bdevs_list": [ 00:17:50.908 { 00:17:50.908 "name": "BaseBdev1", 00:17:50.908 "uuid": "a6184982-087a-4a34-a2f6-ff104e4b6af2", 00:17:50.908 "is_configured": true, 00:17:50.908 "data_offset": 2048, 00:17:50.908 "data_size": 63488 00:17:50.908 }, 00:17:50.908 { 00:17:50.908 "name": "BaseBdev2", 00:17:50.908 "uuid": "2c88f7b0-f5ee-471c-8b84-0656aa18ebc5", 00:17:50.908 "is_configured": true, 00:17:50.908 "data_offset": 2048, 00:17:50.908 "data_size": 63488 00:17:50.908 }, 00:17:50.908 { 00:17:50.908 "name": "BaseBdev3", 00:17:50.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.908 "is_configured": false, 00:17:50.908 "data_offset": 0, 00:17:50.908 "data_size": 0 00:17:50.908 } 00:17:50.908 ] 00:17:50.908 }' 00:17:50.908 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.908 00:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.167 00:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:51.427 [2024-07-25 00:01:47.159805] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.427 [2024-07-25 00:01:47.160350] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:17:51.427 [2024-07-25 00:01:47.160594] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:51.427 [2024-07-25 00:01:47.160779] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:17:51.427 [2024-07-25 00:01:47.161238] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:17:51.427 BaseBdev3 00:17:51.427 [2024-07-25 00:01:47.161427] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x516000007280 00:17:51.427 [2024-07-25 00:01:47.161615] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.427 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:51.427 00:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:51.427 00:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:51.427 00:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:51.427 00:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:51.427 00:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:51.427 00:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:51.686 00:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:51.945 [ 00:17:51.945 { 00:17:51.945 "name": "BaseBdev3", 00:17:51.945 "aliases": [ 00:17:51.945 "728a7734-9f75-492b-84cd-8091bb5a526c" 00:17:51.945 ], 00:17:51.945 "product_name": "Malloc disk", 00:17:51.945 "block_size": 512, 00:17:51.945 "num_blocks": 65536, 00:17:51.945 "uuid": "728a7734-9f75-492b-84cd-8091bb5a526c", 00:17:51.945 "assigned_rate_limits": { 00:17:51.945 "rw_ios_per_sec": 0, 00:17:51.945 "rw_mbytes_per_sec": 0, 00:17:51.945 "r_mbytes_per_sec": 0, 00:17:51.945 "w_mbytes_per_sec": 0 00:17:51.945 }, 00:17:51.945 "claimed": true, 00:17:51.945 "claim_type": "exclusive_write", 00:17:51.945 "zoned": false, 00:17:51.945 "supported_io_types": { 00:17:51.945 "read": true, 00:17:51.945 "write": true, 00:17:51.945 "unmap": true, 00:17:51.945 "flush": true, 00:17:51.945 "reset": true, 00:17:51.945 "nvme_admin": false, 00:17:51.945 "nvme_io": false, 00:17:51.945 "nvme_io_md": false, 00:17:51.945 "write_zeroes": true, 00:17:51.945 "zcopy": true, 00:17:51.945 "get_zone_info": false, 00:17:51.945 "zone_management": false, 00:17:51.945 "zone_append": false, 00:17:51.945 "compare": false, 00:17:51.945 "compare_and_write": false, 00:17:51.945 "abort": true, 00:17:51.945 "seek_hole": false, 00:17:51.945 "seek_data": false, 00:17:51.945 "copy": true, 00:17:51.945 "nvme_iov_md": false 00:17:51.945 }, 00:17:51.945 "memory_domains": [ 00:17:51.945 { 00:17:51.945 "dma_device_id": "system", 00:17:51.945 "dma_device_type": 1 00:17:51.945 }, 00:17:51.945 { 00:17:51.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.945 "dma_device_type": 2 00:17:51.945 } 00:17:51.945 ], 00:17:51.945 "driver_specific": {} 00:17:51.945 } 00:17:51.945 ] 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.945 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.204 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.204 "name": "Existed_Raid", 00:17:52.204 "uuid": "45625970-6b55-4f57-8ee6-e7d5856274dc", 00:17:52.204 "strip_size_kb": 64, 00:17:52.204 "state": "online", 00:17:52.204 "raid_level": "concat", 00:17:52.204 "superblock": true, 00:17:52.204 "num_base_bdevs": 3, 00:17:52.204 "num_base_bdevs_discovered": 3, 00:17:52.204 "num_base_bdevs_operational": 3, 00:17:52.204 "base_bdevs_list": [ 00:17:52.204 { 00:17:52.204 "name": "BaseBdev1", 00:17:52.204 "uuid": "a6184982-087a-4a34-a2f6-ff104e4b6af2", 00:17:52.204 "is_configured": true, 00:17:52.204 "data_offset": 2048, 00:17:52.204 "data_size": 63488 00:17:52.204 }, 00:17:52.204 { 00:17:52.204 "name": "BaseBdev2", 00:17:52.204 "uuid": "2c88f7b0-f5ee-471c-8b84-0656aa18ebc5", 00:17:52.204 "is_configured": true, 00:17:52.204 "data_offset": 2048, 00:17:52.204 "data_size": 63488 00:17:52.204 }, 00:17:52.204 { 00:17:52.204 "name": "BaseBdev3", 00:17:52.204 "uuid": "728a7734-9f75-492b-84cd-8091bb5a526c", 00:17:52.204 "is_configured": true, 00:17:52.204 "data_offset": 2048, 00:17:52.204 "data_size": 63488 00:17:52.204 } 00:17:52.204 ] 00:17:52.204 }' 00:17:52.204 00:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.204 00:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.463 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:52.463 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:52.463 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:52.463 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:52.463 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:52.463 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:52.463 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:52.463 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 
00:17:52.722 [2024-07-25 00:01:48.416557] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.722 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:52.722 "name": "Existed_Raid", 00:17:52.722 "aliases": [ 00:17:52.722 "45625970-6b55-4f57-8ee6-e7d5856274dc" 00:17:52.722 ], 00:17:52.722 "product_name": "Raid Volume", 00:17:52.722 "block_size": 512, 00:17:52.722 "num_blocks": 190464, 00:17:52.722 "uuid": "45625970-6b55-4f57-8ee6-e7d5856274dc", 00:17:52.722 "assigned_rate_limits": { 00:17:52.722 "rw_ios_per_sec": 0, 00:17:52.722 "rw_mbytes_per_sec": 0, 00:17:52.722 "r_mbytes_per_sec": 0, 00:17:52.722 "w_mbytes_per_sec": 0 00:17:52.722 }, 00:17:52.722 "claimed": false, 00:17:52.722 "zoned": false, 00:17:52.722 "supported_io_types": { 00:17:52.722 "read": true, 00:17:52.722 "write": true, 00:17:52.722 "unmap": true, 00:17:52.722 "flush": true, 00:17:52.722 "reset": true, 00:17:52.722 "nvme_admin": false, 00:17:52.722 "nvme_io": false, 00:17:52.722 "nvme_io_md": false, 00:17:52.722 "write_zeroes": true, 00:17:52.722 "zcopy": false, 00:17:52.722 "get_zone_info": false, 00:17:52.722 "zone_management": false, 00:17:52.722 "zone_append": false, 00:17:52.722 "compare": false, 00:17:52.722 "compare_and_write": false, 00:17:52.722 "abort": false, 00:17:52.722 "seek_hole": false, 00:17:52.722 "seek_data": false, 00:17:52.722 "copy": false, 00:17:52.722 "nvme_iov_md": false 00:17:52.722 }, 00:17:52.722 "memory_domains": [ 00:17:52.722 { 00:17:52.722 "dma_device_id": "system", 00:17:52.722 "dma_device_type": 1 00:17:52.722 }, 00:17:52.722 { 00:17:52.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.722 "dma_device_type": 2 00:17:52.722 }, 00:17:52.722 { 00:17:52.722 "dma_device_id": "system", 00:17:52.722 "dma_device_type": 1 00:17:52.722 }, 00:17:52.722 { 00:17:52.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.722 "dma_device_type": 2 00:17:52.722 }, 00:17:52.722 { 00:17:52.722 "dma_device_id": "system", 00:17:52.722 "dma_device_type": 1 00:17:52.722 }, 00:17:52.722 { 00:17:52.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.722 "dma_device_type": 2 00:17:52.722 } 00:17:52.722 ], 00:17:52.722 "driver_specific": { 00:17:52.722 "raid": { 00:17:52.722 "uuid": "45625970-6b55-4f57-8ee6-e7d5856274dc", 00:17:52.722 "strip_size_kb": 64, 00:17:52.722 "state": "online", 00:17:52.722 "raid_level": "concat", 00:17:52.722 "superblock": true, 00:17:52.722 "num_base_bdevs": 3, 00:17:52.722 "num_base_bdevs_discovered": 3, 00:17:52.722 "num_base_bdevs_operational": 3, 00:17:52.722 "base_bdevs_list": [ 00:17:52.722 { 00:17:52.722 "name": "BaseBdev1", 00:17:52.722 "uuid": "a6184982-087a-4a34-a2f6-ff104e4b6af2", 00:17:52.722 "is_configured": true, 00:17:52.722 "data_offset": 2048, 00:17:52.722 "data_size": 63488 00:17:52.722 }, 00:17:52.722 { 00:17:52.722 "name": "BaseBdev2", 00:17:52.722 "uuid": "2c88f7b0-f5ee-471c-8b84-0656aa18ebc5", 00:17:52.722 "is_configured": true, 00:17:52.722 "data_offset": 2048, 00:17:52.722 "data_size": 63488 00:17:52.722 }, 00:17:52.722 { 00:17:52.722 "name": "BaseBdev3", 00:17:52.722 "uuid": "728a7734-9f75-492b-84cd-8091bb5a526c", 00:17:52.722 "is_configured": true, 00:17:52.722 "data_offset": 2048, 00:17:52.722 "data_size": 63488 00:17:52.722 } 00:17:52.722 ] 00:17:52.722 } 00:17:52.722 } 00:17:52.722 }' 00:17:52.722 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.722 
00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:52.722 BaseBdev2 00:17:52.722 BaseBdev3' 00:17:52.722 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:52.722 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:52.722 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:52.982 "name": "BaseBdev1", 00:17:52.982 "aliases": [ 00:17:52.982 "a6184982-087a-4a34-a2f6-ff104e4b6af2" 00:17:52.982 ], 00:17:52.982 "product_name": "Malloc disk", 00:17:52.982 "block_size": 512, 00:17:52.982 "num_blocks": 65536, 00:17:52.982 "uuid": "a6184982-087a-4a34-a2f6-ff104e4b6af2", 00:17:52.982 "assigned_rate_limits": { 00:17:52.982 "rw_ios_per_sec": 0, 00:17:52.982 "rw_mbytes_per_sec": 0, 00:17:52.982 "r_mbytes_per_sec": 0, 00:17:52.982 "w_mbytes_per_sec": 0 00:17:52.982 }, 00:17:52.982 "claimed": true, 00:17:52.982 "claim_type": "exclusive_write", 00:17:52.982 "zoned": false, 00:17:52.982 "supported_io_types": { 00:17:52.982 "read": true, 00:17:52.982 "write": true, 00:17:52.982 "unmap": true, 00:17:52.982 "flush": true, 00:17:52.982 "reset": true, 00:17:52.982 "nvme_admin": false, 00:17:52.982 "nvme_io": false, 00:17:52.982 "nvme_io_md": false, 00:17:52.982 "write_zeroes": true, 00:17:52.982 "zcopy": true, 00:17:52.982 "get_zone_info": false, 00:17:52.982 "zone_management": false, 00:17:52.982 "zone_append": false, 00:17:52.982 "compare": false, 00:17:52.982 "compare_and_write": false, 00:17:52.982 "abort": true, 00:17:52.982 "seek_hole": false, 00:17:52.982 "seek_data": false, 00:17:52.982 "copy": true, 00:17:52.982 "nvme_iov_md": false 00:17:52.982 }, 00:17:52.982 "memory_domains": [ 00:17:52.982 { 00:17:52.982 "dma_device_id": "system", 00:17:52.982 "dma_device_type": 1 00:17:52.982 }, 00:17:52.982 { 00:17:52.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.982 "dma_device_type": 2 00:17:52.982 } 00:17:52.982 ], 00:17:52.982 "driver_specific": {} 00:17:52.982 }' 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:52.982 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:53.242 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:53.242 "name": "BaseBdev2", 00:17:53.242 "aliases": [ 00:17:53.242 "2c88f7b0-f5ee-471c-8b84-0656aa18ebc5" 00:17:53.242 ], 00:17:53.242 "product_name": "Malloc disk", 00:17:53.242 "block_size": 512, 00:17:53.242 "num_blocks": 65536, 00:17:53.242 "uuid": "2c88f7b0-f5ee-471c-8b84-0656aa18ebc5", 00:17:53.242 "assigned_rate_limits": { 00:17:53.242 "rw_ios_per_sec": 0, 00:17:53.242 "rw_mbytes_per_sec": 0, 00:17:53.242 "r_mbytes_per_sec": 0, 00:17:53.242 "w_mbytes_per_sec": 0 00:17:53.242 }, 00:17:53.242 "claimed": true, 00:17:53.242 "claim_type": "exclusive_write", 00:17:53.242 "zoned": false, 00:17:53.242 "supported_io_types": { 00:17:53.242 "read": true, 00:17:53.242 "write": true, 00:17:53.242 "unmap": true, 00:17:53.242 "flush": true, 00:17:53.242 "reset": true, 00:17:53.242 "nvme_admin": false, 00:17:53.242 "nvme_io": false, 00:17:53.242 "nvme_io_md": false, 00:17:53.242 "write_zeroes": true, 00:17:53.242 "zcopy": true, 00:17:53.242 "get_zone_info": false, 00:17:53.242 "zone_management": false, 00:17:53.242 "zone_append": false, 00:17:53.242 "compare": false, 00:17:53.242 "compare_and_write": false, 00:17:53.242 "abort": true, 00:17:53.242 "seek_hole": false, 00:17:53.242 "seek_data": false, 00:17:53.242 "copy": true, 00:17:53.242 "nvme_iov_md": false 00:17:53.242 }, 00:17:53.242 "memory_domains": [ 00:17:53.242 { 00:17:53.242 "dma_device_id": "system", 00:17:53.242 "dma_device_type": 1 00:17:53.242 }, 00:17:53.242 { 00:17:53.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.242 "dma_device_type": 2 00:17:53.242 } 00:17:53.242 ], 00:17:53.242 "driver_specific": {} 00:17:53.242 }' 00:17:53.242 00:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:53.242 00:01:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:53.242 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:53.519 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:53.519 "name": "BaseBdev3", 00:17:53.519 "aliases": [ 00:17:53.519 "728a7734-9f75-492b-84cd-8091bb5a526c" 00:17:53.519 ], 00:17:53.519 "product_name": "Malloc disk", 00:17:53.519 "block_size": 512, 00:17:53.519 "num_blocks": 65536, 00:17:53.519 "uuid": "728a7734-9f75-492b-84cd-8091bb5a526c", 00:17:53.519 "assigned_rate_limits": { 00:17:53.519 "rw_ios_per_sec": 0, 00:17:53.519 "rw_mbytes_per_sec": 0, 00:17:53.519 "r_mbytes_per_sec": 0, 00:17:53.519 "w_mbytes_per_sec": 0 00:17:53.519 }, 00:17:53.519 "claimed": true, 00:17:53.520 "claim_type": "exclusive_write", 00:17:53.520 "zoned": false, 00:17:53.520 "supported_io_types": { 00:17:53.520 "read": true, 00:17:53.520 "write": true, 00:17:53.520 "unmap": true, 00:17:53.520 "flush": true, 00:17:53.520 "reset": true, 00:17:53.520 "nvme_admin": false, 00:17:53.520 "nvme_io": false, 00:17:53.520 "nvme_io_md": false, 00:17:53.520 "write_zeroes": true, 00:17:53.520 "zcopy": true, 00:17:53.520 "get_zone_info": false, 00:17:53.520 "zone_management": false, 00:17:53.520 "zone_append": false, 00:17:53.520 "compare": false, 00:17:53.520 "compare_and_write": false, 00:17:53.520 "abort": true, 00:17:53.520 "seek_hole": false, 00:17:53.520 "seek_data": false, 00:17:53.520 "copy": true, 00:17:53.520 "nvme_iov_md": false 00:17:53.520 }, 00:17:53.520 "memory_domains": [ 00:17:53.520 { 00:17:53.520 "dma_device_id": "system", 00:17:53.520 "dma_device_type": 1 00:17:53.520 }, 00:17:53.520 { 00:17:53.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.520 "dma_device_type": 2 00:17:53.520 } 00:17:53.520 ], 00:17:53.520 "driver_specific": {} 00:17:53.520 }' 00:17:53.520 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.520 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.520 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:53.520 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.520 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.520 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:53.520 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.790 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.790 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:53.790 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.790 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.790 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:53.790 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:53.790 [2024-07-25 00:01:49.620698] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:17:53.790 [2024-07-25 00:01:49.620745] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.790 [2024-07-25 00:01:49.620835] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.049 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.308 00:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.308 "name": "Existed_Raid", 00:17:54.308 "uuid": "45625970-6b55-4f57-8ee6-e7d5856274dc", 00:17:54.308 "strip_size_kb": 64, 00:17:54.308 "state": "offline", 00:17:54.308 "raid_level": "concat", 00:17:54.308 "superblock": true, 00:17:54.308 "num_base_bdevs": 3, 00:17:54.308 "num_base_bdevs_discovered": 2, 00:17:54.308 "num_base_bdevs_operational": 2, 00:17:54.308 "base_bdevs_list": [ 00:17:54.308 { 00:17:54.308 "name": null, 00:17:54.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.308 "is_configured": false, 00:17:54.308 "data_offset": 2048, 00:17:54.308 "data_size": 63488 00:17:54.308 }, 00:17:54.308 { 00:17:54.308 "name": "BaseBdev2", 00:17:54.308 "uuid": "2c88f7b0-f5ee-471c-8b84-0656aa18ebc5", 00:17:54.308 "is_configured": true, 00:17:54.308 "data_offset": 2048, 00:17:54.308 "data_size": 63488 00:17:54.308 }, 00:17:54.308 { 00:17:54.308 "name": "BaseBdev3", 00:17:54.308 "uuid": "728a7734-9f75-492b-84cd-8091bb5a526c", 00:17:54.308 "is_configured": true, 00:17:54.308 "data_offset": 2048, 00:17:54.308 "data_size": 63488 00:17:54.308 } 00:17:54.308 ] 00:17:54.308 }' 00:17:54.308 00:01:49 
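This chunk holds the first state assertion of the teardown: @274 deletes the malloc bdev under BaseBdev1, and because concat carries no redundancy (has_redundancy at @213-215 returns 1 for it), @277 sets the expected state to offline before @281 re-reads the array. A sketch of that re-check against the JSON dumped above, again with the $rpc shorthand:

    # bdev_raid.sh@126: re-read the raid bdev after losing a member
    tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # concat cannot survive a missing member: offline, with 2 of 3 discovered
    [[ $(echo "$tmp" | jq -r .state) == offline ]]
    (( $(echo "$tmp" | jq -r .num_base_bdevs_discovered) == 2 ))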
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.308 00:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.567 00:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:54.567 00:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:54.567 00:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.567 00:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:54.826 00:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:54.826 00:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:54.826 00:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:54.826 [2024-07-25 00:01:50.689896] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:55.085 00:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:55.085 00:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:55.085 00:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.085 00:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:55.344 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:55.344 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:55.344 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:55.603 [2024-07-25 00:01:51.245010] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:55.603 [2024-07-25 00:01:51.245076] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:17:55.603 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:55.603 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:55.603 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:55.603 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.862 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:55.862 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:55.862 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:55.862 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:55.862 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:55.862 00:01:51 bdev_raid.raid_state_function_test_sb 
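The @285-299 stretch above finishes the teardown: i walks over the remaining members, each backing malloc bdev is deleted (deleting BaseBdev3 also triggers raid_bdev_cleanup for Existed_Raid), and @293-294 confirm that no raid bdev is reported afterwards. An approximate reconstruction from the trace; the exit-on-mismatch handling is illustrative, since the script's reaction to a failed check is not visible here:

    i=1
    while (( i < num_base_bdevs )); do                  # num_base_bdevs is 3 in this run
        # @286-287: the array must still be reported while members remain
        raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
        if [ "$raid_bdev" != Existed_Raid ]; then exit 1; fi
        # @291: drop the next member's backing device
        $rpc bdev_malloc_delete "BaseBdev$((i + 1))"
        (( i++ ))
    done

    # @293-294: with every member gone, no raid bdev may remain
    raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    if [ -n "$raid_bdev" ]; then exit 1; fi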
-- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:56.121 BaseBdev2 00:17:56.121 00:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:56.121 00:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:56.121 00:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:56.121 00:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:56.121 00:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:56.121 00:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:56.121 00:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.380 00:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:56.380 [ 00:17:56.380 { 00:17:56.380 "name": "BaseBdev2", 00:17:56.380 "aliases": [ 00:17:56.380 "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26" 00:17:56.380 ], 00:17:56.380 "product_name": "Malloc disk", 00:17:56.380 "block_size": 512, 00:17:56.380 "num_blocks": 65536, 00:17:56.380 "uuid": "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26", 00:17:56.380 "assigned_rate_limits": { 00:17:56.380 "rw_ios_per_sec": 0, 00:17:56.380 "rw_mbytes_per_sec": 0, 00:17:56.380 "r_mbytes_per_sec": 0, 00:17:56.380 "w_mbytes_per_sec": 0 00:17:56.380 }, 00:17:56.380 "claimed": false, 00:17:56.380 "zoned": false, 00:17:56.380 "supported_io_types": { 00:17:56.380 "read": true, 00:17:56.380 "write": true, 00:17:56.380 "unmap": true, 00:17:56.380 "flush": true, 00:17:56.380 "reset": true, 00:17:56.380 "nvme_admin": false, 00:17:56.380 "nvme_io": false, 00:17:56.380 "nvme_io_md": false, 00:17:56.380 "write_zeroes": true, 00:17:56.380 "zcopy": true, 00:17:56.380 "get_zone_info": false, 00:17:56.380 "zone_management": false, 00:17:56.380 "zone_append": false, 00:17:56.380 "compare": false, 00:17:56.380 "compare_and_write": false, 00:17:56.380 "abort": true, 00:17:56.380 "seek_hole": false, 00:17:56.380 "seek_data": false, 00:17:56.380 "copy": true, 00:17:56.380 "nvme_iov_md": false 00:17:56.380 }, 00:17:56.380 "memory_domains": [ 00:17:56.380 { 00:17:56.380 "dma_device_id": "system", 00:17:56.380 "dma_device_type": 1 00:17:56.380 }, 00:17:56.380 { 00:17:56.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.380 "dma_device_type": 2 00:17:56.380 } 00:17:56.380 ], 00:17:56.380 "driver_specific": {} 00:17:56.380 } 00:17:56.380 ] 00:17:56.380 00:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:56.380 00:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:56.380 00:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:56.380 00:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:56.639 BaseBdev3 00:17:56.639 00:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:56.639 00:01:52 
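With the array gone, @301-302 rebuild the members: each is a 32 MiB malloc bdev with 512-byte blocks (hence the 65536 num_blocks in the dumps), and each create is followed by the waitforbdev helper traced at autotest_common.sh@899-906, which drains pending examine callbacks and then polls for the bdev. One create-and-wait round, sketched:

    # bdev_raid.sh@302: 32 MiB malloc bdev, 512-byte blocks
    $rpc bdev_malloc_create 32 512 -b BaseBdev2

    # waitforbdev: finish examine, then poll with the helper's default 2000 ms timeout
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b BaseBdev2 -t 2000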
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:56.639 00:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:56.639 00:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:56.640 00:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:56.640 00:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:56.640 00:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.898 00:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:57.157 [ 00:17:57.157 { 00:17:57.157 "name": "BaseBdev3", 00:17:57.157 "aliases": [ 00:17:57.157 "443b3ba7-c504-4259-853c-4a6483658b92" 00:17:57.157 ], 00:17:57.157 "product_name": "Malloc disk", 00:17:57.157 "block_size": 512, 00:17:57.157 "num_blocks": 65536, 00:17:57.157 "uuid": "443b3ba7-c504-4259-853c-4a6483658b92", 00:17:57.157 "assigned_rate_limits": { 00:17:57.157 "rw_ios_per_sec": 0, 00:17:57.157 "rw_mbytes_per_sec": 0, 00:17:57.157 "r_mbytes_per_sec": 0, 00:17:57.157 "w_mbytes_per_sec": 0 00:17:57.157 }, 00:17:57.157 "claimed": false, 00:17:57.157 "zoned": false, 00:17:57.157 "supported_io_types": { 00:17:57.157 "read": true, 00:17:57.157 "write": true, 00:17:57.157 "unmap": true, 00:17:57.157 "flush": true, 00:17:57.157 "reset": true, 00:17:57.157 "nvme_admin": false, 00:17:57.157 "nvme_io": false, 00:17:57.157 "nvme_io_md": false, 00:17:57.157 "write_zeroes": true, 00:17:57.157 "zcopy": true, 00:17:57.157 "get_zone_info": false, 00:17:57.157 "zone_management": false, 00:17:57.157 "zone_append": false, 00:17:57.157 "compare": false, 00:17:57.157 "compare_and_write": false, 00:17:57.157 "abort": true, 00:17:57.157 "seek_hole": false, 00:17:57.157 "seek_data": false, 00:17:57.157 "copy": true, 00:17:57.157 "nvme_iov_md": false 00:17:57.157 }, 00:17:57.157 "memory_domains": [ 00:17:57.157 { 00:17:57.157 "dma_device_id": "system", 00:17:57.157 "dma_device_type": 1 00:17:57.157 }, 00:17:57.157 { 00:17:57.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.157 "dma_device_type": 2 00:17:57.157 } 00:17:57.157 ], 00:17:57.157 "driver_specific": {} 00:17:57.157 } 00:17:57.157 ] 00:17:57.157 00:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:57.157 00:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:57.157 00:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:57.157 00:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:57.416 [2024-07-25 00:01:53.101604] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:57.416 [2024-07-25 00:01:53.101662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:57.416 [2024-07-25 00:01:53.101711] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.416 [2024-07-25 
00:01:53.104167] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.416 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.674 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.674 "name": "Existed_Raid", 00:17:57.674 "uuid": "1a3c42da-9fe4-4196-8488-f46eb2122eac", 00:17:57.674 "strip_size_kb": 64, 00:17:57.674 "state": "configuring", 00:17:57.674 "raid_level": "concat", 00:17:57.674 "superblock": true, 00:17:57.674 "num_base_bdevs": 3, 00:17:57.674 "num_base_bdevs_discovered": 2, 00:17:57.674 "num_base_bdevs_operational": 3, 00:17:57.674 "base_bdevs_list": [ 00:17:57.674 { 00:17:57.674 "name": "BaseBdev1", 00:17:57.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.674 "is_configured": false, 00:17:57.674 "data_offset": 0, 00:17:57.674 "data_size": 0 00:17:57.674 }, 00:17:57.674 { 00:17:57.674 "name": "BaseBdev2", 00:17:57.674 "uuid": "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26", 00:17:57.674 "is_configured": true, 00:17:57.674 "data_offset": 2048, 00:17:57.674 "data_size": 63488 00:17:57.674 }, 00:17:57.674 { 00:17:57.674 "name": "BaseBdev3", 00:17:57.674 "uuid": "443b3ba7-c504-4259-853c-4a6483658b92", 00:17:57.674 "is_configured": true, 00:17:57.674 "data_offset": 2048, 00:17:57.674 "data_size": 63488 00:17:57.674 } 00:17:57.674 ] 00:17:57.674 }' 00:17:57.674 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.674 00:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.933 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:57.933 [2024-07-25 00:01:53.801752] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- 
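The @305 call near the top of this chunk recreates the array while BaseBdev1 deliberately does not exist yet: the bdev.c:8190 NOTICE and the "doesn't exist now" message show the raid module tolerating the missing member, claiming BaseBdev2 and BaseBdev3, and parking the array in the configuring state with 2 of 3 members discovered, which @306 then asserts. The creation call, spelled out:

    # bdev_raid.sh@305: concat array, 64 KiB strip, on-disk superblock (-s);
    # BaseBdev1 is listed but has not been created yet
    $rpc bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # the array cannot come online with a member missing
    state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state')
    [[ $state == configuring ]]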
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.191 00:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.450 00:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.450 "name": "Existed_Raid", 00:17:58.450 "uuid": "1a3c42da-9fe4-4196-8488-f46eb2122eac", 00:17:58.450 "strip_size_kb": 64, 00:17:58.450 "state": "configuring", 00:17:58.450 "raid_level": "concat", 00:17:58.450 "superblock": true, 00:17:58.450 "num_base_bdevs": 3, 00:17:58.450 "num_base_bdevs_discovered": 1, 00:17:58.450 "num_base_bdevs_operational": 3, 00:17:58.450 "base_bdevs_list": [ 00:17:58.450 { 00:17:58.450 "name": "BaseBdev1", 00:17:58.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.450 "is_configured": false, 00:17:58.450 "data_offset": 0, 00:17:58.450 "data_size": 0 00:17:58.450 }, 00:17:58.450 { 00:17:58.450 "name": null, 00:17:58.450 "uuid": "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26", 00:17:58.450 "is_configured": false, 00:17:58.450 "data_offset": 2048, 00:17:58.450 "data_size": 63488 00:17:58.450 }, 00:17:58.450 { 00:17:58.450 "name": "BaseBdev3", 00:17:58.450 "uuid": "443b3ba7-c504-4259-853c-4a6483658b92", 00:17:58.450 "is_configured": true, 00:17:58.450 "data_offset": 2048, 00:17:58.450 "data_size": 63488 00:17:58.450 } 00:17:58.450 ] 00:17:58.450 }' 00:17:58.450 00:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.450 00:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.708 00:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.708 00:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:58.967 00:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:58.967 00:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:58.967 [2024-07-25 00:01:54.825018] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.967 BaseBdev1 00:17:59.226 00:01:54 
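@308-310 above exercise bdev_raid_remove_base_bdev on the configuring array: the detached member's slot is not dropped from base_bdevs_list, it merely reverts to "name": null with is_configured false (the BaseBdev2 malloc bdev itself survives and is re-attached later), and the chunk ends by finally creating BaseBdev1, which the raid claims on sight (bdev_raid.c:3288). A sketch of the detach-and-probe step:

    # bdev_raid.sh@308: detach BaseBdev2 from the raid; the bdev itself remains
    $rpc bdev_raid_remove_base_bdev BaseBdev2

    # bdev_raid.sh@310: slot 1 must still exist but be unconfigured
    $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'   # expect: false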
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:59.226 00:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:59.226 00:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:59.226 00:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:59.226 00:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:59.226 00:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:59.226 00:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.226 00:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:59.486 [ 00:17:59.486 { 00:17:59.486 "name": "BaseBdev1", 00:17:59.486 "aliases": [ 00:17:59.486 "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f" 00:17:59.486 ], 00:17:59.486 "product_name": "Malloc disk", 00:17:59.486 "block_size": 512, 00:17:59.486 "num_blocks": 65536, 00:17:59.486 "uuid": "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f", 00:17:59.486 "assigned_rate_limits": { 00:17:59.486 "rw_ios_per_sec": 0, 00:17:59.486 "rw_mbytes_per_sec": 0, 00:17:59.486 "r_mbytes_per_sec": 0, 00:17:59.486 "w_mbytes_per_sec": 0 00:17:59.486 }, 00:17:59.486 "claimed": true, 00:17:59.486 "claim_type": "exclusive_write", 00:17:59.486 "zoned": false, 00:17:59.486 "supported_io_types": { 00:17:59.486 "read": true, 00:17:59.486 "write": true, 00:17:59.486 "unmap": true, 00:17:59.486 "flush": true, 00:17:59.486 "reset": true, 00:17:59.486 "nvme_admin": false, 00:17:59.486 "nvme_io": false, 00:17:59.486 "nvme_io_md": false, 00:17:59.486 "write_zeroes": true, 00:17:59.486 "zcopy": true, 00:17:59.486 "get_zone_info": false, 00:17:59.486 "zone_management": false, 00:17:59.486 "zone_append": false, 00:17:59.486 "compare": false, 00:17:59.486 "compare_and_write": false, 00:17:59.486 "abort": true, 00:17:59.486 "seek_hole": false, 00:17:59.486 "seek_data": false, 00:17:59.486 "copy": true, 00:17:59.486 "nvme_iov_md": false 00:17:59.486 }, 00:17:59.486 "memory_domains": [ 00:17:59.486 { 00:17:59.486 "dma_device_id": "system", 00:17:59.486 "dma_device_type": 1 00:17:59.486 }, 00:17:59.486 { 00:17:59.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.486 "dma_device_type": 2 00:17:59.486 } 00:17:59.486 ], 00:17:59.486 "driver_specific": {} 00:17:59.486 } 00:17:59.486 ] 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.486 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.745 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.745 "name": "Existed_Raid", 00:17:59.745 "uuid": "1a3c42da-9fe4-4196-8488-f46eb2122eac", 00:17:59.745 "strip_size_kb": 64, 00:17:59.745 "state": "configuring", 00:17:59.745 "raid_level": "concat", 00:17:59.745 "superblock": true, 00:17:59.745 "num_base_bdevs": 3, 00:17:59.745 "num_base_bdevs_discovered": 2, 00:17:59.745 "num_base_bdevs_operational": 3, 00:17:59.745 "base_bdevs_list": [ 00:17:59.745 { 00:17:59.745 "name": "BaseBdev1", 00:17:59.745 "uuid": "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f", 00:17:59.745 "is_configured": true, 00:17:59.745 "data_offset": 2048, 00:17:59.745 "data_size": 63488 00:17:59.745 }, 00:17:59.745 { 00:17:59.745 "name": null, 00:17:59.745 "uuid": "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26", 00:17:59.745 "is_configured": false, 00:17:59.745 "data_offset": 2048, 00:17:59.745 "data_size": 63488 00:17:59.745 }, 00:17:59.745 { 00:17:59.745 "name": "BaseBdev3", 00:17:59.745 "uuid": "443b3ba7-c504-4259-853c-4a6483658b92", 00:17:59.745 "is_configured": true, 00:17:59.745 "data_offset": 2048, 00:17:59.745 "data_size": 63488 00:17:59.745 } 00:17:59.745 ] 00:17:59.745 }' 00:17:59.745 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.745 00:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.004 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.004 00:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:00.263 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:00.263 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:00.522 [2024-07-25 00:01:56.257516] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.522 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.783 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.783 "name": "Existed_Raid", 00:18:00.783 "uuid": "1a3c42da-9fe4-4196-8488-f46eb2122eac", 00:18:00.783 "strip_size_kb": 64, 00:18:00.783 "state": "configuring", 00:18:00.783 "raid_level": "concat", 00:18:00.783 "superblock": true, 00:18:00.783 "num_base_bdevs": 3, 00:18:00.783 "num_base_bdevs_discovered": 1, 00:18:00.783 "num_base_bdevs_operational": 3, 00:18:00.783 "base_bdevs_list": [ 00:18:00.783 { 00:18:00.783 "name": "BaseBdev1", 00:18:00.783 "uuid": "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f", 00:18:00.783 "is_configured": true, 00:18:00.783 "data_offset": 2048, 00:18:00.783 "data_size": 63488 00:18:00.783 }, 00:18:00.783 { 00:18:00.783 "name": null, 00:18:00.783 "uuid": "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26", 00:18:00.783 "is_configured": false, 00:18:00.783 "data_offset": 2048, 00:18:00.783 "data_size": 63488 00:18:00.783 }, 00:18:00.783 { 00:18:00.783 "name": null, 00:18:00.783 "uuid": "443b3ba7-c504-4259-853c-4a6483658b92", 00:18:00.783 "is_configured": false, 00:18:00.783 "data_offset": 2048, 00:18:00.784 "data_size": 63488 00:18:00.784 } 00:18:00.784 ] 00:18:00.784 }' 00:18:00.784 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.784 00:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.044 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:01.044 00:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.314 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:01.314 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:01.592 [2024-07-25 00:01:57.249747] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.592 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.851 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.851 "name": "Existed_Raid", 00:18:01.851 "uuid": "1a3c42da-9fe4-4196-8488-f46eb2122eac", 00:18:01.851 "strip_size_kb": 64, 00:18:01.851 "state": "configuring", 00:18:01.851 "raid_level": "concat", 00:18:01.851 "superblock": true, 00:18:01.851 "num_base_bdevs": 3, 00:18:01.851 "num_base_bdevs_discovered": 2, 00:18:01.851 "num_base_bdevs_operational": 3, 00:18:01.851 "base_bdevs_list": [ 00:18:01.851 { 00:18:01.851 "name": "BaseBdev1", 00:18:01.851 "uuid": "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f", 00:18:01.851 "is_configured": true, 00:18:01.851 "data_offset": 2048, 00:18:01.851 "data_size": 63488 00:18:01.851 }, 00:18:01.851 { 00:18:01.851 "name": null, 00:18:01.851 "uuid": "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26", 00:18:01.851 "is_configured": false, 00:18:01.851 "data_offset": 2048, 00:18:01.851 "data_size": 63488 00:18:01.851 }, 00:18:01.851 { 00:18:01.851 "name": "BaseBdev3", 00:18:01.851 "uuid": "443b3ba7-c504-4259-853c-4a6483658b92", 00:18:01.851 "is_configured": true, 00:18:01.851 "data_offset": 2048, 00:18:01.851 "data_size": 63488 00:18:01.851 } 00:18:01.851 ] 00:18:01.851 }' 00:18:01.851 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.851 00:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.110 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.110 00:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:02.369 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:02.369 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:02.628 [2024-07-25 00:01:58.298111] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- 
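Symmetric to the remove path, @321 re-attaches the still-existing BaseBdev3 with bdev_raid_add_base_bdev; the raid claims it again and the @322-323 checks confirm the slot flipped back to configured while the array stays in configuring (BaseBdev2's slot is still empty at this point). The chunk then deletes the malloc bdev behind the configured BaseBdev1 at @325, which the next chunk examines. The re-attach step, sketched:

    # bdev_raid.sh@321: put the existing BaseBdev3 back into the named raid
    $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3

    # bdev_raid.sh@323: its slot (index 2) is configured again
    $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expect: true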
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.628 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.887 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.887 "name": "Existed_Raid", 00:18:02.887 "uuid": "1a3c42da-9fe4-4196-8488-f46eb2122eac", 00:18:02.887 "strip_size_kb": 64, 00:18:02.887 "state": "configuring", 00:18:02.887 "raid_level": "concat", 00:18:02.887 "superblock": true, 00:18:02.887 "num_base_bdevs": 3, 00:18:02.887 "num_base_bdevs_discovered": 1, 00:18:02.887 "num_base_bdevs_operational": 3, 00:18:02.887 "base_bdevs_list": [ 00:18:02.887 { 00:18:02.887 "name": null, 00:18:02.887 "uuid": "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f", 00:18:02.887 "is_configured": false, 00:18:02.887 "data_offset": 2048, 00:18:02.887 "data_size": 63488 00:18:02.887 }, 00:18:02.887 { 00:18:02.887 "name": null, 00:18:02.887 "uuid": "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26", 00:18:02.887 "is_configured": false, 00:18:02.887 "data_offset": 2048, 00:18:02.887 "data_size": 63488 00:18:02.887 }, 00:18:02.887 { 00:18:02.887 "name": "BaseBdev3", 00:18:02.887 "uuid": "443b3ba7-c504-4259-853c-4a6483658b92", 00:18:02.887 "is_configured": true, 00:18:02.887 "data_offset": 2048, 00:18:02.887 "data_size": 63488 00:18:02.887 } 00:18:02.887 ] 00:18:02.887 }' 00:18:02.887 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.887 00:01:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.146 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.146 00:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:03.406 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:03.406 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:03.665 [2024-07-25 00:01:59.387009] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:03.665 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:03.665 00:01:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.665 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:03.665 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:03.665 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:03.665 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:03.665 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.665 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.665 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.665 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.665 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.665 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.924 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.924 "name": "Existed_Raid", 00:18:03.924 "uuid": "1a3c42da-9fe4-4196-8488-f46eb2122eac", 00:18:03.924 "strip_size_kb": 64, 00:18:03.924 "state": "configuring", 00:18:03.924 "raid_level": "concat", 00:18:03.924 "superblock": true, 00:18:03.924 "num_base_bdevs": 3, 00:18:03.924 "num_base_bdevs_discovered": 2, 00:18:03.924 "num_base_bdevs_operational": 3, 00:18:03.924 "base_bdevs_list": [ 00:18:03.924 { 00:18:03.924 "name": null, 00:18:03.924 "uuid": "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f", 00:18:03.924 "is_configured": false, 00:18:03.924 "data_offset": 2048, 00:18:03.924 "data_size": 63488 00:18:03.924 }, 00:18:03.924 { 00:18:03.924 "name": "BaseBdev2", 00:18:03.924 "uuid": "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26", 00:18:03.924 "is_configured": true, 00:18:03.924 "data_offset": 2048, 00:18:03.924 "data_size": 63488 00:18:03.924 }, 00:18:03.924 { 00:18:03.924 "name": "BaseBdev3", 00:18:03.924 "uuid": "443b3ba7-c504-4259-853c-4a6483658b92", 00:18:03.924 "is_configured": true, 00:18:03.924 "data_offset": 2048, 00:18:03.924 "data_size": 63488 00:18:03.924 } 00:18:03.924 ] 00:18:03.924 }' 00:18:03.924 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.924 00:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.183 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.183 00:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:04.442 00:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:04.442 00:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.442 00:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:04.700 00:02:00 
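The JSON above is the payoff of the -s superblock option this test variant exists for: BaseBdev1's backing device was deleted at @325, yet its slot still reports the member UUID d0d0d3c9-... under "name": null, so the member's identity outlives the bdev. @333 reads that UUID back so the test can fabricate a replacement device with the same identity; a sketch of the extraction (the uuid variable name is mine, not the script's):

    # bdev_raid.sh@333: recover the departed member's UUID from the raid's view
    uuid=$($rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
    echo "$uuid"   # d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f in this run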
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f 00:18:04.958 [2024-07-25 00:02:00.699824] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:04.958 [2024-07-25 00:02:00.700129] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:18:04.958 [2024-07-25 00:02:00.700154] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:04.958 [2024-07-25 00:02:00.700265] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 00:18:04.958 [2024-07-25 00:02:00.700723] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:18:04.958 [2024-07-25 00:02:00.700741] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008a80 00:18:04.958 [2024-07-25 00:02:00.700915] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.958 NewBaseBdev 00:18:04.958 00:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:04.958 00:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:04.958 00:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:04.958 00:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:04.958 00:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:04.958 00:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:04.958 00:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:05.217 00:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:05.477 [ 00:18:05.477 { 00:18:05.477 "name": "NewBaseBdev", 00:18:05.477 "aliases": [ 00:18:05.477 "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f" 00:18:05.477 ], 00:18:05.477 "product_name": "Malloc disk", 00:18:05.477 "block_size": 512, 00:18:05.477 "num_blocks": 65536, 00:18:05.477 "uuid": "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f", 00:18:05.477 "assigned_rate_limits": { 00:18:05.477 "rw_ios_per_sec": 0, 00:18:05.477 "rw_mbytes_per_sec": 0, 00:18:05.477 "r_mbytes_per_sec": 0, 00:18:05.477 "w_mbytes_per_sec": 0 00:18:05.477 }, 00:18:05.477 "claimed": true, 00:18:05.477 "claim_type": "exclusive_write", 00:18:05.477 "zoned": false, 00:18:05.477 "supported_io_types": { 00:18:05.477 "read": true, 00:18:05.477 "write": true, 00:18:05.477 "unmap": true, 00:18:05.477 "flush": true, 00:18:05.477 "reset": true, 00:18:05.477 "nvme_admin": false, 00:18:05.477 "nvme_io": false, 00:18:05.477 "nvme_io_md": false, 00:18:05.477 "write_zeroes": true, 00:18:05.477 "zcopy": true, 00:18:05.477 "get_zone_info": false, 00:18:05.477 "zone_management": false, 00:18:05.477 "zone_append": false, 00:18:05.477 "compare": false, 00:18:05.477 "compare_and_write": false, 00:18:05.477 "abort": true, 00:18:05.477 "seek_hole": false, 00:18:05.477 "seek_data": false, 00:18:05.477 "copy": true, 00:18:05.477 "nvme_iov_md": false 
00:18:05.477 }, 00:18:05.477 "memory_domains": [ 00:18:05.477 { 00:18:05.477 "dma_device_id": "system", 00:18:05.477 "dma_device_type": 1 00:18:05.477 }, 00:18:05.477 { 00:18:05.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.477 "dma_device_type": 2 00:18:05.477 } 00:18:05.477 ], 00:18:05.477 "driver_specific": {} 00:18:05.477 } 00:18:05.477 ] 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.477 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.736 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.736 "name": "Existed_Raid", 00:18:05.736 "uuid": "1a3c42da-9fe4-4196-8488-f46eb2122eac", 00:18:05.736 "strip_size_kb": 64, 00:18:05.736 "state": "online", 00:18:05.736 "raid_level": "concat", 00:18:05.736 "superblock": true, 00:18:05.736 "num_base_bdevs": 3, 00:18:05.736 "num_base_bdevs_discovered": 3, 00:18:05.736 "num_base_bdevs_operational": 3, 00:18:05.736 "base_bdevs_list": [ 00:18:05.736 { 00:18:05.736 "name": "NewBaseBdev", 00:18:05.736 "uuid": "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f", 00:18:05.736 "is_configured": true, 00:18:05.736 "data_offset": 2048, 00:18:05.736 "data_size": 63488 00:18:05.736 }, 00:18:05.736 { 00:18:05.736 "name": "BaseBdev2", 00:18:05.736 "uuid": "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26", 00:18:05.736 "is_configured": true, 00:18:05.736 "data_offset": 2048, 00:18:05.736 "data_size": 63488 00:18:05.736 }, 00:18:05.736 { 00:18:05.736 "name": "BaseBdev3", 00:18:05.736 "uuid": "443b3ba7-c504-4259-853c-4a6483658b92", 00:18:05.736 "is_configured": true, 00:18:05.736 "data_offset": 2048, 00:18:05.736 "data_size": 63488 00:18:05.736 } 00:18:05.736 ] 00:18:05.736 }' 00:18:05.736 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.736 00:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.994 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:05.994 00:02:01 
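@333-335 close the loop: a fresh malloc bdev is created under a new name, NewBaseBdev, but with the recovered UUID passed via -u; the raid claims it at once, configuration completes (io device register, blockcnt 190464 = 3 x 63488 data blocks given the data_offset of 2048), and the array transitions to online with all three members discovered, exactly as the verify JSON above records. The recreation step:

    # bdev_raid.sh@333: same geometry as the lost member, same UUID, new name
    $rpc bdev_malloc_create 32 512 -b NewBaseBdev -u d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f

    # via verify_raid_bdev_state: the array is whole again
    $rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'
    # expect: online 3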
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:05.994 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:05.994 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:05.994 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:05.994 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:05.994 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:05.994 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:06.252 [2024-07-25 00:02:01.940766] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.252 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:06.252 "name": "Existed_Raid", 00:18:06.252 "aliases": [ 00:18:06.252 "1a3c42da-9fe4-4196-8488-f46eb2122eac" 00:18:06.252 ], 00:18:06.252 "product_name": "Raid Volume", 00:18:06.252 "block_size": 512, 00:18:06.252 "num_blocks": 190464, 00:18:06.252 "uuid": "1a3c42da-9fe4-4196-8488-f46eb2122eac", 00:18:06.252 "assigned_rate_limits": { 00:18:06.252 "rw_ios_per_sec": 0, 00:18:06.252 "rw_mbytes_per_sec": 0, 00:18:06.252 "r_mbytes_per_sec": 0, 00:18:06.252 "w_mbytes_per_sec": 0 00:18:06.252 }, 00:18:06.252 "claimed": false, 00:18:06.252 "zoned": false, 00:18:06.252 "supported_io_types": { 00:18:06.252 "read": true, 00:18:06.252 "write": true, 00:18:06.252 "unmap": true, 00:18:06.252 "flush": true, 00:18:06.252 "reset": true, 00:18:06.252 "nvme_admin": false, 00:18:06.252 "nvme_io": false, 00:18:06.252 "nvme_io_md": false, 00:18:06.252 "write_zeroes": true, 00:18:06.252 "zcopy": false, 00:18:06.252 "get_zone_info": false, 00:18:06.252 "zone_management": false, 00:18:06.252 "zone_append": false, 00:18:06.252 "compare": false, 00:18:06.252 "compare_and_write": false, 00:18:06.252 "abort": false, 00:18:06.252 "seek_hole": false, 00:18:06.252 "seek_data": false, 00:18:06.252 "copy": false, 00:18:06.252 "nvme_iov_md": false 00:18:06.252 }, 00:18:06.252 "memory_domains": [ 00:18:06.252 { 00:18:06.252 "dma_device_id": "system", 00:18:06.252 "dma_device_type": 1 00:18:06.252 }, 00:18:06.252 { 00:18:06.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.252 "dma_device_type": 2 00:18:06.252 }, 00:18:06.252 { 00:18:06.252 "dma_device_id": "system", 00:18:06.252 "dma_device_type": 1 00:18:06.252 }, 00:18:06.252 { 00:18:06.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.252 "dma_device_type": 2 00:18:06.252 }, 00:18:06.252 { 00:18:06.252 "dma_device_id": "system", 00:18:06.252 "dma_device_type": 1 00:18:06.252 }, 00:18:06.252 { 00:18:06.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.252 "dma_device_type": 2 00:18:06.252 } 00:18:06.252 ], 00:18:06.252 "driver_specific": { 00:18:06.252 "raid": { 00:18:06.252 "uuid": "1a3c42da-9fe4-4196-8488-f46eb2122eac", 00:18:06.252 "strip_size_kb": 64, 00:18:06.252 "state": "online", 00:18:06.252 "raid_level": "concat", 00:18:06.252 "superblock": true, 00:18:06.252 "num_base_bdevs": 3, 00:18:06.252 "num_base_bdevs_discovered": 3, 00:18:06.252 "num_base_bdevs_operational": 3, 00:18:06.252 "base_bdevs_list": [ 00:18:06.252 { 00:18:06.252 "name": "NewBaseBdev", 00:18:06.252 "uuid": 
"d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f", 00:18:06.252 "is_configured": true, 00:18:06.252 "data_offset": 2048, 00:18:06.252 "data_size": 63488 00:18:06.252 }, 00:18:06.252 { 00:18:06.252 "name": "BaseBdev2", 00:18:06.252 "uuid": "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26", 00:18:06.252 "is_configured": true, 00:18:06.252 "data_offset": 2048, 00:18:06.252 "data_size": 63488 00:18:06.252 }, 00:18:06.252 { 00:18:06.252 "name": "BaseBdev3", 00:18:06.252 "uuid": "443b3ba7-c504-4259-853c-4a6483658b92", 00:18:06.252 "is_configured": true, 00:18:06.252 "data_offset": 2048, 00:18:06.252 "data_size": 63488 00:18:06.252 } 00:18:06.252 ] 00:18:06.252 } 00:18:06.252 } 00:18:06.252 }' 00:18:06.252 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:06.252 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:06.252 BaseBdev2 00:18:06.252 BaseBdev3' 00:18:06.252 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.252 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.252 00:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.511 "name": "NewBaseBdev", 00:18:06.511 "aliases": [ 00:18:06.511 "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f" 00:18:06.511 ], 00:18:06.511 "product_name": "Malloc disk", 00:18:06.511 "block_size": 512, 00:18:06.511 "num_blocks": 65536, 00:18:06.511 "uuid": "d0d0d3c9-0356-4fa7-b38a-a6e6b3f5981f", 00:18:06.511 "assigned_rate_limits": { 00:18:06.511 "rw_ios_per_sec": 0, 00:18:06.511 "rw_mbytes_per_sec": 0, 00:18:06.511 "r_mbytes_per_sec": 0, 00:18:06.511 "w_mbytes_per_sec": 0 00:18:06.511 }, 00:18:06.511 "claimed": true, 00:18:06.511 "claim_type": "exclusive_write", 00:18:06.511 "zoned": false, 00:18:06.511 "supported_io_types": { 00:18:06.511 "read": true, 00:18:06.511 "write": true, 00:18:06.511 "unmap": true, 00:18:06.511 "flush": true, 00:18:06.511 "reset": true, 00:18:06.511 "nvme_admin": false, 00:18:06.511 "nvme_io": false, 00:18:06.511 "nvme_io_md": false, 00:18:06.511 "write_zeroes": true, 00:18:06.511 "zcopy": true, 00:18:06.511 "get_zone_info": false, 00:18:06.511 "zone_management": false, 00:18:06.511 "zone_append": false, 00:18:06.511 "compare": false, 00:18:06.511 "compare_and_write": false, 00:18:06.511 "abort": true, 00:18:06.511 "seek_hole": false, 00:18:06.511 "seek_data": false, 00:18:06.511 "copy": true, 00:18:06.511 "nvme_iov_md": false 00:18:06.511 }, 00:18:06.511 "memory_domains": [ 00:18:06.511 { 00:18:06.511 "dma_device_id": "system", 00:18:06.511 "dma_device_type": 1 00:18:06.511 }, 00:18:06.511 { 00:18:06.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.511 "dma_device_type": 2 00:18:06.511 } 00:18:06.511 ], 00:18:06.511 "driver_specific": {} 00:18:06.511 }' 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:06.511 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.770 "name": "BaseBdev2", 00:18:06.770 "aliases": [ 00:18:06.770 "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26" 00:18:06.770 ], 00:18:06.770 "product_name": "Malloc disk", 00:18:06.770 "block_size": 512, 00:18:06.770 "num_blocks": 65536, 00:18:06.770 "uuid": "fbd2a16c-bc3e-4e29-bcbe-4e8048ec8d26", 00:18:06.770 "assigned_rate_limits": { 00:18:06.770 "rw_ios_per_sec": 0, 00:18:06.770 "rw_mbytes_per_sec": 0, 00:18:06.770 "r_mbytes_per_sec": 0, 00:18:06.770 "w_mbytes_per_sec": 0 00:18:06.770 }, 00:18:06.770 "claimed": true, 00:18:06.770 "claim_type": "exclusive_write", 00:18:06.770 "zoned": false, 00:18:06.770 "supported_io_types": { 00:18:06.770 "read": true, 00:18:06.770 "write": true, 00:18:06.770 "unmap": true, 00:18:06.770 "flush": true, 00:18:06.770 "reset": true, 00:18:06.770 "nvme_admin": false, 00:18:06.770 "nvme_io": false, 00:18:06.770 "nvme_io_md": false, 00:18:06.770 "write_zeroes": true, 00:18:06.770 "zcopy": true, 00:18:06.770 "get_zone_info": false, 00:18:06.770 "zone_management": false, 00:18:06.770 "zone_append": false, 00:18:06.770 "compare": false, 00:18:06.770 "compare_and_write": false, 00:18:06.770 "abort": true, 00:18:06.770 "seek_hole": false, 00:18:06.770 "seek_data": false, 00:18:06.770 "copy": true, 00:18:06.770 "nvme_iov_md": false 00:18:06.770 }, 00:18:06.770 "memory_domains": [ 00:18:06.770 { 00:18:06.770 "dma_device_id": "system", 00:18:06.770 "dma_device_type": 1 00:18:06.770 }, 00:18:06.770 { 00:18:06.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.770 "dma_device_type": 2 00:18:06.770 } 00:18:06.770 ], 00:18:06.770 "driver_specific": {} 00:18:06.770 }' 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.770 00:02:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:06.770 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:07.029 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:07.029 "name": "BaseBdev3", 00:18:07.029 "aliases": [ 00:18:07.029 "443b3ba7-c504-4259-853c-4a6483658b92" 00:18:07.029 ], 00:18:07.029 "product_name": "Malloc disk", 00:18:07.029 "block_size": 512, 00:18:07.029 "num_blocks": 65536, 00:18:07.029 "uuid": "443b3ba7-c504-4259-853c-4a6483658b92", 00:18:07.029 "assigned_rate_limits": { 00:18:07.029 "rw_ios_per_sec": 0, 00:18:07.029 "rw_mbytes_per_sec": 0, 00:18:07.029 "r_mbytes_per_sec": 0, 00:18:07.029 "w_mbytes_per_sec": 0 00:18:07.029 }, 00:18:07.029 "claimed": true, 00:18:07.029 "claim_type": "exclusive_write", 00:18:07.029 "zoned": false, 00:18:07.029 "supported_io_types": { 00:18:07.029 "read": true, 00:18:07.029 "write": true, 00:18:07.029 "unmap": true, 00:18:07.029 "flush": true, 00:18:07.029 "reset": true, 00:18:07.029 "nvme_admin": false, 00:18:07.029 "nvme_io": false, 00:18:07.029 "nvme_io_md": false, 00:18:07.029 "write_zeroes": true, 00:18:07.029 "zcopy": true, 00:18:07.029 "get_zone_info": false, 00:18:07.029 "zone_management": false, 00:18:07.029 "zone_append": false, 00:18:07.029 "compare": false, 00:18:07.029 "compare_and_write": false, 00:18:07.029 "abort": true, 00:18:07.029 "seek_hole": false, 00:18:07.029 "seek_data": false, 00:18:07.029 "copy": true, 00:18:07.029 "nvme_iov_md": false 00:18:07.029 }, 00:18:07.029 "memory_domains": [ 00:18:07.029 { 00:18:07.029 "dma_device_id": "system", 00:18:07.029 "dma_device_type": 1 00:18:07.029 }, 00:18:07.029 { 00:18:07.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.029 "dma_device_type": 2 00:18:07.029 } 00:18:07.029 ], 00:18:07.029 "driver_specific": {} 00:18:07.029 }' 00:18:07.029 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.029 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.029 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:07.029 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.029 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.029 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:07.029 00:02:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.287 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.287 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:07.287 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.287 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.287 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.287 00:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:07.546 [2024-07-25 00:02:03.168789] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.546 [2024-07-25 00:02:03.168825] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.546 [2024-07-25 00:02:03.168941] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.546 [2024-07-25 00:02:03.169027] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.546 [2024-07-25 00:02:03.169048] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name Existed_Raid, state offline 00:18:07.546 00:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 83499 00:18:07.546 00:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83499 ']' 00:18:07.546 00:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83499 00:18:07.546 00:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:07.546 00:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.546 00:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83499 00:18:07.546 killing process with pid 83499 00:18:07.546 00:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:07.546 00:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:07.546 00:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83499' 00:18:07.546 00:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83499 00:18:07.546 [2024-07-25 00:02:03.219588] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.546 00:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83499 00:18:07.804 [2024-07-25 00:02:03.438972] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.739 00:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:08.739 00:18:08.739 real 0m23.523s 00:18:08.739 user 0m41.044s 00:18:08.739 sys 0m3.645s 00:18:08.739 00:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:08.739 00:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.739 ************************************ 00:18:08.739 END TEST raid_state_function_test_sb 00:18:08.739 ************************************ 
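(For reference, the state/property verification traced repeatedly through the test above reduces to two RPC round-trips against the bdev_svc app. Below is a minimal consolidated sketch; the socket path, rpc.py location, the "Existed_Raid" name, and the jq filters are taken verbatim from the trace, and the only liberty is folding the repeated per-field probes into one loop. Note that keys absent from a bdev descriptor come back from jq as null, which is why the trace's md_size/md_interleave/dif_type checks all compare against null.)

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# State check: fetch the raid bdev record and inspect the fields the test
# asserts on (state, raid_level, strip_size_kb, and the base-bdev counts).
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# Property check: list the configured base bdevs, then probe each one's
# descriptor for the same fields the trace checks one at a time.
for name in $($rpc bdev_get_bdevs -b Existed_Raid \
    | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'); do
  $rpc bdev_get_bdevs -b "$name" | jq '.[0] | {block_size, md_size, md_interleave, dif_type}'
done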
00:18:08.739 00:02:04 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:18:08.739 00:02:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:08.739 00:02:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:08.739 00:02:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:08.739 ************************************ 00:18:08.739 START TEST raid_superblock_test 00:18:08.739 ************************************ 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=84375 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 84375 /var/tmp/spdk-raid.sock 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84375 ']' 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:08.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
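(The trace above is the daemon bring-up step for the superblock test: bdev_svc is launched with the bdev_raid debug log flag and the script blocks until the JSON-RPC socket accepts connections. A minimal sketch of that idiom follows, using the binary path, socket, and helper name shown in the trace; the background-launch form with $! is an assumption inferred from the raid_pid=84375 assignment, and waitforlisten is the polling helper from autotest_common.sh.)

# Start the bare bdev service app with bdev_raid debug logging enabled,
# pointing its JSON-RPC listener at a private UNIX-domain socket.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!   # assumed background-launch idiom; the trace only shows the resulting pid

# Block until the app is up and the socket accepts connections, so the
# rpc.py calls that follow do not race the daemon's startup.
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock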
00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:08.739 00:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.997 [2024-07-25 00:02:04.628928] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:18:08.998 [2024-07-25 00:02:04.629916] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84375 ] 00:18:08.998 [2024-07-25 00:02:04.804633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.256 [2024-07-25 00:02:05.019382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.514 [2024-07-25 00:02:05.184868] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.772 00:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:09.772 00:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:09.772 00:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:18:09.772 00:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:09.772 00:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:18:09.772 00:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:18:09.772 00:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:09.772 00:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:09.772 00:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:09.772 00:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:09.772 00:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:10.030 malloc1 00:18:10.030 00:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:10.287 [2024-07-25 00:02:05.992039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:10.287 [2024-07-25 00:02:05.992393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.287 [2024-07-25 00:02:05.992565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:18:10.287 [2024-07-25 00:02:05.992594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.287 [2024-07-25 00:02:05.995298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.287 pt1 00:18:10.287 [2024-07-25 00:02:05.995550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:10.287 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:10.287 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:10.287 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local 
bdev_malloc=malloc2 00:18:10.287 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:18:10.287 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:10.287 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.287 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.288 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.288 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:10.545 malloc2 00:18:10.545 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:10.804 [2024-07-25 00:02:06.505852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.804 [2024-07-25 00:02:06.505957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.804 [2024-07-25 00:02:06.505990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:18:10.804 [2024-07-25 00:02:06.506004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.804 [2024-07-25 00:02:06.508413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.804 [2024-07-25 00:02:06.508456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.804 pt2 00:18:10.804 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:10.804 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:10.804 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:18:10.804 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:18:10.804 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:10.804 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.804 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.804 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.804 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:11.062 malloc3 00:18:11.062 00:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:11.320 [2024-07-25 00:02:07.022373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:11.320 [2024-07-25 00:02:07.022464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.320 [2024-07-25 00:02:07.022497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:18:11.320 [2024-07-25 00:02:07.022511] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.320 [2024-07-25 00:02:07.025135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.321 [2024-07-25 00:02:07.025193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:11.321 pt3 00:18:11.321 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:11.321 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:11.321 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:11.579 [2024-07-25 00:02:07.262466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:11.579 [2024-07-25 00:02:07.264790] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.579 [2024-07-25 00:02:07.265072] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:11.579 [2024-07-25 00:02:07.265518] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:18:11.579 [2024-07-25 00:02:07.265684] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:11.579 [2024-07-25 00:02:07.265906] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:18:11.579 [2024-07-25 00:02:07.266379] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:18:11.579 [2024-07-25 00:02:07.266535] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:18:11.579 [2024-07-25 00:02:07.266938] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.579 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.838 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.838 "name": "raid_bdev1", 00:18:11.838 "uuid": "275951df-a3b0-4c3c-99b0-962413ccedbe", 00:18:11.838 "strip_size_kb": 64, 00:18:11.838 "state": 
"online", 00:18:11.838 "raid_level": "concat", 00:18:11.838 "superblock": true, 00:18:11.838 "num_base_bdevs": 3, 00:18:11.838 "num_base_bdevs_discovered": 3, 00:18:11.838 "num_base_bdevs_operational": 3, 00:18:11.838 "base_bdevs_list": [ 00:18:11.838 { 00:18:11.838 "name": "pt1", 00:18:11.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.838 "is_configured": true, 00:18:11.838 "data_offset": 2048, 00:18:11.838 "data_size": 63488 00:18:11.838 }, 00:18:11.838 { 00:18:11.838 "name": "pt2", 00:18:11.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.838 "is_configured": true, 00:18:11.838 "data_offset": 2048, 00:18:11.838 "data_size": 63488 00:18:11.838 }, 00:18:11.838 { 00:18:11.838 "name": "pt3", 00:18:11.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.838 "is_configured": true, 00:18:11.838 "data_offset": 2048, 00:18:11.838 "data_size": 63488 00:18:11.838 } 00:18:11.838 ] 00:18:11.838 }' 00:18:11.838 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.838 00:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.097 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:18:12.097 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:12.097 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:12.097 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:12.097 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:12.097 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:12.097 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:12.097 00:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:12.355 [2024-07-25 00:02:08.107538] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.355 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:12.355 "name": "raid_bdev1", 00:18:12.355 "aliases": [ 00:18:12.355 "275951df-a3b0-4c3c-99b0-962413ccedbe" 00:18:12.355 ], 00:18:12.355 "product_name": "Raid Volume", 00:18:12.355 "block_size": 512, 00:18:12.355 "num_blocks": 190464, 00:18:12.355 "uuid": "275951df-a3b0-4c3c-99b0-962413ccedbe", 00:18:12.355 "assigned_rate_limits": { 00:18:12.355 "rw_ios_per_sec": 0, 00:18:12.355 "rw_mbytes_per_sec": 0, 00:18:12.355 "r_mbytes_per_sec": 0, 00:18:12.355 "w_mbytes_per_sec": 0 00:18:12.355 }, 00:18:12.355 "claimed": false, 00:18:12.355 "zoned": false, 00:18:12.355 "supported_io_types": { 00:18:12.355 "read": true, 00:18:12.355 "write": true, 00:18:12.355 "unmap": true, 00:18:12.355 "flush": true, 00:18:12.355 "reset": true, 00:18:12.355 "nvme_admin": false, 00:18:12.355 "nvme_io": false, 00:18:12.355 "nvme_io_md": false, 00:18:12.355 "write_zeroes": true, 00:18:12.355 "zcopy": false, 00:18:12.355 "get_zone_info": false, 00:18:12.355 "zone_management": false, 00:18:12.355 "zone_append": false, 00:18:12.355 "compare": false, 00:18:12.355 "compare_and_write": false, 00:18:12.355 "abort": false, 00:18:12.355 "seek_hole": false, 00:18:12.355 "seek_data": false, 00:18:12.355 "copy": false, 00:18:12.355 "nvme_iov_md": false 00:18:12.355 }, 00:18:12.355 "memory_domains": 
[ 00:18:12.355 { 00:18:12.355 "dma_device_id": "system", 00:18:12.355 "dma_device_type": 1 00:18:12.355 }, 00:18:12.355 { 00:18:12.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.355 "dma_device_type": 2 00:18:12.355 }, 00:18:12.355 { 00:18:12.355 "dma_device_id": "system", 00:18:12.355 "dma_device_type": 1 00:18:12.355 }, 00:18:12.355 { 00:18:12.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.355 "dma_device_type": 2 00:18:12.355 }, 00:18:12.355 { 00:18:12.355 "dma_device_id": "system", 00:18:12.355 "dma_device_type": 1 00:18:12.355 }, 00:18:12.355 { 00:18:12.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.355 "dma_device_type": 2 00:18:12.355 } 00:18:12.355 ], 00:18:12.355 "driver_specific": { 00:18:12.355 "raid": { 00:18:12.355 "uuid": "275951df-a3b0-4c3c-99b0-962413ccedbe", 00:18:12.355 "strip_size_kb": 64, 00:18:12.355 "state": "online", 00:18:12.355 "raid_level": "concat", 00:18:12.355 "superblock": true, 00:18:12.355 "num_base_bdevs": 3, 00:18:12.355 "num_base_bdevs_discovered": 3, 00:18:12.355 "num_base_bdevs_operational": 3, 00:18:12.355 "base_bdevs_list": [ 00:18:12.355 { 00:18:12.355 "name": "pt1", 00:18:12.355 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.355 "is_configured": true, 00:18:12.355 "data_offset": 2048, 00:18:12.355 "data_size": 63488 00:18:12.355 }, 00:18:12.355 { 00:18:12.355 "name": "pt2", 00:18:12.355 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.355 "is_configured": true, 00:18:12.355 "data_offset": 2048, 00:18:12.355 "data_size": 63488 00:18:12.355 }, 00:18:12.355 { 00:18:12.355 "name": "pt3", 00:18:12.355 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.355 "is_configured": true, 00:18:12.356 "data_offset": 2048, 00:18:12.356 "data_size": 63488 00:18:12.356 } 00:18:12.356 ] 00:18:12.356 } 00:18:12.356 } 00:18:12.356 }' 00:18:12.356 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:12.356 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:12.356 pt2 00:18:12.356 pt3' 00:18:12.356 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:12.356 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:12.356 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:12.614 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:12.614 "name": "pt1", 00:18:12.614 "aliases": [ 00:18:12.614 "00000000-0000-0000-0000-000000000001" 00:18:12.614 ], 00:18:12.614 "product_name": "passthru", 00:18:12.615 "block_size": 512, 00:18:12.615 "num_blocks": 65536, 00:18:12.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.615 "assigned_rate_limits": { 00:18:12.615 "rw_ios_per_sec": 0, 00:18:12.615 "rw_mbytes_per_sec": 0, 00:18:12.615 "r_mbytes_per_sec": 0, 00:18:12.615 "w_mbytes_per_sec": 0 00:18:12.615 }, 00:18:12.615 "claimed": true, 00:18:12.615 "claim_type": "exclusive_write", 00:18:12.615 "zoned": false, 00:18:12.615 "supported_io_types": { 00:18:12.615 "read": true, 00:18:12.615 "write": true, 00:18:12.615 "unmap": true, 00:18:12.615 "flush": true, 00:18:12.615 "reset": true, 00:18:12.615 "nvme_admin": false, 00:18:12.615 "nvme_io": false, 00:18:12.615 "nvme_io_md": false, 00:18:12.615 "write_zeroes": true, 00:18:12.615 "zcopy": true, 
00:18:12.615 "get_zone_info": false, 00:18:12.615 "zone_management": false, 00:18:12.615 "zone_append": false, 00:18:12.615 "compare": false, 00:18:12.615 "compare_and_write": false, 00:18:12.615 "abort": true, 00:18:12.615 "seek_hole": false, 00:18:12.615 "seek_data": false, 00:18:12.615 "copy": true, 00:18:12.615 "nvme_iov_md": false 00:18:12.615 }, 00:18:12.615 "memory_domains": [ 00:18:12.615 { 00:18:12.615 "dma_device_id": "system", 00:18:12.615 "dma_device_type": 1 00:18:12.615 }, 00:18:12.615 { 00:18:12.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.615 "dma_device_type": 2 00:18:12.615 } 00:18:12.615 ], 00:18:12.615 "driver_specific": { 00:18:12.615 "passthru": { 00:18:12.615 "name": "pt1", 00:18:12.615 "base_bdev_name": "malloc1" 00:18:12.615 } 00:18:12.615 } 00:18:12.615 }' 00:18:12.615 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:12.615 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:12.615 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:12.615 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:12.615 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:12.615 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:12.615 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.615 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.615 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:12.615 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.873 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.873 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:12.873 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:12.873 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:12.873 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:13.132 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:13.132 "name": "pt2", 00:18:13.132 "aliases": [ 00:18:13.132 "00000000-0000-0000-0000-000000000002" 00:18:13.132 ], 00:18:13.132 "product_name": "passthru", 00:18:13.132 "block_size": 512, 00:18:13.132 "num_blocks": 65536, 00:18:13.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.132 "assigned_rate_limits": { 00:18:13.132 "rw_ios_per_sec": 0, 00:18:13.132 "rw_mbytes_per_sec": 0, 00:18:13.132 "r_mbytes_per_sec": 0, 00:18:13.132 "w_mbytes_per_sec": 0 00:18:13.132 }, 00:18:13.133 "claimed": true, 00:18:13.133 "claim_type": "exclusive_write", 00:18:13.133 "zoned": false, 00:18:13.133 "supported_io_types": { 00:18:13.133 "read": true, 00:18:13.133 "write": true, 00:18:13.133 "unmap": true, 00:18:13.133 "flush": true, 00:18:13.133 "reset": true, 00:18:13.133 "nvme_admin": false, 00:18:13.133 "nvme_io": false, 00:18:13.133 "nvme_io_md": false, 00:18:13.133 "write_zeroes": true, 00:18:13.133 "zcopy": true, 00:18:13.133 "get_zone_info": false, 00:18:13.133 "zone_management": false, 00:18:13.133 "zone_append": false, 00:18:13.133 "compare": false, 00:18:13.133 
"compare_and_write": false, 00:18:13.133 "abort": true, 00:18:13.133 "seek_hole": false, 00:18:13.133 "seek_data": false, 00:18:13.133 "copy": true, 00:18:13.133 "nvme_iov_md": false 00:18:13.133 }, 00:18:13.133 "memory_domains": [ 00:18:13.133 { 00:18:13.133 "dma_device_id": "system", 00:18:13.133 "dma_device_type": 1 00:18:13.133 }, 00:18:13.133 { 00:18:13.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.133 "dma_device_type": 2 00:18:13.133 } 00:18:13.133 ], 00:18:13.133 "driver_specific": { 00:18:13.133 "passthru": { 00:18:13.133 "name": "pt2", 00:18:13.133 "base_bdev_name": "malloc2" 00:18:13.133 } 00:18:13.133 } 00:18:13.133 }' 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:13.133 00:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:13.391 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:13.391 "name": "pt3", 00:18:13.391 "aliases": [ 00:18:13.391 "00000000-0000-0000-0000-000000000003" 00:18:13.391 ], 00:18:13.391 "product_name": "passthru", 00:18:13.391 "block_size": 512, 00:18:13.391 "num_blocks": 65536, 00:18:13.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.392 "assigned_rate_limits": { 00:18:13.392 "rw_ios_per_sec": 0, 00:18:13.392 "rw_mbytes_per_sec": 0, 00:18:13.392 "r_mbytes_per_sec": 0, 00:18:13.392 "w_mbytes_per_sec": 0 00:18:13.392 }, 00:18:13.392 "claimed": true, 00:18:13.392 "claim_type": "exclusive_write", 00:18:13.392 "zoned": false, 00:18:13.392 "supported_io_types": { 00:18:13.392 "read": true, 00:18:13.392 "write": true, 00:18:13.392 "unmap": true, 00:18:13.392 "flush": true, 00:18:13.392 "reset": true, 00:18:13.392 "nvme_admin": false, 00:18:13.392 "nvme_io": false, 00:18:13.392 "nvme_io_md": false, 00:18:13.392 "write_zeroes": true, 00:18:13.392 "zcopy": true, 00:18:13.392 "get_zone_info": false, 00:18:13.392 "zone_management": false, 00:18:13.392 "zone_append": false, 00:18:13.392 "compare": false, 00:18:13.392 "compare_and_write": false, 00:18:13.392 "abort": true, 00:18:13.392 "seek_hole": false, 00:18:13.392 "seek_data": false, 00:18:13.392 "copy": true, 
00:18:13.392 "nvme_iov_md": false 00:18:13.392 }, 00:18:13.392 "memory_domains": [ 00:18:13.392 { 00:18:13.392 "dma_device_id": "system", 00:18:13.392 "dma_device_type": 1 00:18:13.392 }, 00:18:13.392 { 00:18:13.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.392 "dma_device_type": 2 00:18:13.392 } 00:18:13.392 ], 00:18:13.392 "driver_specific": { 00:18:13.392 "passthru": { 00:18:13.392 "name": "pt3", 00:18:13.392 "base_bdev_name": "malloc3" 00:18:13.392 } 00:18:13.392 } 00:18:13.392 }' 00:18:13.392 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.392 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.392 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:13.392 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.392 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.392 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:13.392 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.392 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.650 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:13.650 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.650 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.650 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:13.650 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:13.651 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:18:13.909 [2024-07-25 00:02:09.559990] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.909 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=275951df-a3b0-4c3c-99b0-962413ccedbe 00:18:13.909 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 275951df-a3b0-4c3c-99b0-962413ccedbe ']' 00:18:13.909 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:14.167 [2024-07-25 00:02:09.867756] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.167 [2024-07-25 00:02:09.867804] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.167 [2024-07-25 00:02:09.867935] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.167 [2024-07-25 00:02:09.868021] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.167 [2024-07-25 00:02:09.868053] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:18:14.167 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.167 00:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:18:14.425 00:02:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # raid_bdev= 00:18:14.425 00:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:18:14.425 00:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.425 00:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:14.683 00:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.683 00:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:14.942 00:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.942 00:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:15.201 00:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:15.201 00:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:15.460 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:18:15.460 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:15.460 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:15.460 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:15.460 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.460 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.460 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.460 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.460 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.460 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.460 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.461 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:15.461 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:15.719 [2024-07-25 00:02:11.464242] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:15.719 [2024-07-25 00:02:11.466575] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:18:15.719 [2024-07-25 00:02:11.466655] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:15.719 [2024-07-25 00:02:11.466754] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:15.719 [2024-07-25 00:02:11.466898] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:15.719 [2024-07-25 00:02:11.466935] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:15.719 [2024-07-25 00:02:11.466962] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:15.719 [2024-07-25 00:02:11.466976] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state configuring 00:18:15.719 request: 00:18:15.719 { 00:18:15.719 "name": "raid_bdev1", 00:18:15.719 "raid_level": "concat", 00:18:15.719 "base_bdevs": [ 00:18:15.719 "malloc1", 00:18:15.719 "malloc2", 00:18:15.719 "malloc3" 00:18:15.719 ], 00:18:15.719 "strip_size_kb": 64, 00:18:15.719 "superblock": false, 00:18:15.719 "method": "bdev_raid_create", 00:18:15.719 "req_id": 1 00:18:15.719 } 00:18:15.719 Got JSON-RPC error response 00:18:15.719 response: 00:18:15.719 { 00:18:15.719 "code": -17, 00:18:15.719 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:15.719 } 00:18:15.719 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:15.719 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:15.719 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:15.719 00:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:15.719 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:18:15.719 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.978 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:18:15.978 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:18:15.978 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:16.238 [2024-07-25 00:02:11.964535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:16.238 [2024-07-25 00:02:11.964933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.238 [2024-07-25 00:02:11.964978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:18:16.238 [2024-07-25 00:02:11.964995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.238 [2024-07-25 00:02:11.967730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.238 [2024-07-25 00:02:11.967777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:16.238 [2024-07-25 00:02:11.967952] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:16.238 [2024-07-25 00:02:11.968029] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt1 is claimed 00:18:16.238 pt1 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.238 00:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.497 00:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.497 "name": "raid_bdev1", 00:18:16.497 "uuid": "275951df-a3b0-4c3c-99b0-962413ccedbe", 00:18:16.497 "strip_size_kb": 64, 00:18:16.497 "state": "configuring", 00:18:16.497 "raid_level": "concat", 00:18:16.497 "superblock": true, 00:18:16.497 "num_base_bdevs": 3, 00:18:16.497 "num_base_bdevs_discovered": 1, 00:18:16.497 "num_base_bdevs_operational": 3, 00:18:16.497 "base_bdevs_list": [ 00:18:16.497 { 00:18:16.497 "name": "pt1", 00:18:16.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:16.497 "is_configured": true, 00:18:16.497 "data_offset": 2048, 00:18:16.497 "data_size": 63488 00:18:16.497 }, 00:18:16.497 { 00:18:16.497 "name": null, 00:18:16.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.497 "is_configured": false, 00:18:16.497 "data_offset": 2048, 00:18:16.497 "data_size": 63488 00:18:16.497 }, 00:18:16.497 { 00:18:16.497 "name": null, 00:18:16.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:16.497 "is_configured": false, 00:18:16.497 "data_offset": 2048, 00:18:16.497 "data_size": 63488 00:18:16.497 } 00:18:16.497 ] 00:18:16.497 }' 00:18:16.497 00:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.497 00:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.757 00:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:18:16.757 00:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:17.016 [2024-07-25 00:02:12.880856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:17.016 [2024-07-25 00:02:12.881014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.016 [2024-07-25 00:02:12.881065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 
00:18:17.016 [2024-07-25 00:02:12.881079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.016 [2024-07-25 00:02:12.881693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.016 [2024-07-25 00:02:12.881727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:17.016 [2024-07-25 00:02:12.881844] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:17.016 [2024-07-25 00:02:12.881884] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:17.274 pt2 00:18:17.274 00:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:17.274 [2024-07-25 00:02:13.120999] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.533 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.791 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.791 "name": "raid_bdev1", 00:18:17.791 "uuid": "275951df-a3b0-4c3c-99b0-962413ccedbe", 00:18:17.791 "strip_size_kb": 64, 00:18:17.791 "state": "configuring", 00:18:17.791 "raid_level": "concat", 00:18:17.791 "superblock": true, 00:18:17.791 "num_base_bdevs": 3, 00:18:17.791 "num_base_bdevs_discovered": 1, 00:18:17.791 "num_base_bdevs_operational": 3, 00:18:17.791 "base_bdevs_list": [ 00:18:17.791 { 00:18:17.791 "name": "pt1", 00:18:17.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:17.791 "is_configured": true, 00:18:17.791 "data_offset": 2048, 00:18:17.791 "data_size": 63488 00:18:17.791 }, 00:18:17.791 { 00:18:17.791 "name": null, 00:18:17.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.791 "is_configured": false, 00:18:17.791 "data_offset": 2048, 00:18:17.791 "data_size": 63488 00:18:17.791 }, 00:18:17.791 { 00:18:17.791 "name": null, 00:18:17.791 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:17.791 "is_configured": false, 00:18:17.791 "data_offset": 2048, 00:18:17.791 "data_size": 63488 00:18:17.791 } 00:18:17.791 ] 00:18:17.791 }' 00:18:17.791 00:02:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.791 00:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.049 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:18:18.049 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:18.049 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:18.308 [2024-07-25 00:02:13.953228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:18.308 [2024-07-25 00:02:13.953348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.308 [2024-07-25 00:02:13.953376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:18:18.308 [2024-07-25 00:02:13.953393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.308 [2024-07-25 00:02:13.953923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.308 [2024-07-25 00:02:13.953956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:18.308 [2024-07-25 00:02:13.954061] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:18.308 [2024-07-25 00:02:13.954096] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:18.308 pt2 00:18:18.308 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:18:18.308 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:18.308 00:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:18.568 [2024-07-25 00:02:14.245329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:18.568 [2024-07-25 00:02:14.245452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.568 [2024-07-25 00:02:14.245479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:18:18.568 [2024-07-25 00:02:14.245495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.568 [2024-07-25 00:02:14.246125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.568 [2024-07-25 00:02:14.246157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:18.568 [2024-07-25 00:02:14.246288] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:18.568 [2024-07-25 00:02:14.246342] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:18.568 [2024-07-25 00:02:14.246497] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80 00:18:18.568 [2024-07-25 00:02:14.246527] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:18.568 [2024-07-25 00:02:14.246647] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:18:18.568 [2024-07-25 00:02:14.247030] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80 00:18:18.568 [2024-07-25 00:02:14.247062] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80 00:18:18.568 [2024-07-25 00:02:14.247268] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.568 pt3 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.568 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.848 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.848 "name": "raid_bdev1", 00:18:18.848 "uuid": "275951df-a3b0-4c3c-99b0-962413ccedbe", 00:18:18.848 "strip_size_kb": 64, 00:18:18.848 "state": "online", 00:18:18.848 "raid_level": "concat", 00:18:18.848 "superblock": true, 00:18:18.848 "num_base_bdevs": 3, 00:18:18.848 "num_base_bdevs_discovered": 3, 00:18:18.848 "num_base_bdevs_operational": 3, 00:18:18.848 "base_bdevs_list": [ 00:18:18.848 { 00:18:18.848 "name": "pt1", 00:18:18.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.848 "is_configured": true, 00:18:18.848 "data_offset": 2048, 00:18:18.848 "data_size": 63488 00:18:18.848 }, 00:18:18.848 { 00:18:18.848 "name": "pt2", 00:18:18.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.848 "is_configured": true, 00:18:18.848 "data_offset": 2048, 00:18:18.848 "data_size": 63488 00:18:18.848 }, 00:18:18.848 { 00:18:18.848 "name": "pt3", 00:18:18.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:18.848 "is_configured": true, 00:18:18.848 "data_offset": 2048, 00:18:18.848 "data_size": 63488 00:18:18.848 } 00:18:18.848 ] 00:18:18.848 }' 00:18:18.848 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.848 00:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.117 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:18:19.117 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:19.117 00:02:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:19.117 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:19.117 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:19.117 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:19.117 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:19.117 00:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:19.376 [2024-07-25 00:02:15.234164] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.635 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:19.635 "name": "raid_bdev1", 00:18:19.635 "aliases": [ 00:18:19.635 "275951df-a3b0-4c3c-99b0-962413ccedbe" 00:18:19.635 ], 00:18:19.635 "product_name": "Raid Volume", 00:18:19.635 "block_size": 512, 00:18:19.635 "num_blocks": 190464, 00:18:19.635 "uuid": "275951df-a3b0-4c3c-99b0-962413ccedbe", 00:18:19.635 "assigned_rate_limits": { 00:18:19.635 "rw_ios_per_sec": 0, 00:18:19.635 "rw_mbytes_per_sec": 0, 00:18:19.635 "r_mbytes_per_sec": 0, 00:18:19.635 "w_mbytes_per_sec": 0 00:18:19.635 }, 00:18:19.635 "claimed": false, 00:18:19.635 "zoned": false, 00:18:19.635 "supported_io_types": { 00:18:19.635 "read": true, 00:18:19.635 "write": true, 00:18:19.635 "unmap": true, 00:18:19.635 "flush": true, 00:18:19.635 "reset": true, 00:18:19.635 "nvme_admin": false, 00:18:19.635 "nvme_io": false, 00:18:19.635 "nvme_io_md": false, 00:18:19.635 "write_zeroes": true, 00:18:19.635 "zcopy": false, 00:18:19.635 "get_zone_info": false, 00:18:19.635 "zone_management": false, 00:18:19.635 "zone_append": false, 00:18:19.635 "compare": false, 00:18:19.635 "compare_and_write": false, 00:18:19.635 "abort": false, 00:18:19.635 "seek_hole": false, 00:18:19.635 "seek_data": false, 00:18:19.635 "copy": false, 00:18:19.635 "nvme_iov_md": false 00:18:19.635 }, 00:18:19.635 "memory_domains": [ 00:18:19.635 { 00:18:19.635 "dma_device_id": "system", 00:18:19.635 "dma_device_type": 1 00:18:19.635 }, 00:18:19.635 { 00:18:19.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.635 "dma_device_type": 2 00:18:19.635 }, 00:18:19.635 { 00:18:19.635 "dma_device_id": "system", 00:18:19.635 "dma_device_type": 1 00:18:19.635 }, 00:18:19.635 { 00:18:19.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.635 "dma_device_type": 2 00:18:19.635 }, 00:18:19.635 { 00:18:19.635 "dma_device_id": "system", 00:18:19.635 "dma_device_type": 1 00:18:19.635 }, 00:18:19.635 { 00:18:19.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.635 "dma_device_type": 2 00:18:19.635 } 00:18:19.635 ], 00:18:19.635 "driver_specific": { 00:18:19.635 "raid": { 00:18:19.635 "uuid": "275951df-a3b0-4c3c-99b0-962413ccedbe", 00:18:19.635 "strip_size_kb": 64, 00:18:19.635 "state": "online", 00:18:19.635 "raid_level": "concat", 00:18:19.635 "superblock": true, 00:18:19.636 "num_base_bdevs": 3, 00:18:19.636 "num_base_bdevs_discovered": 3, 00:18:19.636 "num_base_bdevs_operational": 3, 00:18:19.636 "base_bdevs_list": [ 00:18:19.636 { 00:18:19.636 "name": "pt1", 00:18:19.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.636 "is_configured": true, 00:18:19.636 "data_offset": 2048, 00:18:19.636 "data_size": 63488 00:18:19.636 }, 00:18:19.636 { 00:18:19.636 "name": "pt2", 00:18:19.636 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:19.636 "is_configured": true, 00:18:19.636 "data_offset": 2048, 00:18:19.636 "data_size": 63488 00:18:19.636 }, 00:18:19.636 { 00:18:19.636 "name": "pt3", 00:18:19.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:19.636 "is_configured": true, 00:18:19.636 "data_offset": 2048, 00:18:19.636 "data_size": 63488 00:18:19.636 } 00:18:19.636 ] 00:18:19.636 } 00:18:19.636 } 00:18:19.636 }' 00:18:19.636 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.636 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:19.636 pt2 00:18:19.636 pt3' 00:18:19.636 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:19.636 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:19.636 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.895 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.895 "name": "pt1", 00:18:19.895 "aliases": [ 00:18:19.895 "00000000-0000-0000-0000-000000000001" 00:18:19.895 ], 00:18:19.895 "product_name": "passthru", 00:18:19.895 "block_size": 512, 00:18:19.895 "num_blocks": 65536, 00:18:19.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.895 "assigned_rate_limits": { 00:18:19.895 "rw_ios_per_sec": 0, 00:18:19.895 "rw_mbytes_per_sec": 0, 00:18:19.895 "r_mbytes_per_sec": 0, 00:18:19.895 "w_mbytes_per_sec": 0 00:18:19.895 }, 00:18:19.895 "claimed": true, 00:18:19.895 "claim_type": "exclusive_write", 00:18:19.895 "zoned": false, 00:18:19.895 "supported_io_types": { 00:18:19.895 "read": true, 00:18:19.895 "write": true, 00:18:19.895 "unmap": true, 00:18:19.895 "flush": true, 00:18:19.895 "reset": true, 00:18:19.895 "nvme_admin": false, 00:18:19.895 "nvme_io": false, 00:18:19.895 "nvme_io_md": false, 00:18:19.895 "write_zeroes": true, 00:18:19.895 "zcopy": true, 00:18:19.895 "get_zone_info": false, 00:18:19.895 "zone_management": false, 00:18:19.895 "zone_append": false, 00:18:19.895 "compare": false, 00:18:19.895 "compare_and_write": false, 00:18:19.895 "abort": true, 00:18:19.895 "seek_hole": false, 00:18:19.895 "seek_data": false, 00:18:19.895 "copy": true, 00:18:19.895 "nvme_iov_md": false 00:18:19.895 }, 00:18:19.895 "memory_domains": [ 00:18:19.895 { 00:18:19.895 "dma_device_id": "system", 00:18:19.895 "dma_device_type": 1 00:18:19.895 }, 00:18:19.895 { 00:18:19.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.895 "dma_device_type": 2 00:18:19.895 } 00:18:19.895 ], 00:18:19.895 "driver_specific": { 00:18:19.895 "passthru": { 00:18:19.895 "name": "pt1", 00:18:19.895 "base_bdev_name": "malloc1" 00:18:19.895 } 00:18:19.895 } 00:18:19.895 }' 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:19.896 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:20.155 "name": "pt2", 00:18:20.155 "aliases": [ 00:18:20.155 "00000000-0000-0000-0000-000000000002" 00:18:20.155 ], 00:18:20.155 "product_name": "passthru", 00:18:20.155 "block_size": 512, 00:18:20.155 "num_blocks": 65536, 00:18:20.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.155 "assigned_rate_limits": { 00:18:20.155 "rw_ios_per_sec": 0, 00:18:20.155 "rw_mbytes_per_sec": 0, 00:18:20.155 "r_mbytes_per_sec": 0, 00:18:20.155 "w_mbytes_per_sec": 0 00:18:20.155 }, 00:18:20.155 "claimed": true, 00:18:20.155 "claim_type": "exclusive_write", 00:18:20.155 "zoned": false, 00:18:20.155 "supported_io_types": { 00:18:20.155 "read": true, 00:18:20.155 "write": true, 00:18:20.155 "unmap": true, 00:18:20.155 "flush": true, 00:18:20.155 "reset": true, 00:18:20.155 "nvme_admin": false, 00:18:20.155 "nvme_io": false, 00:18:20.155 "nvme_io_md": false, 00:18:20.155 "write_zeroes": true, 00:18:20.155 "zcopy": true, 00:18:20.155 "get_zone_info": false, 00:18:20.155 "zone_management": false, 00:18:20.155 "zone_append": false, 00:18:20.155 "compare": false, 00:18:20.155 "compare_and_write": false, 00:18:20.155 "abort": true, 00:18:20.155 "seek_hole": false, 00:18:20.155 "seek_data": false, 00:18:20.155 "copy": true, 00:18:20.155 "nvme_iov_md": false 00:18:20.155 }, 00:18:20.155 "memory_domains": [ 00:18:20.155 { 00:18:20.155 "dma_device_id": "system", 00:18:20.155 "dma_device_type": 1 00:18:20.155 }, 00:18:20.155 { 00:18:20.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.155 "dma_device_type": 2 00:18:20.155 } 00:18:20.155 ], 00:18:20.155 "driver_specific": { 00:18:20.155 "passthru": { 00:18:20.155 "name": "pt2", 00:18:20.155 "base_bdev_name": "malloc2" 00:18:20.155 } 00:18:20.155 } 00:18:20.155 }' 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:20.155 00:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:20.415 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:20.415 "name": "pt3", 00:18:20.415 "aliases": [ 00:18:20.415 "00000000-0000-0000-0000-000000000003" 00:18:20.415 ], 00:18:20.415 "product_name": "passthru", 00:18:20.415 "block_size": 512, 00:18:20.415 "num_blocks": 65536, 00:18:20.415 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:20.415 "assigned_rate_limits": { 00:18:20.415 "rw_ios_per_sec": 0, 00:18:20.415 "rw_mbytes_per_sec": 0, 00:18:20.415 "r_mbytes_per_sec": 0, 00:18:20.415 "w_mbytes_per_sec": 0 00:18:20.415 }, 00:18:20.415 "claimed": true, 00:18:20.415 "claim_type": "exclusive_write", 00:18:20.415 "zoned": false, 00:18:20.415 "supported_io_types": { 00:18:20.415 "read": true, 00:18:20.415 "write": true, 00:18:20.415 "unmap": true, 00:18:20.415 "flush": true, 00:18:20.415 "reset": true, 00:18:20.415 "nvme_admin": false, 00:18:20.415 "nvme_io": false, 00:18:20.415 "nvme_io_md": false, 00:18:20.415 "write_zeroes": true, 00:18:20.415 "zcopy": true, 00:18:20.415 "get_zone_info": false, 00:18:20.415 "zone_management": false, 00:18:20.415 "zone_append": false, 00:18:20.415 "compare": false, 00:18:20.415 "compare_and_write": false, 00:18:20.415 "abort": true, 00:18:20.415 "seek_hole": false, 00:18:20.415 "seek_data": false, 00:18:20.415 "copy": true, 00:18:20.415 "nvme_iov_md": false 00:18:20.415 }, 00:18:20.415 "memory_domains": [ 00:18:20.415 { 00:18:20.415 "dma_device_id": "system", 00:18:20.415 "dma_device_type": 1 00:18:20.415 }, 00:18:20.415 { 00:18:20.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.415 "dma_device_type": 2 00:18:20.415 } 00:18:20.415 ], 00:18:20.415 "driver_specific": { 00:18:20.415 "passthru": { 00:18:20.415 "name": "pt3", 00:18:20.415 "base_bdev_name": "malloc3" 00:18:20.415 } 00:18:20.415 } 00:18:20.415 }' 00:18:20.415 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.415 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.415 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:20.415 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.674 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.674 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:20.674 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.674 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.674 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:18:20.674 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.674 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.674 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.674 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:20.674 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:18:20.933 [2024-07-25 00:02:16.650631] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 275951df-a3b0-4c3c-99b0-962413ccedbe '!=' 275951df-a3b0-4c3c-99b0-962413ccedbe ']' 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 84375 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84375 ']' 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84375 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84375 00:18:20.933 killing process with pid 84375 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84375' 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 84375 00:18:20.933 [2024-07-25 00:02:16.710375] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.933 00:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 84375 00:18:20.934 [2024-07-25 00:02:16.710474] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.934 [2024-07-25 00:02:16.710542] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.934 [2024-07-25 00:02:16.710560] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline 00:18:21.193 [2024-07-25 00:02:16.970815] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.571 ************************************ 00:18:22.571 END TEST raid_superblock_test 00:18:22.571 ************************************ 00:18:22.571 00:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:18:22.571 00:18:22.571 real 0m13.590s 00:18:22.571 user 0m22.887s 00:18:22.571 sys 0m2.137s 00:18:22.571 00:02:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:22.571 00:02:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.571 00:02:18 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:18:22.571 00:02:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:22.571 00:02:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:22.571 00:02:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.571 ************************************ 00:18:22.571 START TEST raid_read_error_test 00:18:22.571 ************************************ 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.ACQdTXwsE8 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=84826 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # waitforlisten 84826 /var/tmp/spdk-raid.sock 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 84826 ']' 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.571 00:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.571 [2024-07-25 00:02:18.283394] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:18:22.571 [2024-07-25 00:02:18.283584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84826 ] 00:18:22.831 [2024-07-25 00:02:18.460082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.831 [2024-07-25 00:02:18.667568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.090 [2024-07-25 00:02:18.863761] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.657 00:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.657 00:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:23.657 00:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:23.657 00:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:23.657 BaseBdev1_malloc 00:18:23.916 00:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:23.916 true 00:18:23.916 00:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:24.175 [2024-07-25 00:02:20.039870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:24.175 [2024-07-25 00:02:20.040038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.175 [2024-07-25 00:02:20.040076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:18:24.175 [2024-07-25 00:02:20.040096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.175 [2024-07-25 00:02:20.042877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.175 [2024-07-25 00:02:20.042930] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:24.434 BaseBdev1 00:18:24.434 00:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:24.434 00:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:24.693 BaseBdev2_malloc 00:18:24.693 00:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:24.952 true 00:18:24.952 00:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:25.211 [2024-07-25 00:02:20.848242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:25.211 [2024-07-25 00:02:20.848397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.211 [2024-07-25 00:02:20.848433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:18:25.211 [2024-07-25 00:02:20.848453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.211 [2024-07-25 00:02:20.851564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.211 [2024-07-25 00:02:20.851821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:25.211 BaseBdev2 00:18:25.211 00:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:25.211 00:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:25.470 BaseBdev3_malloc 00:18:25.470 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:25.729 true 00:18:25.729 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:25.988 [2024-07-25 00:02:21.654993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:25.988 [2024-07-25 00:02:21.655110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.988 [2024-07-25 00:02:21.655145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:18:25.988 [2024-07-25 00:02:21.655164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.988 [2024-07-25 00:02:21.658088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.988 [2024-07-25 00:02:21.658175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:25.988 BaseBdev3 00:18:25.988 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:26.247 [2024-07-25 00:02:21.907291] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.247 [2024-07-25 00:02:21.910011] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.247 [2024-07-25 00:02:21.910295] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:26.247 [2024-07-25 00:02:21.910775] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:18:26.247 [2024-07-25 00:02:21.910942] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:26.247 [2024-07-25 00:02:21.911122] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:18:26.247 [2024-07-25 00:02:21.911579] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:18:26.247 [2024-07-25 00:02:21.911621] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:18:26.247 [2024-07-25 00:02:21.912027] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.247 00:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.507 00:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:26.507 "name": "raid_bdev1", 00:18:26.507 "uuid": "6555ad42-28c1-4555-9105-eb6dc018daa8", 00:18:26.507 "strip_size_kb": 64, 00:18:26.507 "state": "online", 00:18:26.507 "raid_level": "concat", 00:18:26.507 "superblock": true, 00:18:26.507 "num_base_bdevs": 3, 00:18:26.507 "num_base_bdevs_discovered": 3, 00:18:26.507 "num_base_bdevs_operational": 3, 00:18:26.507 "base_bdevs_list": [ 00:18:26.507 { 00:18:26.507 "name": "BaseBdev1", 00:18:26.507 "uuid": "eafd1f90-4c69-5504-ba88-feab4fc9ed42", 00:18:26.507 "is_configured": true, 00:18:26.507 "data_offset": 2048, 00:18:26.507 "data_size": 63488 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "name": "BaseBdev2", 00:18:26.507 "uuid": "6663fafe-7fa7-5c9c-b273-1817e6a21b99", 00:18:26.507 "is_configured": true, 00:18:26.507 "data_offset": 2048, 00:18:26.507 "data_size": 63488 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "name": "BaseBdev3", 00:18:26.507 "uuid": "82904cee-b930-56ad-883a-aec852398028", 00:18:26.507 "is_configured": true, 00:18:26.507 "data_offset": 2048, 00:18:26.507 "data_size": 
63488 00:18:26.507 } 00:18:26.507 ] 00:18:26.507 }' 00:18:26.507 00:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:26.507 00:02:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.766 00:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:26.766 00:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:27.025 [2024-07-25 00:02:22.661443] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:27.962 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.220 00:02:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.478 00:02:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.478 "name": "raid_bdev1", 00:18:28.478 "uuid": "6555ad42-28c1-4555-9105-eb6dc018daa8", 00:18:28.478 "strip_size_kb": 64, 00:18:28.478 "state": "online", 00:18:28.478 "raid_level": "concat", 00:18:28.478 "superblock": true, 00:18:28.478 "num_base_bdevs": 3, 00:18:28.478 "num_base_bdevs_discovered": 3, 00:18:28.478 "num_base_bdevs_operational": 3, 00:18:28.478 "base_bdevs_list": [ 00:18:28.478 { 00:18:28.478 "name": "BaseBdev1", 00:18:28.478 "uuid": "eafd1f90-4c69-5504-ba88-feab4fc9ed42", 00:18:28.478 "is_configured": true, 00:18:28.478 "data_offset": 2048, 00:18:28.478 "data_size": 63488 00:18:28.478 }, 00:18:28.478 { 00:18:28.478 "name": "BaseBdev2", 00:18:28.478 "uuid": "6663fafe-7fa7-5c9c-b273-1817e6a21b99", 00:18:28.478 "is_configured": true, 00:18:28.478 "data_offset": 2048, 00:18:28.478 "data_size": 63488 00:18:28.478 }, 
00:18:28.478 { 00:18:28.478 "name": "BaseBdev3", 00:18:28.478 "uuid": "82904cee-b930-56ad-883a-aec852398028", 00:18:28.478 "is_configured": true, 00:18:28.478 "data_offset": 2048, 00:18:28.478 "data_size": 63488 00:18:28.478 } 00:18:28.478 ] 00:18:28.478 }' 00:18:28.478 00:02:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.478 00:02:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.737 00:02:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:28.996 [2024-07-25 00:02:24.728371] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.996 [2024-07-25 00:02:24.728449] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.996 [2024-07-25 00:02:24.732292] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.996 0 00:18:28.996 [2024-07-25 00:02:24.732644] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.996 [2024-07-25 00:02:24.732731] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.996 [2024-07-25 00:02:24.732752] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:18:28.996 00:02:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 84826 00:18:28.996 00:02:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 84826 ']' 00:18:28.996 00:02:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 84826 00:18:28.996 00:02:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:18:28.996 00:02:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.996 00:02:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84826 00:18:28.996 00:02:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:28.996 00:02:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:28.996 00:02:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84826' 00:18:28.996 killing process with pid 84826 00:18:28.996 00:02:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 84826 00:18:28.996 [2024-07-25 00:02:24.787911] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.996 00:02:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 84826 00:18:29.255 [2024-07-25 00:02:24.993647] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:30.635 00:02:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:18:30.635 00:02:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.ACQdTXwsE8 00:18:30.635 00:02:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:18:30.635 00:02:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:18:30.635 00:02:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:18:30.635 00:02:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:30.635 00:02:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:30.635 00:02:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:18:30.635 00:18:30.635 real 0m8.104s 00:18:30.635 user 0m12.066s 00:18:30.635 sys 0m0.994s 00:18:30.635 00:02:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:30.635 ************************************ 00:18:30.635 END TEST raid_read_error_test 00:18:30.635 ************************************ 00:18:30.635 00:02:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.635 00:02:26 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:18:30.635 00:02:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:30.635 00:02:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:30.635 00:02:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.635 ************************************ 00:18:30.635 START TEST raid_write_error_test 00:18:30.635 ************************************ 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' 
concat '!=' raid1 ']' 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:18:30.635 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:18:30.636 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:18:30.636 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.2Igf3HzSVE 00:18:30.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:30.636 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=85013 00:18:30.636 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 85013 /var/tmp/spdk-raid.sock 00:18:30.636 00:02:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85013 ']' 00:18:30.636 00:02:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:30.636 00:02:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.636 00:02:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:30.636 00:02:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.636 00:02:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.636 00:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:30.636 [2024-07-25 00:02:26.445878] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
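As in the read-error variant above, each leg of the array under test is a malloc bdev wrapped first in an error bdev and then in a passthru bdev, so failures can be injected underneath the raid without touching the raid code itself. Reduced to the RPC calls as they appear in this trace (a sketch, not the full script: one leg shown, the other two differ only in names; rpc/sock shorthand as in the earlier sketch):

    # One leg: malloc -> error bdev (EE_<base>, per the trace) -> passthru.
    $rpc -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc -s "$sock" bdev_error_create BaseBdev1_malloc
    $rpc -s "$sock" bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # Concat array with an on-disk superblock (-s) over the three legs.
    $rpc -s "$sock" bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # Once bdevperf is running, make writes on the first leg fail.
    $rpc -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc write failure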
00:18:30.636 [2024-07-25 00:02:26.446366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85013 ] 00:18:30.895 [2024-07-25 00:02:26.615528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.154 [2024-07-25 00:02:26.821850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.154 [2024-07-25 00:02:27.014078] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.721 00:02:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.721 00:02:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:31.721 00:02:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:31.721 00:02:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:31.980 BaseBdev1_malloc 00:18:32.238 00:02:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:32.238 true 00:18:32.498 00:02:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:32.756 [2024-07-25 00:02:28.386255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:32.756 [2024-07-25 00:02:28.386354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.756 [2024-07-25 00:02:28.386401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:18:32.756 [2024-07-25 00:02:28.386421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.756 [2024-07-25 00:02:28.389156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.756 [2024-07-25 00:02:28.389210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:32.756 BaseBdev1 00:18:32.756 00:02:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:32.756 00:02:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:33.015 BaseBdev2_malloc 00:18:33.015 00:02:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:33.273 true 00:18:33.273 00:02:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:33.532 [2024-07-25 00:02:29.154289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:33.532 [2024-07-25 00:02:29.154603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.532 [2024-07-25 00:02:29.154757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:18:33.532 [2024-07-25 
00:02:29.154919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.532 [2024-07-25 00:02:29.157643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.532 [2024-07-25 00:02:29.157848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:33.532 BaseBdev2 00:18:33.532 00:02:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:33.532 00:02:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:33.790 BaseBdev3_malloc 00:18:33.790 00:02:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:34.048 true 00:18:34.048 00:02:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:34.307 [2024-07-25 00:02:29.926682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:34.307 [2024-07-25 00:02:29.926781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.307 [2024-07-25 00:02:29.926828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:18:34.307 [2024-07-25 00:02:29.926865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.307 [2024-07-25 00:02:29.929388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.307 [2024-07-25 00:02:29.929452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:34.307 BaseBdev3 00:18:34.307 00:02:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:34.307 [2024-07-25 00:02:30.150814] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.307 [2024-07-25 00:02:30.153184] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.307 [2024-07-25 00:02:30.153445] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:34.307 [2024-07-25 00:02:30.153775] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:18:34.307 [2024-07-25 00:02:30.153948] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:34.307 [2024-07-25 00:02:30.154129] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:18:34.307 [2024-07-25 00:02:30.154571] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:18:34.307 [2024-07-25 00:02:30.154747] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:18:34.307 [2024-07-25 00:02:30.155220] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.307 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:34.307 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:34.307 
00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:34.307 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:34.307 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:34.307 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:34.307 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.307 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.307 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.307 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.564 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.564 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.564 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.564 "name": "raid_bdev1", 00:18:34.564 "uuid": "afe78c34-685c-4afa-9c52-dfb5e695caae", 00:18:34.564 "strip_size_kb": 64, 00:18:34.564 "state": "online", 00:18:34.564 "raid_level": "concat", 00:18:34.564 "superblock": true, 00:18:34.564 "num_base_bdevs": 3, 00:18:34.564 "num_base_bdevs_discovered": 3, 00:18:34.564 "num_base_bdevs_operational": 3, 00:18:34.564 "base_bdevs_list": [ 00:18:34.564 { 00:18:34.564 "name": "BaseBdev1", 00:18:34.564 "uuid": "1907c3b3-e3d3-560a-abbe-428f3252f90b", 00:18:34.564 "is_configured": true, 00:18:34.564 "data_offset": 2048, 00:18:34.564 "data_size": 63488 00:18:34.564 }, 00:18:34.564 { 00:18:34.564 "name": "BaseBdev2", 00:18:34.564 "uuid": "bd31ff0d-1d40-5531-a8b9-8f95f155a713", 00:18:34.564 "is_configured": true, 00:18:34.564 "data_offset": 2048, 00:18:34.564 "data_size": 63488 00:18:34.564 }, 00:18:34.564 { 00:18:34.564 "name": "BaseBdev3", 00:18:34.564 "uuid": "5e3957f0-2975-5bbf-a288-e44b1577174d", 00:18:34.564 "is_configured": true, 00:18:34.564 "data_offset": 2048, 00:18:34.564 "data_size": 63488 00:18:34.564 } 00:18:34.564 ] 00:18:34.564 }' 00:18:34.564 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.564 00:02:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.130 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:35.130 00:02:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:35.130 [2024-07-25 00:02:30.860571] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:18:36.064 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:18:36.322 00:02:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.322 00:02:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.580 00:02:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:36.580 "name": "raid_bdev1", 00:18:36.580 "uuid": "afe78c34-685c-4afa-9c52-dfb5e695caae", 00:18:36.580 "strip_size_kb": 64, 00:18:36.580 "state": "online", 00:18:36.580 "raid_level": "concat", 00:18:36.580 "superblock": true, 00:18:36.580 "num_base_bdevs": 3, 00:18:36.580 "num_base_bdevs_discovered": 3, 00:18:36.580 "num_base_bdevs_operational": 3, 00:18:36.580 "base_bdevs_list": [ 00:18:36.580 { 00:18:36.580 "name": "BaseBdev1", 00:18:36.580 "uuid": "1907c3b3-e3d3-560a-abbe-428f3252f90b", 00:18:36.581 "is_configured": true, 00:18:36.581 "data_offset": 2048, 00:18:36.581 "data_size": 63488 00:18:36.581 }, 00:18:36.581 { 00:18:36.581 "name": "BaseBdev2", 00:18:36.581 "uuid": "bd31ff0d-1d40-5531-a8b9-8f95f155a713", 00:18:36.581 "is_configured": true, 00:18:36.581 "data_offset": 2048, 00:18:36.581 "data_size": 63488 00:18:36.581 }, 00:18:36.581 { 00:18:36.581 "name": "BaseBdev3", 00:18:36.581 "uuid": "5e3957f0-2975-5bbf-a288-e44b1577174d", 00:18:36.581 "is_configured": true, 00:18:36.581 "data_offset": 2048, 00:18:36.581 "data_size": 63488 00:18:36.581 } 00:18:36.581 ] 00:18:36.581 }' 00:18:36.581 00:02:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:36.581 00:02:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.839 00:02:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:37.098 [2024-07-25 00:02:32.774109] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.098 [2024-07-25 00:02:32.774352] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.098 [2024-07-25 00:02:32.777704] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.098 [2024-07-25 00:02:32.777920] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.098 [2024-07-25 00:02:32.778021] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.098 [2024-07-25 00:02:32.778273] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:18:37.098 0 00:18:37.098 00:02:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 85013 00:18:37.098 00:02:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85013 ']' 00:18:37.098 00:02:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85013 00:18:37.098 00:02:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:18:37.098 00:02:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.098 00:02:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85013 00:18:37.098 killing process with pid 85013 00:18:37.098 00:02:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.098 00:02:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.098 00:02:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85013' 00:18:37.098 00:02:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85013 00:18:37.098 [2024-07-25 00:02:32.826157] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.098 00:02:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85013 00:18:37.357 [2024-07-25 00:02:33.000236] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.290 00:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.2Igf3HzSVE 00:18:38.290 00:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:18:38.290 00:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:18:38.290 00:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.52 00:18:38.290 00:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:18:38.290 00:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:38.290 00:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:38.290 00:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.52 != \0\.\0\0 ]] 00:18:38.290 00:18:38.290 real 0m7.740s 00:18:38.290 user 0m11.627s 00:18:38.290 sys 0m0.931s 00:18:38.290 00:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:38.290 00:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.290 ************************************ 00:18:38.290 END TEST raid_write_error_test 00:18:38.290 ************************************ 00:18:38.290 00:02:34 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:18:38.290 00:02:34 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:18:38.290 00:02:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:38.290 00:02:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:38.290 00:02:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.548 ************************************ 00:18:38.548 START TEST raid_state_function_test 
00:18:38.548 ************************************ 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:38.548 Process raid pid: 85196 00:18:38.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:38.548 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=85196 00:18:38.549 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 85196' 00:18:38.549 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:38.549 00:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 85196 /var/tmp/spdk-raid.sock 00:18:38.549 00:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 85196 ']' 00:18:38.549 00:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:38.549 00:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:38.549 00:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:38.549 00:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:38.549 00:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.549 [2024-07-25 00:02:34.227132] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:18:38.549 [2024-07-25 00:02:34.227280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.549 [2024-07-25 00:02:34.390235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.808 [2024-07-25 00:02:34.570664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.066 [2024-07-25 00:02:34.736225] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.326 00:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.326 00:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:18:39.326 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:39.589 [2024-07-25 00:02:35.392473] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.589 [2024-07-25 00:02:35.392567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.589 [2024-07-25 00:02:35.392584] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.589 [2024-07-25 00:02:35.392617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.589 [2024-07-25 00:02:35.392628] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:39.589 [2024-07-25 00:02:35.392642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.589 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.848 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.848 "name": "Existed_Raid", 00:18:39.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.848 
"strip_size_kb": 0, 00:18:39.848 "state": "configuring", 00:18:39.848 "raid_level": "raid1", 00:18:39.848 "superblock": false, 00:18:39.848 "num_base_bdevs": 3, 00:18:39.848 "num_base_bdevs_discovered": 0, 00:18:39.848 "num_base_bdevs_operational": 3, 00:18:39.848 "base_bdevs_list": [ 00:18:39.848 { 00:18:39.848 "name": "BaseBdev1", 00:18:39.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.848 "is_configured": false, 00:18:39.848 "data_offset": 0, 00:18:39.848 "data_size": 0 00:18:39.848 }, 00:18:39.848 { 00:18:39.848 "name": "BaseBdev2", 00:18:39.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.848 "is_configured": false, 00:18:39.848 "data_offset": 0, 00:18:39.848 "data_size": 0 00:18:39.848 }, 00:18:39.848 { 00:18:39.848 "name": "BaseBdev3", 00:18:39.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.848 "is_configured": false, 00:18:39.848 "data_offset": 0, 00:18:39.848 "data_size": 0 00:18:39.848 } 00:18:39.848 ] 00:18:39.848 }' 00:18:39.848 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.848 00:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.107 00:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:40.367 [2024-07-25 00:02:36.116543] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:40.367 [2024-07-25 00:02:36.116611] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:18:40.367 00:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:40.627 [2024-07-25 00:02:36.380607] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:40.627 [2024-07-25 00:02:36.380699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:40.627 [2024-07-25 00:02:36.380725] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.627 [2024-07-25 00:02:36.380747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.627 [2024-07-25 00:02:36.380758] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:40.627 [2024-07-25 00:02:36.380773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:40.627 00:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:40.885 [2024-07-25 00:02:36.639817] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.885 BaseBdev1 00:18:40.885 00:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:40.885 00:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:40.885 00:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:40.885 00:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:40.885 00:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- 
# [[ -z '' ]] 00:18:40.885 00:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:40.885 00:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:41.144 00:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:41.403 [ 00:18:41.403 { 00:18:41.403 "name": "BaseBdev1", 00:18:41.403 "aliases": [ 00:18:41.403 "878ae5a2-7aaf-4b39-ac45-f881cd583972" 00:18:41.403 ], 00:18:41.403 "product_name": "Malloc disk", 00:18:41.403 "block_size": 512, 00:18:41.403 "num_blocks": 65536, 00:18:41.403 "uuid": "878ae5a2-7aaf-4b39-ac45-f881cd583972", 00:18:41.403 "assigned_rate_limits": { 00:18:41.403 "rw_ios_per_sec": 0, 00:18:41.403 "rw_mbytes_per_sec": 0, 00:18:41.403 "r_mbytes_per_sec": 0, 00:18:41.403 "w_mbytes_per_sec": 0 00:18:41.403 }, 00:18:41.403 "claimed": true, 00:18:41.403 "claim_type": "exclusive_write", 00:18:41.403 "zoned": false, 00:18:41.403 "supported_io_types": { 00:18:41.403 "read": true, 00:18:41.403 "write": true, 00:18:41.403 "unmap": true, 00:18:41.403 "flush": true, 00:18:41.403 "reset": true, 00:18:41.403 "nvme_admin": false, 00:18:41.403 "nvme_io": false, 00:18:41.403 "nvme_io_md": false, 00:18:41.403 "write_zeroes": true, 00:18:41.403 "zcopy": true, 00:18:41.403 "get_zone_info": false, 00:18:41.403 "zone_management": false, 00:18:41.403 "zone_append": false, 00:18:41.403 "compare": false, 00:18:41.403 "compare_and_write": false, 00:18:41.403 "abort": true, 00:18:41.403 "seek_hole": false, 00:18:41.403 "seek_data": false, 00:18:41.403 "copy": true, 00:18:41.403 "nvme_iov_md": false 00:18:41.403 }, 00:18:41.403 "memory_domains": [ 00:18:41.403 { 00:18:41.403 "dma_device_id": "system", 00:18:41.403 "dma_device_type": 1 00:18:41.403 }, 00:18:41.403 { 00:18:41.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.403 "dma_device_type": 2 00:18:41.403 } 00:18:41.403 ], 00:18:41.403 "driver_specific": {} 00:18:41.403 } 00:18:41.403 ] 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.403 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.662 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.662 "name": "Existed_Raid", 00:18:41.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.662 "strip_size_kb": 0, 00:18:41.662 "state": "configuring", 00:18:41.662 "raid_level": "raid1", 00:18:41.662 "superblock": false, 00:18:41.662 "num_base_bdevs": 3, 00:18:41.662 "num_base_bdevs_discovered": 1, 00:18:41.662 "num_base_bdevs_operational": 3, 00:18:41.662 "base_bdevs_list": [ 00:18:41.662 { 00:18:41.662 "name": "BaseBdev1", 00:18:41.662 "uuid": "878ae5a2-7aaf-4b39-ac45-f881cd583972", 00:18:41.662 "is_configured": true, 00:18:41.662 "data_offset": 0, 00:18:41.662 "data_size": 65536 00:18:41.662 }, 00:18:41.662 { 00:18:41.662 "name": "BaseBdev2", 00:18:41.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.662 "is_configured": false, 00:18:41.662 "data_offset": 0, 00:18:41.662 "data_size": 0 00:18:41.662 }, 00:18:41.662 { 00:18:41.662 "name": "BaseBdev3", 00:18:41.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.662 "is_configured": false, 00:18:41.662 "data_offset": 0, 00:18:41.662 "data_size": 0 00:18:41.662 } 00:18:41.662 ] 00:18:41.662 }' 00:18:41.662 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.662 00:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.920 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:42.179 [2024-07-25 00:02:37.892303] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:42.179 [2024-07-25 00:02:37.892383] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:18:42.179 00:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:42.437 [2024-07-25 00:02:38.116411] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:42.437 [2024-07-25 00:02:38.118729] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:42.437 [2024-07-25 00:02:38.118820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:42.437 [2024-07-25 00:02:38.118839] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:42.437 [2024-07-25 00:02:38.118857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.437 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.696 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.696 "name": "Existed_Raid", 00:18:42.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.696 "strip_size_kb": 0, 00:18:42.696 "state": "configuring", 00:18:42.696 "raid_level": "raid1", 00:18:42.696 "superblock": false, 00:18:42.696 "num_base_bdevs": 3, 00:18:42.696 "num_base_bdevs_discovered": 1, 00:18:42.696 "num_base_bdevs_operational": 3, 00:18:42.696 "base_bdevs_list": [ 00:18:42.696 { 00:18:42.696 "name": "BaseBdev1", 00:18:42.696 "uuid": "878ae5a2-7aaf-4b39-ac45-f881cd583972", 00:18:42.696 "is_configured": true, 00:18:42.696 "data_offset": 0, 00:18:42.696 "data_size": 65536 00:18:42.696 }, 00:18:42.696 { 00:18:42.696 "name": "BaseBdev2", 00:18:42.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.696 "is_configured": false, 00:18:42.696 "data_offset": 0, 00:18:42.696 "data_size": 0 00:18:42.696 }, 00:18:42.696 { 00:18:42.696 "name": "BaseBdev3", 00:18:42.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.696 "is_configured": false, 00:18:42.696 "data_offset": 0, 00:18:42.696 "data_size": 0 00:18:42.696 } 00:18:42.696 ] 00:18:42.696 }' 00:18:42.696 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.696 00:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.954 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:43.213 [2024-07-25 00:02:38.947746] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:43.213 BaseBdev2 00:18:43.213 00:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:43.213 00:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:43.213 00:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:43.213 00:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:43.213 00:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:43.213 00:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:43.213 00:02:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:43.471 00:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:43.730 [ 00:18:43.730 { 00:18:43.730 "name": "BaseBdev2", 00:18:43.730 "aliases": [ 00:18:43.730 "3d5fff54-0e4c-44a1-952d-04c51dffd3b0" 00:18:43.730 ], 00:18:43.730 "product_name": "Malloc disk", 00:18:43.730 "block_size": 512, 00:18:43.730 "num_blocks": 65536, 00:18:43.730 "uuid": "3d5fff54-0e4c-44a1-952d-04c51dffd3b0", 00:18:43.730 "assigned_rate_limits": { 00:18:43.730 "rw_ios_per_sec": 0, 00:18:43.730 "rw_mbytes_per_sec": 0, 00:18:43.730 "r_mbytes_per_sec": 0, 00:18:43.730 "w_mbytes_per_sec": 0 00:18:43.730 }, 00:18:43.730 "claimed": true, 00:18:43.730 "claim_type": "exclusive_write", 00:18:43.730 "zoned": false, 00:18:43.730 "supported_io_types": { 00:18:43.730 "read": true, 00:18:43.730 "write": true, 00:18:43.730 "unmap": true, 00:18:43.730 "flush": true, 00:18:43.730 "reset": true, 00:18:43.730 "nvme_admin": false, 00:18:43.730 "nvme_io": false, 00:18:43.730 "nvme_io_md": false, 00:18:43.730 "write_zeroes": true, 00:18:43.730 "zcopy": true, 00:18:43.730 "get_zone_info": false, 00:18:43.730 "zone_management": false, 00:18:43.730 "zone_append": false, 00:18:43.730 "compare": false, 00:18:43.730 "compare_and_write": false, 00:18:43.730 "abort": true, 00:18:43.730 "seek_hole": false, 00:18:43.730 "seek_data": false, 00:18:43.730 "copy": true, 00:18:43.730 "nvme_iov_md": false 00:18:43.730 }, 00:18:43.730 "memory_domains": [ 00:18:43.730 { 00:18:43.730 "dma_device_id": "system", 00:18:43.730 "dma_device_type": 1 00:18:43.730 }, 00:18:43.730 { 00:18:43.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.730 "dma_device_type": 2 00:18:43.730 } 00:18:43.730 ], 00:18:43.730 "driver_specific": {} 00:18:43.730 } 00:18:43.730 ] 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.730 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.989 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.989 "name": "Existed_Raid", 00:18:43.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.989 "strip_size_kb": 0, 00:18:43.989 "state": "configuring", 00:18:43.989 "raid_level": "raid1", 00:18:43.989 "superblock": false, 00:18:43.989 "num_base_bdevs": 3, 00:18:43.989 "num_base_bdevs_discovered": 2, 00:18:43.989 "num_base_bdevs_operational": 3, 00:18:43.989 "base_bdevs_list": [ 00:18:43.989 { 00:18:43.989 "name": "BaseBdev1", 00:18:43.989 "uuid": "878ae5a2-7aaf-4b39-ac45-f881cd583972", 00:18:43.989 "is_configured": true, 00:18:43.989 "data_offset": 0, 00:18:43.989 "data_size": 65536 00:18:43.989 }, 00:18:43.989 { 00:18:43.989 "name": "BaseBdev2", 00:18:43.989 "uuid": "3d5fff54-0e4c-44a1-952d-04c51dffd3b0", 00:18:43.989 "is_configured": true, 00:18:43.989 "data_offset": 0, 00:18:43.989 "data_size": 65536 00:18:43.989 }, 00:18:43.989 { 00:18:43.989 "name": "BaseBdev3", 00:18:43.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.989 "is_configured": false, 00:18:43.989 "data_offset": 0, 00:18:43.989 "data_size": 0 00:18:43.989 } 00:18:43.989 ] 00:18:43.989 }' 00:18:43.989 00:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.989 00:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.248 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:44.507 [2024-07-25 00:02:40.343552] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:44.507 [2024-07-25 00:02:40.343932] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:18:44.507 [2024-07-25 00:02:40.344079] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:44.507 [2024-07-25 00:02:40.344285] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:18:44.507 [2024-07-25 00:02:40.344831] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:18:44.507 [2024-07-25 00:02:40.345013] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:18:44.507 [2024-07-25 00:02:40.345427] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.507 BaseBdev3 00:18:44.507 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:44.507 00:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:44.507 00:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:44.507 00:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:44.507 00:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:44.507 00:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:44.507 00:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:44.766 00:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:45.025 [ 00:18:45.025 { 00:18:45.025 "name": "BaseBdev3", 00:18:45.025 "aliases": [ 00:18:45.025 "3c8db38f-40ea-445d-b011-4a145a25f7c0" 00:18:45.025 ], 00:18:45.025 "product_name": "Malloc disk", 00:18:45.025 "block_size": 512, 00:18:45.025 "num_blocks": 65536, 00:18:45.025 "uuid": "3c8db38f-40ea-445d-b011-4a145a25f7c0", 00:18:45.025 "assigned_rate_limits": { 00:18:45.025 "rw_ios_per_sec": 0, 00:18:45.025 "rw_mbytes_per_sec": 0, 00:18:45.025 "r_mbytes_per_sec": 0, 00:18:45.025 "w_mbytes_per_sec": 0 00:18:45.025 }, 00:18:45.025 "claimed": true, 00:18:45.025 "claim_type": "exclusive_write", 00:18:45.026 "zoned": false, 00:18:45.026 "supported_io_types": { 00:18:45.026 "read": true, 00:18:45.026 "write": true, 00:18:45.026 "unmap": true, 00:18:45.026 "flush": true, 00:18:45.026 "reset": true, 00:18:45.026 "nvme_admin": false, 00:18:45.026 "nvme_io": false, 00:18:45.026 "nvme_io_md": false, 00:18:45.026 "write_zeroes": true, 00:18:45.026 "zcopy": true, 00:18:45.026 "get_zone_info": false, 00:18:45.026 "zone_management": false, 00:18:45.026 "zone_append": false, 00:18:45.026 "compare": false, 00:18:45.026 "compare_and_write": false, 00:18:45.026 "abort": true, 00:18:45.026 "seek_hole": false, 00:18:45.026 "seek_data": false, 00:18:45.026 "copy": true, 00:18:45.026 "nvme_iov_md": false 00:18:45.026 }, 00:18:45.026 "memory_domains": [ 00:18:45.026 { 00:18:45.026 "dma_device_id": "system", 00:18:45.026 "dma_device_type": 1 00:18:45.026 }, 00:18:45.026 { 00:18:45.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.026 "dma_device_type": 2 00:18:45.026 } 00:18:45.026 ], 00:18:45.026 "driver_specific": {} 00:18:45.026 } 00:18:45.026 ] 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.026 00:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.285 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.285 "name": "Existed_Raid", 00:18:45.285 "uuid": "b179d4fa-46e0-42c3-9629-a548538d842e", 00:18:45.285 "strip_size_kb": 0, 00:18:45.285 "state": "online", 00:18:45.285 "raid_level": "raid1", 00:18:45.285 "superblock": false, 00:18:45.285 "num_base_bdevs": 3, 00:18:45.285 "num_base_bdevs_discovered": 3, 00:18:45.285 "num_base_bdevs_operational": 3, 00:18:45.285 "base_bdevs_list": [ 00:18:45.285 { 00:18:45.285 "name": "BaseBdev1", 00:18:45.285 "uuid": "878ae5a2-7aaf-4b39-ac45-f881cd583972", 00:18:45.285 "is_configured": true, 00:18:45.285 "data_offset": 0, 00:18:45.285 "data_size": 65536 00:18:45.285 }, 00:18:45.285 { 00:18:45.285 "name": "BaseBdev2", 00:18:45.285 "uuid": "3d5fff54-0e4c-44a1-952d-04c51dffd3b0", 00:18:45.285 "is_configured": true, 00:18:45.285 "data_offset": 0, 00:18:45.285 "data_size": 65536 00:18:45.285 }, 00:18:45.285 { 00:18:45.285 "name": "BaseBdev3", 00:18:45.285 "uuid": "3c8db38f-40ea-445d-b011-4a145a25f7c0", 00:18:45.285 "is_configured": true, 00:18:45.285 "data_offset": 0, 00:18:45.285 "data_size": 65536 00:18:45.285 } 00:18:45.285 ] 00:18:45.285 }' 00:18:45.285 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.285 00:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.544 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:45.544 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:45.544 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:45.544 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:45.544 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:45.544 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:45.544 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:45.544 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:45.804 [2024-07-25 00:02:41.568298] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.804 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:45.804 "name": "Existed_Raid", 00:18:45.804 "aliases": [ 00:18:45.804 "b179d4fa-46e0-42c3-9629-a548538d842e" 00:18:45.804 ], 00:18:45.804 "product_name": "Raid Volume", 00:18:45.804 "block_size": 512, 00:18:45.804 "num_blocks": 65536, 00:18:45.804 "uuid": "b179d4fa-46e0-42c3-9629-a548538d842e", 00:18:45.804 "assigned_rate_limits": { 00:18:45.804 "rw_ios_per_sec": 0, 00:18:45.804 "rw_mbytes_per_sec": 0, 00:18:45.804 "r_mbytes_per_sec": 0, 00:18:45.804 "w_mbytes_per_sec": 0 00:18:45.804 }, 00:18:45.804 "claimed": false, 00:18:45.804 "zoned": false, 00:18:45.804 "supported_io_types": { 00:18:45.804 "read": true, 00:18:45.804 "write": true, 00:18:45.804 "unmap": false, 00:18:45.804 "flush": false, 00:18:45.804 "reset": true, 00:18:45.804 "nvme_admin": false, 00:18:45.804 
"nvme_io": false, 00:18:45.804 "nvme_io_md": false, 00:18:45.804 "write_zeroes": true, 00:18:45.804 "zcopy": false, 00:18:45.804 "get_zone_info": false, 00:18:45.804 "zone_management": false, 00:18:45.804 "zone_append": false, 00:18:45.804 "compare": false, 00:18:45.804 "compare_and_write": false, 00:18:45.804 "abort": false, 00:18:45.804 "seek_hole": false, 00:18:45.804 "seek_data": false, 00:18:45.804 "copy": false, 00:18:45.804 "nvme_iov_md": false 00:18:45.804 }, 00:18:45.804 "memory_domains": [ 00:18:45.804 { 00:18:45.804 "dma_device_id": "system", 00:18:45.804 "dma_device_type": 1 00:18:45.804 }, 00:18:45.804 { 00:18:45.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.804 "dma_device_type": 2 00:18:45.804 }, 00:18:45.804 { 00:18:45.804 "dma_device_id": "system", 00:18:45.804 "dma_device_type": 1 00:18:45.804 }, 00:18:45.804 { 00:18:45.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.804 "dma_device_type": 2 00:18:45.804 }, 00:18:45.804 { 00:18:45.804 "dma_device_id": "system", 00:18:45.804 "dma_device_type": 1 00:18:45.804 }, 00:18:45.804 { 00:18:45.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.804 "dma_device_type": 2 00:18:45.804 } 00:18:45.804 ], 00:18:45.804 "driver_specific": { 00:18:45.804 "raid": { 00:18:45.804 "uuid": "b179d4fa-46e0-42c3-9629-a548538d842e", 00:18:45.804 "strip_size_kb": 0, 00:18:45.804 "state": "online", 00:18:45.804 "raid_level": "raid1", 00:18:45.804 "superblock": false, 00:18:45.804 "num_base_bdevs": 3, 00:18:45.804 "num_base_bdevs_discovered": 3, 00:18:45.804 "num_base_bdevs_operational": 3, 00:18:45.804 "base_bdevs_list": [ 00:18:45.804 { 00:18:45.804 "name": "BaseBdev1", 00:18:45.804 "uuid": "878ae5a2-7aaf-4b39-ac45-f881cd583972", 00:18:45.804 "is_configured": true, 00:18:45.804 "data_offset": 0, 00:18:45.804 "data_size": 65536 00:18:45.804 }, 00:18:45.804 { 00:18:45.804 "name": "BaseBdev2", 00:18:45.804 "uuid": "3d5fff54-0e4c-44a1-952d-04c51dffd3b0", 00:18:45.804 "is_configured": true, 00:18:45.804 "data_offset": 0, 00:18:45.804 "data_size": 65536 00:18:45.804 }, 00:18:45.804 { 00:18:45.804 "name": "BaseBdev3", 00:18:45.804 "uuid": "3c8db38f-40ea-445d-b011-4a145a25f7c0", 00:18:45.804 "is_configured": true, 00:18:45.804 "data_offset": 0, 00:18:45.804 "data_size": 65536 00:18:45.804 } 00:18:45.804 ] 00:18:45.804 } 00:18:45.804 } 00:18:45.804 }' 00:18:45.804 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:45.804 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:45.804 BaseBdev2 00:18:45.804 BaseBdev3' 00:18:45.804 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:45.804 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:45.804 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:46.063 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:46.063 "name": "BaseBdev1", 00:18:46.063 "aliases": [ 00:18:46.063 "878ae5a2-7aaf-4b39-ac45-f881cd583972" 00:18:46.063 ], 00:18:46.063 "product_name": "Malloc disk", 00:18:46.063 "block_size": 512, 00:18:46.063 "num_blocks": 65536, 00:18:46.063 "uuid": "878ae5a2-7aaf-4b39-ac45-f881cd583972", 00:18:46.063 "assigned_rate_limits": { 00:18:46.063 "rw_ios_per_sec": 0, 
00:18:46.063 "rw_mbytes_per_sec": 0, 00:18:46.063 "r_mbytes_per_sec": 0, 00:18:46.063 "w_mbytes_per_sec": 0 00:18:46.063 }, 00:18:46.063 "claimed": true, 00:18:46.063 "claim_type": "exclusive_write", 00:18:46.063 "zoned": false, 00:18:46.063 "supported_io_types": { 00:18:46.063 "read": true, 00:18:46.063 "write": true, 00:18:46.063 "unmap": true, 00:18:46.063 "flush": true, 00:18:46.063 "reset": true, 00:18:46.063 "nvme_admin": false, 00:18:46.063 "nvme_io": false, 00:18:46.063 "nvme_io_md": false, 00:18:46.063 "write_zeroes": true, 00:18:46.063 "zcopy": true, 00:18:46.063 "get_zone_info": false, 00:18:46.063 "zone_management": false, 00:18:46.063 "zone_append": false, 00:18:46.063 "compare": false, 00:18:46.063 "compare_and_write": false, 00:18:46.063 "abort": true, 00:18:46.063 "seek_hole": false, 00:18:46.063 "seek_data": false, 00:18:46.063 "copy": true, 00:18:46.063 "nvme_iov_md": false 00:18:46.063 }, 00:18:46.063 "memory_domains": [ 00:18:46.063 { 00:18:46.064 "dma_device_id": "system", 00:18:46.064 "dma_device_type": 1 00:18:46.064 }, 00:18:46.064 { 00:18:46.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.064 "dma_device_type": 2 00:18:46.064 } 00:18:46.064 ], 00:18:46.064 "driver_specific": {} 00:18:46.064 }' 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:46.064 00:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:46.323 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:46.323 "name": "BaseBdev2", 00:18:46.323 "aliases": [ 00:18:46.323 "3d5fff54-0e4c-44a1-952d-04c51dffd3b0" 00:18:46.323 ], 00:18:46.323 "product_name": "Malloc disk", 00:18:46.323 "block_size": 512, 00:18:46.323 "num_blocks": 65536, 00:18:46.323 "uuid": "3d5fff54-0e4c-44a1-952d-04c51dffd3b0", 00:18:46.323 "assigned_rate_limits": { 00:18:46.323 "rw_ios_per_sec": 0, 00:18:46.323 "rw_mbytes_per_sec": 0, 00:18:46.323 "r_mbytes_per_sec": 0, 00:18:46.323 "w_mbytes_per_sec": 0 00:18:46.323 }, 00:18:46.323 "claimed": true, 00:18:46.323 "claim_type": "exclusive_write", 
00:18:46.323 "zoned": false, 00:18:46.323 "supported_io_types": { 00:18:46.323 "read": true, 00:18:46.323 "write": true, 00:18:46.323 "unmap": true, 00:18:46.323 "flush": true, 00:18:46.323 "reset": true, 00:18:46.323 "nvme_admin": false, 00:18:46.323 "nvme_io": false, 00:18:46.323 "nvme_io_md": false, 00:18:46.323 "write_zeroes": true, 00:18:46.323 "zcopy": true, 00:18:46.323 "get_zone_info": false, 00:18:46.323 "zone_management": false, 00:18:46.323 "zone_append": false, 00:18:46.323 "compare": false, 00:18:46.323 "compare_and_write": false, 00:18:46.323 "abort": true, 00:18:46.323 "seek_hole": false, 00:18:46.323 "seek_data": false, 00:18:46.323 "copy": true, 00:18:46.323 "nvme_iov_md": false 00:18:46.323 }, 00:18:46.323 "memory_domains": [ 00:18:46.323 { 00:18:46.323 "dma_device_id": "system", 00:18:46.323 "dma_device_type": 1 00:18:46.323 }, 00:18:46.323 { 00:18:46.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.323 "dma_device_type": 2 00:18:46.323 } 00:18:46.323 ], 00:18:46.323 "driver_specific": {} 00:18:46.323 }' 00:18:46.323 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.323 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.323 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:46.323 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.323 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.323 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:46.323 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.582 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.582 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:46.582 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.582 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.582 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:46.582 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:46.582 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:46.582 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:46.841 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:46.841 "name": "BaseBdev3", 00:18:46.841 "aliases": [ 00:18:46.841 "3c8db38f-40ea-445d-b011-4a145a25f7c0" 00:18:46.841 ], 00:18:46.841 "product_name": "Malloc disk", 00:18:46.841 "block_size": 512, 00:18:46.841 "num_blocks": 65536, 00:18:46.841 "uuid": "3c8db38f-40ea-445d-b011-4a145a25f7c0", 00:18:46.841 "assigned_rate_limits": { 00:18:46.841 "rw_ios_per_sec": 0, 00:18:46.841 "rw_mbytes_per_sec": 0, 00:18:46.841 "r_mbytes_per_sec": 0, 00:18:46.841 "w_mbytes_per_sec": 0 00:18:46.841 }, 00:18:46.841 "claimed": true, 00:18:46.841 "claim_type": "exclusive_write", 00:18:46.841 "zoned": false, 00:18:46.841 "supported_io_types": { 00:18:46.841 "read": true, 00:18:46.841 "write": true, 00:18:46.841 "unmap": true, 00:18:46.841 "flush": true, 00:18:46.841 "reset": 
true, 00:18:46.841 "nvme_admin": false, 00:18:46.841 "nvme_io": false, 00:18:46.841 "nvme_io_md": false, 00:18:46.841 "write_zeroes": true, 00:18:46.841 "zcopy": true, 00:18:46.841 "get_zone_info": false, 00:18:46.841 "zone_management": false, 00:18:46.841 "zone_append": false, 00:18:46.841 "compare": false, 00:18:46.841 "compare_and_write": false, 00:18:46.841 "abort": true, 00:18:46.841 "seek_hole": false, 00:18:46.841 "seek_data": false, 00:18:46.841 "copy": true, 00:18:46.841 "nvme_iov_md": false 00:18:46.841 }, 00:18:46.841 "memory_domains": [ 00:18:46.841 { 00:18:46.841 "dma_device_id": "system", 00:18:46.841 "dma_device_type": 1 00:18:46.841 }, 00:18:46.841 { 00:18:46.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.841 "dma_device_type": 2 00:18:46.841 } 00:18:46.841 ], 00:18:46.841 "driver_specific": {} 00:18:46.841 }' 00:18:46.841 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.841 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.841 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:46.841 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.841 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.842 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:46.842 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.842 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.842 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:46.842 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.842 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.842 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:46.842 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:47.101 [2024-07-25 00:02:42.836343] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.101 00:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.360 00:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.360 "name": "Existed_Raid", 00:18:47.360 "uuid": "b179d4fa-46e0-42c3-9629-a548538d842e", 00:18:47.360 "strip_size_kb": 0, 00:18:47.360 "state": "online", 00:18:47.360 "raid_level": "raid1", 00:18:47.360 "superblock": false, 00:18:47.360 "num_base_bdevs": 3, 00:18:47.360 "num_base_bdevs_discovered": 2, 00:18:47.360 "num_base_bdevs_operational": 2, 00:18:47.360 "base_bdevs_list": [ 00:18:47.360 { 00:18:47.360 "name": null, 00:18:47.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.360 "is_configured": false, 00:18:47.360 "data_offset": 0, 00:18:47.360 "data_size": 65536 00:18:47.360 }, 00:18:47.360 { 00:18:47.360 "name": "BaseBdev2", 00:18:47.360 "uuid": "3d5fff54-0e4c-44a1-952d-04c51dffd3b0", 00:18:47.360 "is_configured": true, 00:18:47.360 "data_offset": 0, 00:18:47.360 "data_size": 65536 00:18:47.360 }, 00:18:47.360 { 00:18:47.360 "name": "BaseBdev3", 00:18:47.360 "uuid": "3c8db38f-40ea-445d-b011-4a145a25f7c0", 00:18:47.360 "is_configured": true, 00:18:47.360 "data_offset": 0, 00:18:47.360 "data_size": 65536 00:18:47.360 } 00:18:47.360 ] 00:18:47.360 }' 00:18:47.360 00:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.360 00:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.618 00:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:47.618 00:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:47.618 00:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.618 00:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:47.876 00:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:47.876 00:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:47.876 00:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:48.139 [2024-07-25 00:02:43.911021] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:48.411 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:48.411 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:48.411 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r 
'.[0]["name"]' 00:18:48.411 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.411 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:48.411 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:48.411 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:48.670 [2024-07-25 00:02:44.477762] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:48.670 [2024-07-25 00:02:44.477943] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:48.929 [2024-07-25 00:02:44.559442] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.929 [2024-07-25 00:02:44.559495] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.929 [2024-07-25 00:02:44.559514] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:18:48.929 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:48.929 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:48.929 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.929 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:49.188 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:49.188 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:49.188 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:49.188 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:49.188 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:49.188 00:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:49.447 BaseBdev2 00:18:49.447 00:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:49.447 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:49.447 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:49.447 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:49.447 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:49.447 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:49.447 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.706 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:49.965 [ 00:18:49.965 { 00:18:49.965 "name": "BaseBdev2", 00:18:49.965 "aliases": [ 00:18:49.965 "9211a75e-4cb4-4999-a061-e80409961c65" 00:18:49.965 ], 00:18:49.965 "product_name": "Malloc disk", 00:18:49.965 "block_size": 512, 00:18:49.965 "num_blocks": 65536, 00:18:49.965 "uuid": "9211a75e-4cb4-4999-a061-e80409961c65", 00:18:49.965 "assigned_rate_limits": { 00:18:49.965 "rw_ios_per_sec": 0, 00:18:49.965 "rw_mbytes_per_sec": 0, 00:18:49.965 "r_mbytes_per_sec": 0, 00:18:49.965 "w_mbytes_per_sec": 0 00:18:49.965 }, 00:18:49.965 "claimed": false, 00:18:49.965 "zoned": false, 00:18:49.965 "supported_io_types": { 00:18:49.965 "read": true, 00:18:49.965 "write": true, 00:18:49.965 "unmap": true, 00:18:49.965 "flush": true, 00:18:49.965 "reset": true, 00:18:49.965 "nvme_admin": false, 00:18:49.965 "nvme_io": false, 00:18:49.965 "nvme_io_md": false, 00:18:49.965 "write_zeroes": true, 00:18:49.965 "zcopy": true, 00:18:49.965 "get_zone_info": false, 00:18:49.965 "zone_management": false, 00:18:49.965 "zone_append": false, 00:18:49.965 "compare": false, 00:18:49.965 "compare_and_write": false, 00:18:49.965 "abort": true, 00:18:49.965 "seek_hole": false, 00:18:49.965 "seek_data": false, 00:18:49.965 "copy": true, 00:18:49.965 "nvme_iov_md": false 00:18:49.965 }, 00:18:49.965 "memory_domains": [ 00:18:49.965 { 00:18:49.965 "dma_device_id": "system", 00:18:49.965 "dma_device_type": 1 00:18:49.965 }, 00:18:49.965 { 00:18:49.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.965 "dma_device_type": 2 00:18:49.965 } 00:18:49.965 ], 00:18:49.965 "driver_specific": {} 00:18:49.965 } 00:18:49.966 ] 00:18:49.966 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:49.966 00:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:49.966 00:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:49.966 00:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:50.225 BaseBdev3 00:18:50.225 00:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:50.225 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:50.225 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:50.225 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:50.225 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:50.225 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:50.225 00:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:50.483 00:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:50.483 [ 00:18:50.483 { 00:18:50.483 "name": "BaseBdev3", 00:18:50.483 "aliases": [ 00:18:50.483 "a0872007-41d5-4173-a644-31b4af700cfe" 00:18:50.483 ], 00:18:50.483 "product_name": "Malloc disk", 00:18:50.483 "block_size": 512, 00:18:50.483 "num_blocks": 65536, 00:18:50.483 "uuid": 
"a0872007-41d5-4173-a644-31b4af700cfe", 00:18:50.483 "assigned_rate_limits": { 00:18:50.483 "rw_ios_per_sec": 0, 00:18:50.483 "rw_mbytes_per_sec": 0, 00:18:50.483 "r_mbytes_per_sec": 0, 00:18:50.483 "w_mbytes_per_sec": 0 00:18:50.483 }, 00:18:50.483 "claimed": false, 00:18:50.483 "zoned": false, 00:18:50.483 "supported_io_types": { 00:18:50.483 "read": true, 00:18:50.483 "write": true, 00:18:50.483 "unmap": true, 00:18:50.483 "flush": true, 00:18:50.483 "reset": true, 00:18:50.483 "nvme_admin": false, 00:18:50.483 "nvme_io": false, 00:18:50.483 "nvme_io_md": false, 00:18:50.483 "write_zeroes": true, 00:18:50.483 "zcopy": true, 00:18:50.483 "get_zone_info": false, 00:18:50.483 "zone_management": false, 00:18:50.483 "zone_append": false, 00:18:50.483 "compare": false, 00:18:50.483 "compare_and_write": false, 00:18:50.483 "abort": true, 00:18:50.483 "seek_hole": false, 00:18:50.483 "seek_data": false, 00:18:50.483 "copy": true, 00:18:50.483 "nvme_iov_md": false 00:18:50.483 }, 00:18:50.483 "memory_domains": [ 00:18:50.483 { 00:18:50.483 "dma_device_id": "system", 00:18:50.483 "dma_device_type": 1 00:18:50.483 }, 00:18:50.483 { 00:18:50.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.483 "dma_device_type": 2 00:18:50.483 } 00:18:50.483 ], 00:18:50.483 "driver_specific": {} 00:18:50.483 } 00:18:50.483 ] 00:18:50.483 00:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:50.483 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:50.483 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:50.483 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:50.743 [2024-07-25 00:02:46.550969] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:50.743 [2024-07-25 00:02:46.551243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:50.743 [2024-07-25 00:02:46.551288] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:50.743 [2024-07-25 00:02:46.553569] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.743 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.002 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.002 "name": "Existed_Raid", 00:18:51.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.002 "strip_size_kb": 0, 00:18:51.002 "state": "configuring", 00:18:51.002 "raid_level": "raid1", 00:18:51.002 "superblock": false, 00:18:51.002 "num_base_bdevs": 3, 00:18:51.002 "num_base_bdevs_discovered": 2, 00:18:51.002 "num_base_bdevs_operational": 3, 00:18:51.002 "base_bdevs_list": [ 00:18:51.002 { 00:18:51.002 "name": "BaseBdev1", 00:18:51.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.002 "is_configured": false, 00:18:51.002 "data_offset": 0, 00:18:51.002 "data_size": 0 00:18:51.002 }, 00:18:51.002 { 00:18:51.002 "name": "BaseBdev2", 00:18:51.002 "uuid": "9211a75e-4cb4-4999-a061-e80409961c65", 00:18:51.002 "is_configured": true, 00:18:51.002 "data_offset": 0, 00:18:51.002 "data_size": 65536 00:18:51.002 }, 00:18:51.002 { 00:18:51.002 "name": "BaseBdev3", 00:18:51.002 "uuid": "a0872007-41d5-4173-a644-31b4af700cfe", 00:18:51.002 "is_configured": true, 00:18:51.002 "data_offset": 0, 00:18:51.002 "data_size": 65536 00:18:51.002 } 00:18:51.002 ] 00:18:51.002 }' 00:18:51.002 00:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.002 00:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.261 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:51.520 [2024-07-25 00:02:47.363187] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:51.520 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:51.520 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:51.520 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:51.520 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:51.520 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:51.520 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:51.520 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.520 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.520 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.520 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.520 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.778 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.778 00:02:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.778 "name": "Existed_Raid", 00:18:51.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.778 "strip_size_kb": 0, 00:18:51.778 "state": "configuring", 00:18:51.778 "raid_level": "raid1", 00:18:51.778 "superblock": false, 00:18:51.778 "num_base_bdevs": 3, 00:18:51.778 "num_base_bdevs_discovered": 1, 00:18:51.778 "num_base_bdevs_operational": 3, 00:18:51.778 "base_bdevs_list": [ 00:18:51.778 { 00:18:51.778 "name": "BaseBdev1", 00:18:51.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.778 "is_configured": false, 00:18:51.778 "data_offset": 0, 00:18:51.778 "data_size": 0 00:18:51.778 }, 00:18:51.778 { 00:18:51.778 "name": null, 00:18:51.778 "uuid": "9211a75e-4cb4-4999-a061-e80409961c65", 00:18:51.778 "is_configured": false, 00:18:51.778 "data_offset": 0, 00:18:51.778 "data_size": 65536 00:18:51.778 }, 00:18:51.778 { 00:18:51.778 "name": "BaseBdev3", 00:18:51.778 "uuid": "a0872007-41d5-4173-a644-31b4af700cfe", 00:18:51.778 "is_configured": true, 00:18:51.778 "data_offset": 0, 00:18:51.778 "data_size": 65536 00:18:51.778 } 00:18:51.778 ] 00:18:51.778 }' 00:18:51.778 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.778 00:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.036 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.037 00:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:52.296 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:52.296 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:52.555 [2024-07-25 00:02:48.408370] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.555 BaseBdev1 00:18:52.813 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:52.813 00:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:52.813 00:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:52.813 00:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:52.813 00:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:52.813 00:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:52.813 00:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.813 00:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:53.072 [ 00:18:53.072 { 00:18:53.072 "name": "BaseBdev1", 00:18:53.072 "aliases": [ 00:18:53.072 "92dd77fe-8387-4524-8685-5e717df0456e" 00:18:53.072 ], 00:18:53.072 "product_name": "Malloc disk", 00:18:53.072 "block_size": 512, 00:18:53.072 "num_blocks": 65536, 00:18:53.072 "uuid": "92dd77fe-8387-4524-8685-5e717df0456e", 00:18:53.072 "assigned_rate_limits": { 00:18:53.072 
"rw_ios_per_sec": 0, 00:18:53.072 "rw_mbytes_per_sec": 0, 00:18:53.072 "r_mbytes_per_sec": 0, 00:18:53.072 "w_mbytes_per_sec": 0 00:18:53.072 }, 00:18:53.072 "claimed": true, 00:18:53.072 "claim_type": "exclusive_write", 00:18:53.072 "zoned": false, 00:18:53.072 "supported_io_types": { 00:18:53.072 "read": true, 00:18:53.072 "write": true, 00:18:53.072 "unmap": true, 00:18:53.072 "flush": true, 00:18:53.072 "reset": true, 00:18:53.072 "nvme_admin": false, 00:18:53.072 "nvme_io": false, 00:18:53.072 "nvme_io_md": false, 00:18:53.072 "write_zeroes": true, 00:18:53.072 "zcopy": true, 00:18:53.072 "get_zone_info": false, 00:18:53.072 "zone_management": false, 00:18:53.072 "zone_append": false, 00:18:53.072 "compare": false, 00:18:53.072 "compare_and_write": false, 00:18:53.072 "abort": true, 00:18:53.072 "seek_hole": false, 00:18:53.072 "seek_data": false, 00:18:53.072 "copy": true, 00:18:53.072 "nvme_iov_md": false 00:18:53.072 }, 00:18:53.072 "memory_domains": [ 00:18:53.072 { 00:18:53.072 "dma_device_id": "system", 00:18:53.072 "dma_device_type": 1 00:18:53.072 }, 00:18:53.072 { 00:18:53.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.072 "dma_device_type": 2 00:18:53.072 } 00:18:53.072 ], 00:18:53.072 "driver_specific": {} 00:18:53.072 } 00:18:53.072 ] 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.073 00:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.332 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.332 "name": "Existed_Raid", 00:18:53.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.332 "strip_size_kb": 0, 00:18:53.332 "state": "configuring", 00:18:53.332 "raid_level": "raid1", 00:18:53.332 "superblock": false, 00:18:53.332 "num_base_bdevs": 3, 00:18:53.332 "num_base_bdevs_discovered": 2, 00:18:53.332 "num_base_bdevs_operational": 3, 00:18:53.332 "base_bdevs_list": [ 00:18:53.332 { 00:18:53.332 "name": "BaseBdev1", 00:18:53.332 "uuid": "92dd77fe-8387-4524-8685-5e717df0456e", 00:18:53.332 "is_configured": true, 00:18:53.332 "data_offset": 0, 00:18:53.332 
"data_size": 65536 00:18:53.332 }, 00:18:53.332 { 00:18:53.332 "name": null, 00:18:53.332 "uuid": "9211a75e-4cb4-4999-a061-e80409961c65", 00:18:53.332 "is_configured": false, 00:18:53.332 "data_offset": 0, 00:18:53.332 "data_size": 65536 00:18:53.332 }, 00:18:53.332 { 00:18:53.332 "name": "BaseBdev3", 00:18:53.332 "uuid": "a0872007-41d5-4173-a644-31b4af700cfe", 00:18:53.332 "is_configured": true, 00:18:53.332 "data_offset": 0, 00:18:53.332 "data_size": 65536 00:18:53.332 } 00:18:53.332 ] 00:18:53.332 }' 00:18:53.332 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.332 00:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.591 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.591 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:53.849 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:53.849 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:54.108 [2024-07-25 00:02:49.900845] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.108 00:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.367 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.367 "name": "Existed_Raid", 00:18:54.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.367 "strip_size_kb": 0, 00:18:54.367 "state": "configuring", 00:18:54.367 "raid_level": "raid1", 00:18:54.367 "superblock": false, 00:18:54.367 "num_base_bdevs": 3, 00:18:54.367 "num_base_bdevs_discovered": 1, 00:18:54.367 "num_base_bdevs_operational": 3, 00:18:54.367 "base_bdevs_list": [ 00:18:54.367 { 00:18:54.367 "name": "BaseBdev1", 00:18:54.367 "uuid": "92dd77fe-8387-4524-8685-5e717df0456e", 00:18:54.367 "is_configured": true, 
00:18:54.367 "data_offset": 0, 00:18:54.367 "data_size": 65536 00:18:54.367 }, 00:18:54.367 { 00:18:54.367 "name": null, 00:18:54.367 "uuid": "9211a75e-4cb4-4999-a061-e80409961c65", 00:18:54.367 "is_configured": false, 00:18:54.367 "data_offset": 0, 00:18:54.367 "data_size": 65536 00:18:54.367 }, 00:18:54.367 { 00:18:54.367 "name": null, 00:18:54.367 "uuid": "a0872007-41d5-4173-a644-31b4af700cfe", 00:18:54.367 "is_configured": false, 00:18:54.367 "data_offset": 0, 00:18:54.367 "data_size": 65536 00:18:54.367 } 00:18:54.367 ] 00:18:54.367 }' 00:18:54.367 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.367 00:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.625 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.625 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:54.883 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:54.883 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:55.141 [2024-07-25 00:02:50.933211] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.141 00:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.399 00:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.399 "name": "Existed_Raid", 00:18:55.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.399 "strip_size_kb": 0, 00:18:55.399 "state": "configuring", 00:18:55.399 "raid_level": "raid1", 00:18:55.399 "superblock": false, 00:18:55.399 "num_base_bdevs": 3, 00:18:55.399 "num_base_bdevs_discovered": 2, 00:18:55.399 "num_base_bdevs_operational": 3, 00:18:55.399 "base_bdevs_list": [ 00:18:55.399 { 00:18:55.399 "name": "BaseBdev1", 00:18:55.399 "uuid": 
"92dd77fe-8387-4524-8685-5e717df0456e", 00:18:55.399 "is_configured": true, 00:18:55.399 "data_offset": 0, 00:18:55.399 "data_size": 65536 00:18:55.399 }, 00:18:55.399 { 00:18:55.399 "name": null, 00:18:55.399 "uuid": "9211a75e-4cb4-4999-a061-e80409961c65", 00:18:55.399 "is_configured": false, 00:18:55.399 "data_offset": 0, 00:18:55.399 "data_size": 65536 00:18:55.399 }, 00:18:55.399 { 00:18:55.399 "name": "BaseBdev3", 00:18:55.399 "uuid": "a0872007-41d5-4173-a644-31b4af700cfe", 00:18:55.399 "is_configured": true, 00:18:55.400 "data_offset": 0, 00:18:55.400 "data_size": 65536 00:18:55.400 } 00:18:55.400 ] 00:18:55.400 }' 00:18:55.400 00:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.400 00:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.967 00:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.967 00:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:55.967 00:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:55.967 00:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:56.225 [2024-07-25 00:02:51.965589] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:56.225 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:56.225 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:56.225 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:56.225 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:56.225 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:56.225 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:56.225 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.225 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.225 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.226 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.226 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.226 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.793 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.793 "name": "Existed_Raid", 00:18:56.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.793 "strip_size_kb": 0, 00:18:56.793 "state": "configuring", 00:18:56.793 "raid_level": "raid1", 00:18:56.793 "superblock": false, 00:18:56.793 "num_base_bdevs": 3, 00:18:56.793 "num_base_bdevs_discovered": 1, 00:18:56.793 "num_base_bdevs_operational": 3, 00:18:56.793 "base_bdevs_list": [ 00:18:56.793 { 00:18:56.793 
"name": null, 00:18:56.793 "uuid": "92dd77fe-8387-4524-8685-5e717df0456e", 00:18:56.793 "is_configured": false, 00:18:56.793 "data_offset": 0, 00:18:56.793 "data_size": 65536 00:18:56.793 }, 00:18:56.793 { 00:18:56.793 "name": null, 00:18:56.793 "uuid": "9211a75e-4cb4-4999-a061-e80409961c65", 00:18:56.793 "is_configured": false, 00:18:56.793 "data_offset": 0, 00:18:56.793 "data_size": 65536 00:18:56.793 }, 00:18:56.793 { 00:18:56.793 "name": "BaseBdev3", 00:18:56.793 "uuid": "a0872007-41d5-4173-a644-31b4af700cfe", 00:18:56.793 "is_configured": true, 00:18:56.793 "data_offset": 0, 00:18:56.793 "data_size": 65536 00:18:56.793 } 00:18:56.793 ] 00:18:56.793 }' 00:18:56.793 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.793 00:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.793 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.793 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:57.052 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:57.052 00:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:57.310 [2024-07-25 00:02:53.116560] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:57.310 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:57.310 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:57.310 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:57.310 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:57.310 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:57.311 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:57.311 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:57.311 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:57.311 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:57.311 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:57.311 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.311 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.573 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:57.573 "name": "Existed_Raid", 00:18:57.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.573 "strip_size_kb": 0, 00:18:57.573 "state": "configuring", 00:18:57.573 "raid_level": "raid1", 00:18:57.573 "superblock": false, 00:18:57.573 "num_base_bdevs": 3, 00:18:57.573 "num_base_bdevs_discovered": 2, 00:18:57.573 
"num_base_bdevs_operational": 3, 00:18:57.573 "base_bdevs_list": [ 00:18:57.573 { 00:18:57.573 "name": null, 00:18:57.573 "uuid": "92dd77fe-8387-4524-8685-5e717df0456e", 00:18:57.573 "is_configured": false, 00:18:57.573 "data_offset": 0, 00:18:57.573 "data_size": 65536 00:18:57.573 }, 00:18:57.573 { 00:18:57.573 "name": "BaseBdev2", 00:18:57.573 "uuid": "9211a75e-4cb4-4999-a061-e80409961c65", 00:18:57.573 "is_configured": true, 00:18:57.573 "data_offset": 0, 00:18:57.573 "data_size": 65536 00:18:57.573 }, 00:18:57.573 { 00:18:57.573 "name": "BaseBdev3", 00:18:57.573 "uuid": "a0872007-41d5-4173-a644-31b4af700cfe", 00:18:57.573 "is_configured": true, 00:18:57.573 "data_offset": 0, 00:18:57.573 "data_size": 65536 00:18:57.573 } 00:18:57.573 ] 00:18:57.573 }' 00:18:57.573 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:57.573 00:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.837 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.837 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:58.095 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:58.096 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:58.096 00:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.354 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 92dd77fe-8387-4524-8685-5e717df0456e 00:18:58.612 [2024-07-25 00:02:54.346141] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:58.612 [2024-07-25 00:02:54.346213] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:18:58.612 [2024-07-25 00:02:54.346241] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:58.612 [2024-07-25 00:02:54.346343] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 00:18:58.612 [2024-07-25 00:02:54.346728] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:18:58.613 [2024-07-25 00:02:54.346750] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008a80 00:18:58.613 [2024-07-25 00:02:54.347081] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.613 NewBaseBdev 00:18:58.613 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:58.613 00:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:58.613 00:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:58.613 00:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:58.613 00:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:58.613 00:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:58.613 
00:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:58.871 00:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:59.130 [ 00:18:59.130 { 00:18:59.130 "name": "NewBaseBdev", 00:18:59.130 "aliases": [ 00:18:59.130 "92dd77fe-8387-4524-8685-5e717df0456e" 00:18:59.130 ], 00:18:59.130 "product_name": "Malloc disk", 00:18:59.130 "block_size": 512, 00:18:59.130 "num_blocks": 65536, 00:18:59.130 "uuid": "92dd77fe-8387-4524-8685-5e717df0456e", 00:18:59.130 "assigned_rate_limits": { 00:18:59.130 "rw_ios_per_sec": 0, 00:18:59.130 "rw_mbytes_per_sec": 0, 00:18:59.130 "r_mbytes_per_sec": 0, 00:18:59.130 "w_mbytes_per_sec": 0 00:18:59.130 }, 00:18:59.130 "claimed": true, 00:18:59.130 "claim_type": "exclusive_write", 00:18:59.130 "zoned": false, 00:18:59.130 "supported_io_types": { 00:18:59.130 "read": true, 00:18:59.130 "write": true, 00:18:59.130 "unmap": true, 00:18:59.130 "flush": true, 00:18:59.130 "reset": true, 00:18:59.130 "nvme_admin": false, 00:18:59.130 "nvme_io": false, 00:18:59.130 "nvme_io_md": false, 00:18:59.130 "write_zeroes": true, 00:18:59.130 "zcopy": true, 00:18:59.130 "get_zone_info": false, 00:18:59.130 "zone_management": false, 00:18:59.130 "zone_append": false, 00:18:59.130 "compare": false, 00:18:59.130 "compare_and_write": false, 00:18:59.130 "abort": true, 00:18:59.130 "seek_hole": false, 00:18:59.130 "seek_data": false, 00:18:59.130 "copy": true, 00:18:59.130 "nvme_iov_md": false 00:18:59.130 }, 00:18:59.130 "memory_domains": [ 00:18:59.130 { 00:18:59.130 "dma_device_id": "system", 00:18:59.130 "dma_device_type": 1 00:18:59.130 }, 00:18:59.130 { 00:18:59.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.130 "dma_device_type": 2 00:18:59.130 } 00:18:59.130 ], 00:18:59.130 "driver_specific": {} 00:18:59.130 } 00:18:59.130 ] 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.130 00:02:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.392 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.392 "name": "Existed_Raid", 00:18:59.392 "uuid": "5d122dc5-c8ff-4644-b8e1-8a3d0a969150", 00:18:59.392 "strip_size_kb": 0, 00:18:59.392 "state": "online", 00:18:59.392 "raid_level": "raid1", 00:18:59.392 "superblock": false, 00:18:59.392 "num_base_bdevs": 3, 00:18:59.392 "num_base_bdevs_discovered": 3, 00:18:59.392 "num_base_bdevs_operational": 3, 00:18:59.392 "base_bdevs_list": [ 00:18:59.392 { 00:18:59.392 "name": "NewBaseBdev", 00:18:59.392 "uuid": "92dd77fe-8387-4524-8685-5e717df0456e", 00:18:59.392 "is_configured": true, 00:18:59.392 "data_offset": 0, 00:18:59.392 "data_size": 65536 00:18:59.392 }, 00:18:59.392 { 00:18:59.392 "name": "BaseBdev2", 00:18:59.392 "uuid": "9211a75e-4cb4-4999-a061-e80409961c65", 00:18:59.392 "is_configured": true, 00:18:59.392 "data_offset": 0, 00:18:59.392 "data_size": 65536 00:18:59.392 }, 00:18:59.392 { 00:18:59.392 "name": "BaseBdev3", 00:18:59.392 "uuid": "a0872007-41d5-4173-a644-31b4af700cfe", 00:18:59.392 "is_configured": true, 00:18:59.392 "data_offset": 0, 00:18:59.392 "data_size": 65536 00:18:59.392 } 00:18:59.392 ] 00:18:59.392 }' 00:18:59.392 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.392 00:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.675 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:59.675 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:59.675 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:59.675 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:59.675 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:59.675 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:59.675 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:59.675 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:59.945 [2024-07-25 00:02:55.691007] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.945 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:59.945 "name": "Existed_Raid", 00:18:59.945 "aliases": [ 00:18:59.945 "5d122dc5-c8ff-4644-b8e1-8a3d0a969150" 00:18:59.945 ], 00:18:59.945 "product_name": "Raid Volume", 00:18:59.945 "block_size": 512, 00:18:59.945 "num_blocks": 65536, 00:18:59.945 "uuid": "5d122dc5-c8ff-4644-b8e1-8a3d0a969150", 00:18:59.945 "assigned_rate_limits": { 00:18:59.945 "rw_ios_per_sec": 0, 00:18:59.945 "rw_mbytes_per_sec": 0, 00:18:59.945 "r_mbytes_per_sec": 0, 00:18:59.945 "w_mbytes_per_sec": 0 00:18:59.945 }, 00:18:59.945 "claimed": false, 00:18:59.945 "zoned": false, 00:18:59.945 "supported_io_types": { 00:18:59.945 "read": true, 00:18:59.945 "write": true, 00:18:59.945 "unmap": false, 00:18:59.945 "flush": false, 00:18:59.945 "reset": true, 00:18:59.945 "nvme_admin": false, 00:18:59.945 "nvme_io": false, 00:18:59.945 "nvme_io_md": false, 00:18:59.945 "write_zeroes": true, 00:18:59.945 
"zcopy": false, 00:18:59.945 "get_zone_info": false, 00:18:59.945 "zone_management": false, 00:18:59.945 "zone_append": false, 00:18:59.945 "compare": false, 00:18:59.945 "compare_and_write": false, 00:18:59.945 "abort": false, 00:18:59.945 "seek_hole": false, 00:18:59.945 "seek_data": false, 00:18:59.945 "copy": false, 00:18:59.945 "nvme_iov_md": false 00:18:59.945 }, 00:18:59.945 "memory_domains": [ 00:18:59.945 { 00:18:59.945 "dma_device_id": "system", 00:18:59.945 "dma_device_type": 1 00:18:59.945 }, 00:18:59.945 { 00:18:59.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.945 "dma_device_type": 2 00:18:59.945 }, 00:18:59.945 { 00:18:59.945 "dma_device_id": "system", 00:18:59.946 "dma_device_type": 1 00:18:59.946 }, 00:18:59.946 { 00:18:59.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.946 "dma_device_type": 2 00:18:59.946 }, 00:18:59.946 { 00:18:59.946 "dma_device_id": "system", 00:18:59.946 "dma_device_type": 1 00:18:59.946 }, 00:18:59.946 { 00:18:59.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.946 "dma_device_type": 2 00:18:59.946 } 00:18:59.946 ], 00:18:59.946 "driver_specific": { 00:18:59.946 "raid": { 00:18:59.946 "uuid": "5d122dc5-c8ff-4644-b8e1-8a3d0a969150", 00:18:59.946 "strip_size_kb": 0, 00:18:59.946 "state": "online", 00:18:59.946 "raid_level": "raid1", 00:18:59.946 "superblock": false, 00:18:59.946 "num_base_bdevs": 3, 00:18:59.946 "num_base_bdevs_discovered": 3, 00:18:59.946 "num_base_bdevs_operational": 3, 00:18:59.946 "base_bdevs_list": [ 00:18:59.946 { 00:18:59.946 "name": "NewBaseBdev", 00:18:59.946 "uuid": "92dd77fe-8387-4524-8685-5e717df0456e", 00:18:59.946 "is_configured": true, 00:18:59.946 "data_offset": 0, 00:18:59.946 "data_size": 65536 00:18:59.946 }, 00:18:59.946 { 00:18:59.946 "name": "BaseBdev2", 00:18:59.946 "uuid": "9211a75e-4cb4-4999-a061-e80409961c65", 00:18:59.946 "is_configured": true, 00:18:59.946 "data_offset": 0, 00:18:59.946 "data_size": 65536 00:18:59.946 }, 00:18:59.946 { 00:18:59.946 "name": "BaseBdev3", 00:18:59.946 "uuid": "a0872007-41d5-4173-a644-31b4af700cfe", 00:18:59.946 "is_configured": true, 00:18:59.946 "data_offset": 0, 00:18:59.946 "data_size": 65536 00:18:59.946 } 00:18:59.946 ] 00:18:59.946 } 00:18:59.946 } 00:18:59.946 }' 00:18:59.946 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:59.946 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:59.946 BaseBdev2 00:18:59.946 BaseBdev3' 00:18:59.946 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:59.946 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:59.946 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:00.205 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:00.205 "name": "NewBaseBdev", 00:19:00.205 "aliases": [ 00:19:00.205 "92dd77fe-8387-4524-8685-5e717df0456e" 00:19:00.205 ], 00:19:00.205 "product_name": "Malloc disk", 00:19:00.205 "block_size": 512, 00:19:00.205 "num_blocks": 65536, 00:19:00.205 "uuid": "92dd77fe-8387-4524-8685-5e717df0456e", 00:19:00.205 "assigned_rate_limits": { 00:19:00.205 "rw_ios_per_sec": 0, 00:19:00.205 "rw_mbytes_per_sec": 0, 00:19:00.205 "r_mbytes_per_sec": 0, 00:19:00.205 
"w_mbytes_per_sec": 0 00:19:00.205 }, 00:19:00.205 "claimed": true, 00:19:00.205 "claim_type": "exclusive_write", 00:19:00.205 "zoned": false, 00:19:00.205 "supported_io_types": { 00:19:00.205 "read": true, 00:19:00.205 "write": true, 00:19:00.205 "unmap": true, 00:19:00.205 "flush": true, 00:19:00.205 "reset": true, 00:19:00.205 "nvme_admin": false, 00:19:00.205 "nvme_io": false, 00:19:00.205 "nvme_io_md": false, 00:19:00.205 "write_zeroes": true, 00:19:00.205 "zcopy": true, 00:19:00.205 "get_zone_info": false, 00:19:00.205 "zone_management": false, 00:19:00.205 "zone_append": false, 00:19:00.205 "compare": false, 00:19:00.205 "compare_and_write": false, 00:19:00.205 "abort": true, 00:19:00.205 "seek_hole": false, 00:19:00.205 "seek_data": false, 00:19:00.205 "copy": true, 00:19:00.205 "nvme_iov_md": false 00:19:00.205 }, 00:19:00.205 "memory_domains": [ 00:19:00.205 { 00:19:00.205 "dma_device_id": "system", 00:19:00.205 "dma_device_type": 1 00:19:00.205 }, 00:19:00.205 { 00:19:00.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.205 "dma_device_type": 2 00:19:00.205 } 00:19:00.205 ], 00:19:00.205 "driver_specific": {} 00:19:00.205 }' 00:19:00.205 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.205 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.205 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:00.205 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:00.205 00:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:00.205 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:00.205 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.205 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.205 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:00.205 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.205 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.205 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:00.205 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:00.205 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:00.205 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:00.465 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:00.465 "name": "BaseBdev2", 00:19:00.465 "aliases": [ 00:19:00.465 "9211a75e-4cb4-4999-a061-e80409961c65" 00:19:00.465 ], 00:19:00.465 "product_name": "Malloc disk", 00:19:00.465 "block_size": 512, 00:19:00.465 "num_blocks": 65536, 00:19:00.465 "uuid": "9211a75e-4cb4-4999-a061-e80409961c65", 00:19:00.465 "assigned_rate_limits": { 00:19:00.465 "rw_ios_per_sec": 0, 00:19:00.465 "rw_mbytes_per_sec": 0, 00:19:00.465 "r_mbytes_per_sec": 0, 00:19:00.465 "w_mbytes_per_sec": 0 00:19:00.465 }, 00:19:00.465 "claimed": true, 00:19:00.465 "claim_type": "exclusive_write", 00:19:00.465 "zoned": false, 00:19:00.465 "supported_io_types": { 00:19:00.465 "read": 
true, 00:19:00.465 "write": true, 00:19:00.465 "unmap": true, 00:19:00.465 "flush": true, 00:19:00.465 "reset": true, 00:19:00.465 "nvme_admin": false, 00:19:00.465 "nvme_io": false, 00:19:00.465 "nvme_io_md": false, 00:19:00.465 "write_zeroes": true, 00:19:00.465 "zcopy": true, 00:19:00.465 "get_zone_info": false, 00:19:00.465 "zone_management": false, 00:19:00.465 "zone_append": false, 00:19:00.465 "compare": false, 00:19:00.465 "compare_and_write": false, 00:19:00.465 "abort": true, 00:19:00.465 "seek_hole": false, 00:19:00.465 "seek_data": false, 00:19:00.465 "copy": true, 00:19:00.465 "nvme_iov_md": false 00:19:00.465 }, 00:19:00.465 "memory_domains": [ 00:19:00.465 { 00:19:00.465 "dma_device_id": "system", 00:19:00.465 "dma_device_type": 1 00:19:00.465 }, 00:19:00.465 { 00:19:00.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.465 "dma_device_type": 2 00:19:00.465 } 00:19:00.465 ], 00:19:00.465 "driver_specific": {} 00:19:00.465 }' 00:19:00.465 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.465 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:00.724 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:00.984 "name": "BaseBdev3", 00:19:00.984 "aliases": [ 00:19:00.984 "a0872007-41d5-4173-a644-31b4af700cfe" 00:19:00.984 ], 00:19:00.984 "product_name": "Malloc disk", 00:19:00.984 "block_size": 512, 00:19:00.984 "num_blocks": 65536, 00:19:00.984 "uuid": "a0872007-41d5-4173-a644-31b4af700cfe", 00:19:00.984 "assigned_rate_limits": { 00:19:00.984 "rw_ios_per_sec": 0, 00:19:00.984 "rw_mbytes_per_sec": 0, 00:19:00.984 "r_mbytes_per_sec": 0, 00:19:00.984 "w_mbytes_per_sec": 0 00:19:00.984 }, 00:19:00.984 "claimed": true, 00:19:00.984 "claim_type": "exclusive_write", 00:19:00.984 "zoned": false, 00:19:00.984 "supported_io_types": { 00:19:00.984 "read": true, 00:19:00.984 "write": true, 00:19:00.984 "unmap": true, 00:19:00.984 "flush": true, 00:19:00.984 "reset": true, 00:19:00.984 "nvme_admin": false, 00:19:00.984 "nvme_io": false, 00:19:00.984 
"nvme_io_md": false, 00:19:00.984 "write_zeroes": true, 00:19:00.984 "zcopy": true, 00:19:00.984 "get_zone_info": false, 00:19:00.984 "zone_management": false, 00:19:00.984 "zone_append": false, 00:19:00.984 "compare": false, 00:19:00.984 "compare_and_write": false, 00:19:00.984 "abort": true, 00:19:00.984 "seek_hole": false, 00:19:00.984 "seek_data": false, 00:19:00.984 "copy": true, 00:19:00.984 "nvme_iov_md": false 00:19:00.984 }, 00:19:00.984 "memory_domains": [ 00:19:00.984 { 00:19:00.984 "dma_device_id": "system", 00:19:00.984 "dma_device_type": 1 00:19:00.984 }, 00:19:00.984 { 00:19:00.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.984 "dma_device_type": 2 00:19:00.984 } 00:19:00.984 ], 00:19:00.984 "driver_specific": {} 00:19:00.984 }' 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:00.984 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:01.243 [2024-07-25 00:02:56.970983] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:01.243 [2024-07-25 00:02:56.971026] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.244 [2024-07-25 00:02:56.971113] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.244 [2024-07-25 00:02:56.971555] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.244 [2024-07-25 00:02:56.971575] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name Existed_Raid, state offline 00:19:01.244 00:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 85196 00:19:01.244 00:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 85196 ']' 00:19:01.244 00:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 85196 00:19:01.244 00:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:19:01.244 00:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:01.244 00:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85196 00:19:01.244 killing 
process with pid 85196 00:19:01.244 00:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:01.244 00:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:01.244 00:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85196' 00:19:01.244 00:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 85196 00:19:01.244 00:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 85196 00:19:01.244 [2024-07-25 00:02:57.028261] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.503 [2024-07-25 00:02:57.258810] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:02.880 ************************************ 00:19:02.880 END TEST raid_state_function_test 00:19:02.880 ************************************ 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:02.880 00:19:02.880 real 0m24.197s 00:19:02.880 user 0m42.297s 00:19:02.880 sys 0m3.650s 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.880 00:02:58 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:19:02.880 00:02:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:02.880 00:02:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:02.880 00:02:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:02.880 ************************************ 00:19:02.880 START TEST raid_state_function_test_sb 00:19:02.880 ************************************ 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:02.880 Process raid pid: 86080 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=86080 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 86080' 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 86080 /var/tmp/spdk-raid.sock 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86080 ']' 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:02.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.880 00:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.880 [2024-07-25 00:02:58.489682] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
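At this point the harness has launched the RPC target for the superblock variant of the test. Reconstructed from the flags and helper calls visible in the trace (the backgrounding and pid capture are assumed, not shown verbatim), the launch is roughly:

    # start bdev_svc listening on the raid test socket, with bdev_raid debug logging
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # waitforlisten (from autotest_common.sh) blocks until the socket answers
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock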
00:19:02.880 [2024-07-25 00:02:58.489869] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.880 [2024-07-25 00:02:58.665842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.140 [2024-07-25 00:02:58.893307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.399 [2024-07-25 00:02:59.070537] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.658 00:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.658 00:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:19:03.658 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:03.917 [2024-07-25 00:02:59.624221] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:03.917 [2024-07-25 00:02:59.624500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:03.917 [2024-07-25 00:02:59.624648] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:03.917 [2024-07-25 00:02:59.624711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:03.917 [2024-07-25 00:02:59.624846] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:03.917 [2024-07-25 00:02:59.624913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.917 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.176 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.176 "name": "Existed_Raid", 00:19:04.176 "uuid": 
"62258fb0-1873-4f29-9a61-3e2c1468bc77", 00:19:04.176 "strip_size_kb": 0, 00:19:04.176 "state": "configuring", 00:19:04.176 "raid_level": "raid1", 00:19:04.176 "superblock": true, 00:19:04.176 "num_base_bdevs": 3, 00:19:04.176 "num_base_bdevs_discovered": 0, 00:19:04.176 "num_base_bdevs_operational": 3, 00:19:04.176 "base_bdevs_list": [ 00:19:04.176 { 00:19:04.176 "name": "BaseBdev1", 00:19:04.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.177 "is_configured": false, 00:19:04.177 "data_offset": 0, 00:19:04.177 "data_size": 0 00:19:04.177 }, 00:19:04.177 { 00:19:04.177 "name": "BaseBdev2", 00:19:04.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.177 "is_configured": false, 00:19:04.177 "data_offset": 0, 00:19:04.177 "data_size": 0 00:19:04.177 }, 00:19:04.177 { 00:19:04.177 "name": "BaseBdev3", 00:19:04.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.177 "is_configured": false, 00:19:04.177 "data_offset": 0, 00:19:04.177 "data_size": 0 00:19:04.177 } 00:19:04.177 ] 00:19:04.177 }' 00:19:04.177 00:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.177 00:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.435 00:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:04.694 [2024-07-25 00:03:00.456425] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:04.694 [2024-07-25 00:03:00.456477] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:19:04.694 00:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:04.952 [2024-07-25 00:03:00.684495] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:04.952 [2024-07-25 00:03:00.684576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:04.952 [2024-07-25 00:03:00.684601] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:04.952 [2024-07-25 00:03:00.684621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:04.952 [2024-07-25 00:03:00.684631] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:04.952 [2024-07-25 00:03:00.684645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:04.952 00:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:05.211 [2024-07-25 00:03:00.942938] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.211 BaseBdev1 00:19:05.211 00:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:05.211 00:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:05.211 00:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:05.211 00:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:19:05.211 00:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:05.211 00:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:05.211 00:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:05.470 00:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:05.729 [ 00:19:05.729 { 00:19:05.729 "name": "BaseBdev1", 00:19:05.729 "aliases": [ 00:19:05.729 "c3308c2a-3a51-42aa-8a1f-53b87b7caeb0" 00:19:05.729 ], 00:19:05.729 "product_name": "Malloc disk", 00:19:05.729 "block_size": 512, 00:19:05.729 "num_blocks": 65536, 00:19:05.729 "uuid": "c3308c2a-3a51-42aa-8a1f-53b87b7caeb0", 00:19:05.729 "assigned_rate_limits": { 00:19:05.729 "rw_ios_per_sec": 0, 00:19:05.729 "rw_mbytes_per_sec": 0, 00:19:05.729 "r_mbytes_per_sec": 0, 00:19:05.729 "w_mbytes_per_sec": 0 00:19:05.729 }, 00:19:05.729 "claimed": true, 00:19:05.729 "claim_type": "exclusive_write", 00:19:05.729 "zoned": false, 00:19:05.729 "supported_io_types": { 00:19:05.729 "read": true, 00:19:05.729 "write": true, 00:19:05.729 "unmap": true, 00:19:05.729 "flush": true, 00:19:05.729 "reset": true, 00:19:05.729 "nvme_admin": false, 00:19:05.729 "nvme_io": false, 00:19:05.729 "nvme_io_md": false, 00:19:05.729 "write_zeroes": true, 00:19:05.729 "zcopy": true, 00:19:05.729 "get_zone_info": false, 00:19:05.729 "zone_management": false, 00:19:05.729 "zone_append": false, 00:19:05.729 "compare": false, 00:19:05.729 "compare_and_write": false, 00:19:05.729 "abort": true, 00:19:05.729 "seek_hole": false, 00:19:05.729 "seek_data": false, 00:19:05.729 "copy": true, 00:19:05.729 "nvme_iov_md": false 00:19:05.729 }, 00:19:05.729 "memory_domains": [ 00:19:05.729 { 00:19:05.729 "dma_device_id": "system", 00:19:05.729 "dma_device_type": 1 00:19:05.729 }, 00:19:05.729 { 00:19:05.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.729 "dma_device_type": 2 00:19:05.729 } 00:19:05.729 ], 00:19:05.729 "driver_specific": {} 00:19:05.729 } 00:19:05.729 ] 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.729 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.987 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:05.987 "name": "Existed_Raid", 00:19:05.987 "uuid": "8e9c40c9-30ed-4b7c-9caa-98e07f4ff0e0", 00:19:05.987 "strip_size_kb": 0, 00:19:05.987 "state": "configuring", 00:19:05.987 "raid_level": "raid1", 00:19:05.987 "superblock": true, 00:19:05.987 "num_base_bdevs": 3, 00:19:05.987 "num_base_bdevs_discovered": 1, 00:19:05.987 "num_base_bdevs_operational": 3, 00:19:05.987 "base_bdevs_list": [ 00:19:05.987 { 00:19:05.987 "name": "BaseBdev1", 00:19:05.987 "uuid": "c3308c2a-3a51-42aa-8a1f-53b87b7caeb0", 00:19:05.987 "is_configured": true, 00:19:05.987 "data_offset": 2048, 00:19:05.987 "data_size": 63488 00:19:05.987 }, 00:19:05.987 { 00:19:05.987 "name": "BaseBdev2", 00:19:05.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.987 "is_configured": false, 00:19:05.987 "data_offset": 0, 00:19:05.987 "data_size": 0 00:19:05.987 }, 00:19:05.987 { 00:19:05.987 "name": "BaseBdev3", 00:19:05.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.987 "is_configured": false, 00:19:05.987 "data_offset": 0, 00:19:05.987 "data_size": 0 00:19:05.987 } 00:19:05.987 ] 00:19:05.987 }' 00:19:05.987 00:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:05.987 00:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.246 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:06.505 [2024-07-25 00:03:02.247399] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:06.505 [2024-07-25 00:03:02.247473] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:19:06.505 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:06.765 [2024-07-25 00:03:02.503553] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:06.765 [2024-07-25 00:03:02.505996] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:06.765 [2024-07-25 00:03:02.506068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:06.765 [2024-07-25 00:03:02.506100] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:06.765 [2024-07-25 00:03:02.506115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.765 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.024 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.024 "name": "Existed_Raid", 00:19:07.024 "uuid": "571a6a0b-9e07-484c-838e-ec8f8ce390c6", 00:19:07.024 "strip_size_kb": 0, 00:19:07.024 "state": "configuring", 00:19:07.024 "raid_level": "raid1", 00:19:07.024 "superblock": true, 00:19:07.024 "num_base_bdevs": 3, 00:19:07.024 "num_base_bdevs_discovered": 1, 00:19:07.024 "num_base_bdevs_operational": 3, 00:19:07.024 "base_bdevs_list": [ 00:19:07.024 { 00:19:07.024 "name": "BaseBdev1", 00:19:07.024 "uuid": "c3308c2a-3a51-42aa-8a1f-53b87b7caeb0", 00:19:07.024 "is_configured": true, 00:19:07.024 "data_offset": 2048, 00:19:07.024 "data_size": 63488 00:19:07.024 }, 00:19:07.024 { 00:19:07.024 "name": "BaseBdev2", 00:19:07.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.024 "is_configured": false, 00:19:07.024 "data_offset": 0, 00:19:07.024 "data_size": 0 00:19:07.024 }, 00:19:07.024 { 00:19:07.024 "name": "BaseBdev3", 00:19:07.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.024 "is_configured": false, 00:19:07.024 "data_offset": 0, 00:19:07.024 "data_size": 0 00:19:07.024 } 00:19:07.024 ] 00:19:07.024 }' 00:19:07.024 00:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.024 00:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.283 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:07.541 [2024-07-25 00:03:03.399427] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:07.541 BaseBdev2 00:19:07.799 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:07.799 00:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:07.799 00:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:07.799 00:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:07.799 00:03:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:07.799 00:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:07.799 00:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:08.057 00:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:08.057 [ 00:19:08.057 { 00:19:08.057 "name": "BaseBdev2", 00:19:08.057 "aliases": [ 00:19:08.057 "046cdce6-e73a-42e4-b7e3-ce24749b01e1" 00:19:08.057 ], 00:19:08.057 "product_name": "Malloc disk", 00:19:08.057 "block_size": 512, 00:19:08.057 "num_blocks": 65536, 00:19:08.057 "uuid": "046cdce6-e73a-42e4-b7e3-ce24749b01e1", 00:19:08.057 "assigned_rate_limits": { 00:19:08.057 "rw_ios_per_sec": 0, 00:19:08.057 "rw_mbytes_per_sec": 0, 00:19:08.057 "r_mbytes_per_sec": 0, 00:19:08.057 "w_mbytes_per_sec": 0 00:19:08.057 }, 00:19:08.057 "claimed": true, 00:19:08.057 "claim_type": "exclusive_write", 00:19:08.057 "zoned": false, 00:19:08.057 "supported_io_types": { 00:19:08.057 "read": true, 00:19:08.057 "write": true, 00:19:08.057 "unmap": true, 00:19:08.057 "flush": true, 00:19:08.057 "reset": true, 00:19:08.057 "nvme_admin": false, 00:19:08.057 "nvme_io": false, 00:19:08.057 "nvme_io_md": false, 00:19:08.057 "write_zeroes": true, 00:19:08.057 "zcopy": true, 00:19:08.057 "get_zone_info": false, 00:19:08.057 "zone_management": false, 00:19:08.057 "zone_append": false, 00:19:08.057 "compare": false, 00:19:08.057 "compare_and_write": false, 00:19:08.057 "abort": true, 00:19:08.057 "seek_hole": false, 00:19:08.057 "seek_data": false, 00:19:08.057 "copy": true, 00:19:08.057 "nvme_iov_md": false 00:19:08.057 }, 00:19:08.057 "memory_domains": [ 00:19:08.057 { 00:19:08.057 "dma_device_id": "system", 00:19:08.057 "dma_device_type": 1 00:19:08.057 }, 00:19:08.057 { 00:19:08.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.057 "dma_device_type": 2 00:19:08.057 } 00:19:08.057 ], 00:19:08.057 "driver_specific": {} 00:19:08.057 } 00:19:08.057 ] 00:19:08.057 00:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:08.057 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
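verify_raid_bdev_state, traced repeatedly in this run, fetches the full raid dump and filters it with jq before comparing fields. From the commands visible in the trace, the check is essentially:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    tmp=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # fields such as .state ("configuring" vs "online"), .raid_level and
    # .num_base_bdevs_discovered are then compared against the expected values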
00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.058 00:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.316 00:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.316 "name": "Existed_Raid", 00:19:08.316 "uuid": "571a6a0b-9e07-484c-838e-ec8f8ce390c6", 00:19:08.316 "strip_size_kb": 0, 00:19:08.316 "state": "configuring", 00:19:08.316 "raid_level": "raid1", 00:19:08.316 "superblock": true, 00:19:08.316 "num_base_bdevs": 3, 00:19:08.316 "num_base_bdevs_discovered": 2, 00:19:08.316 "num_base_bdevs_operational": 3, 00:19:08.316 "base_bdevs_list": [ 00:19:08.316 { 00:19:08.316 "name": "BaseBdev1", 00:19:08.316 "uuid": "c3308c2a-3a51-42aa-8a1f-53b87b7caeb0", 00:19:08.316 "is_configured": true, 00:19:08.316 "data_offset": 2048, 00:19:08.316 "data_size": 63488 00:19:08.316 }, 00:19:08.316 { 00:19:08.316 "name": "BaseBdev2", 00:19:08.316 "uuid": "046cdce6-e73a-42e4-b7e3-ce24749b01e1", 00:19:08.316 "is_configured": true, 00:19:08.316 "data_offset": 2048, 00:19:08.316 "data_size": 63488 00:19:08.316 }, 00:19:08.316 { 00:19:08.316 "name": "BaseBdev3", 00:19:08.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.316 "is_configured": false, 00:19:08.316 "data_offset": 0, 00:19:08.316 "data_size": 0 00:19:08.316 } 00:19:08.316 ] 00:19:08.316 }' 00:19:08.316 00:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.316 00:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.883 00:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:08.883 [2024-07-25 00:03:04.751414] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:08.883 [2024-07-25 00:03:04.752023] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:19:08.884 [2024-07-25 00:03:04.752205] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:08.884 [2024-07-25 00:03:04.752414] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:19:09.142 [2024-07-25 00:03:04.753026] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:19:09.142 [2024-07-25 00:03:04.753184] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:19:09.142 BaseBdev3 00:19:09.142 [2024-07-25 00:03:04.753519] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.142 00:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:09.142 00:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:09.142 00:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:09.142 00:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
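With two of three members discovered the state stays "configuring"; creating BaseBdev3 completes the set and the raid transitions to online ("blockcnt 63488, blocklen 512" in the configure messages above). The sizes are consistent with the -s superblock flag reserving the first 2048 blocks of each member:

    data_size  = num_blocks - data_offset = 65536 - 2048 = 63488 blocks
    raid1 size = data_size of one member  = 63488 * 512 B = 31 MiB

which matches data_offset 2048 / data_size 63488 in the base_bdevs_list entries and num_blocks 63488 in the Raid Volume dump.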
00:19:09.142 00:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:09.142 00:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:09.142 00:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:09.142 00:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:09.401 [ 00:19:09.401 { 00:19:09.401 "name": "BaseBdev3", 00:19:09.401 "aliases": [ 00:19:09.401 "e431f9e7-c352-49be-8923-c937cf95d109" 00:19:09.401 ], 00:19:09.401 "product_name": "Malloc disk", 00:19:09.401 "block_size": 512, 00:19:09.401 "num_blocks": 65536, 00:19:09.401 "uuid": "e431f9e7-c352-49be-8923-c937cf95d109", 00:19:09.401 "assigned_rate_limits": { 00:19:09.401 "rw_ios_per_sec": 0, 00:19:09.401 "rw_mbytes_per_sec": 0, 00:19:09.401 "r_mbytes_per_sec": 0, 00:19:09.401 "w_mbytes_per_sec": 0 00:19:09.401 }, 00:19:09.401 "claimed": true, 00:19:09.401 "claim_type": "exclusive_write", 00:19:09.401 "zoned": false, 00:19:09.401 "supported_io_types": { 00:19:09.401 "read": true, 00:19:09.401 "write": true, 00:19:09.401 "unmap": true, 00:19:09.401 "flush": true, 00:19:09.401 "reset": true, 00:19:09.401 "nvme_admin": false, 00:19:09.401 "nvme_io": false, 00:19:09.401 "nvme_io_md": false, 00:19:09.401 "write_zeroes": true, 00:19:09.401 "zcopy": true, 00:19:09.401 "get_zone_info": false, 00:19:09.401 "zone_management": false, 00:19:09.401 "zone_append": false, 00:19:09.401 "compare": false, 00:19:09.401 "compare_and_write": false, 00:19:09.401 "abort": true, 00:19:09.401 "seek_hole": false, 00:19:09.401 "seek_data": false, 00:19:09.401 "copy": true, 00:19:09.401 "nvme_iov_md": false 00:19:09.401 }, 00:19:09.401 "memory_domains": [ 00:19:09.401 { 00:19:09.401 "dma_device_id": "system", 00:19:09.401 "dma_device_type": 1 00:19:09.401 }, 00:19:09.401 { 00:19:09.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.401 "dma_device_type": 2 00:19:09.401 } 00:19:09.401 ], 00:19:09.401 "driver_specific": {} 00:19:09.401 } 00:19:09.401 ] 00:19:09.659 00:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:09.659 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.660 "name": "Existed_Raid", 00:19:09.660 "uuid": "571a6a0b-9e07-484c-838e-ec8f8ce390c6", 00:19:09.660 "strip_size_kb": 0, 00:19:09.660 "state": "online", 00:19:09.660 "raid_level": "raid1", 00:19:09.660 "superblock": true, 00:19:09.660 "num_base_bdevs": 3, 00:19:09.660 "num_base_bdevs_discovered": 3, 00:19:09.660 "num_base_bdevs_operational": 3, 00:19:09.660 "base_bdevs_list": [ 00:19:09.660 { 00:19:09.660 "name": "BaseBdev1", 00:19:09.660 "uuid": "c3308c2a-3a51-42aa-8a1f-53b87b7caeb0", 00:19:09.660 "is_configured": true, 00:19:09.660 "data_offset": 2048, 00:19:09.660 "data_size": 63488 00:19:09.660 }, 00:19:09.660 { 00:19:09.660 "name": "BaseBdev2", 00:19:09.660 "uuid": "046cdce6-e73a-42e4-b7e3-ce24749b01e1", 00:19:09.660 "is_configured": true, 00:19:09.660 "data_offset": 2048, 00:19:09.660 "data_size": 63488 00:19:09.660 }, 00:19:09.660 { 00:19:09.660 "name": "BaseBdev3", 00:19:09.660 "uuid": "e431f9e7-c352-49be-8923-c937cf95d109", 00:19:09.660 "is_configured": true, 00:19:09.660 "data_offset": 2048, 00:19:09.660 "data_size": 63488 00:19:09.660 } 00:19:09.660 ] 00:19:09.660 }' 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.660 00:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.227 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:10.227 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:10.227 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:10.227 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:10.227 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:10.227 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:10.227 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:10.227 00:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:10.227 [2024-07-25 00:03:06.088207] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.486 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:10.486 "name": "Existed_Raid", 00:19:10.486 "aliases": [ 00:19:10.486 "571a6a0b-9e07-484c-838e-ec8f8ce390c6" 00:19:10.486 ], 00:19:10.486 "product_name": "Raid Volume", 00:19:10.486 "block_size": 512, 00:19:10.486 "num_blocks": 63488, 00:19:10.486 "uuid": "571a6a0b-9e07-484c-838e-ec8f8ce390c6", 00:19:10.486 "assigned_rate_limits": { 00:19:10.486 
"rw_ios_per_sec": 0, 00:19:10.486 "rw_mbytes_per_sec": 0, 00:19:10.486 "r_mbytes_per_sec": 0, 00:19:10.486 "w_mbytes_per_sec": 0 00:19:10.486 }, 00:19:10.486 "claimed": false, 00:19:10.486 "zoned": false, 00:19:10.486 "supported_io_types": { 00:19:10.486 "read": true, 00:19:10.486 "write": true, 00:19:10.486 "unmap": false, 00:19:10.486 "flush": false, 00:19:10.486 "reset": true, 00:19:10.486 "nvme_admin": false, 00:19:10.486 "nvme_io": false, 00:19:10.486 "nvme_io_md": false, 00:19:10.486 "write_zeroes": true, 00:19:10.486 "zcopy": false, 00:19:10.486 "get_zone_info": false, 00:19:10.486 "zone_management": false, 00:19:10.486 "zone_append": false, 00:19:10.486 "compare": false, 00:19:10.486 "compare_and_write": false, 00:19:10.486 "abort": false, 00:19:10.486 "seek_hole": false, 00:19:10.486 "seek_data": false, 00:19:10.486 "copy": false, 00:19:10.486 "nvme_iov_md": false 00:19:10.486 }, 00:19:10.486 "memory_domains": [ 00:19:10.486 { 00:19:10.486 "dma_device_id": "system", 00:19:10.486 "dma_device_type": 1 00:19:10.486 }, 00:19:10.486 { 00:19:10.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.486 "dma_device_type": 2 00:19:10.486 }, 00:19:10.486 { 00:19:10.486 "dma_device_id": "system", 00:19:10.486 "dma_device_type": 1 00:19:10.486 }, 00:19:10.486 { 00:19:10.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.486 "dma_device_type": 2 00:19:10.486 }, 00:19:10.486 { 00:19:10.486 "dma_device_id": "system", 00:19:10.486 "dma_device_type": 1 00:19:10.486 }, 00:19:10.486 { 00:19:10.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.486 "dma_device_type": 2 00:19:10.486 } 00:19:10.486 ], 00:19:10.486 "driver_specific": { 00:19:10.486 "raid": { 00:19:10.486 "uuid": "571a6a0b-9e07-484c-838e-ec8f8ce390c6", 00:19:10.486 "strip_size_kb": 0, 00:19:10.486 "state": "online", 00:19:10.486 "raid_level": "raid1", 00:19:10.486 "superblock": true, 00:19:10.486 "num_base_bdevs": 3, 00:19:10.486 "num_base_bdevs_discovered": 3, 00:19:10.486 "num_base_bdevs_operational": 3, 00:19:10.486 "base_bdevs_list": [ 00:19:10.486 { 00:19:10.486 "name": "BaseBdev1", 00:19:10.486 "uuid": "c3308c2a-3a51-42aa-8a1f-53b87b7caeb0", 00:19:10.486 "is_configured": true, 00:19:10.486 "data_offset": 2048, 00:19:10.486 "data_size": 63488 00:19:10.486 }, 00:19:10.486 { 00:19:10.486 "name": "BaseBdev2", 00:19:10.486 "uuid": "046cdce6-e73a-42e4-b7e3-ce24749b01e1", 00:19:10.486 "is_configured": true, 00:19:10.486 "data_offset": 2048, 00:19:10.486 "data_size": 63488 00:19:10.486 }, 00:19:10.486 { 00:19:10.486 "name": "BaseBdev3", 00:19:10.486 "uuid": "e431f9e7-c352-49be-8923-c937cf95d109", 00:19:10.486 "is_configured": true, 00:19:10.486 "data_offset": 2048, 00:19:10.486 "data_size": 63488 00:19:10.486 } 00:19:10.486 ] 00:19:10.486 } 00:19:10.486 } 00:19:10.486 }' 00:19:10.486 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:10.486 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:10.486 BaseBdev2 00:19:10.486 BaseBdev3' 00:19:10.486 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:10.486 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:10.486 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:10.745 00:03:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:10.745 "name": "BaseBdev1", 00:19:10.745 "aliases": [ 00:19:10.745 "c3308c2a-3a51-42aa-8a1f-53b87b7caeb0" 00:19:10.745 ], 00:19:10.745 "product_name": "Malloc disk", 00:19:10.745 "block_size": 512, 00:19:10.745 "num_blocks": 65536, 00:19:10.745 "uuid": "c3308c2a-3a51-42aa-8a1f-53b87b7caeb0", 00:19:10.745 "assigned_rate_limits": { 00:19:10.745 "rw_ios_per_sec": 0, 00:19:10.745 "rw_mbytes_per_sec": 0, 00:19:10.745 "r_mbytes_per_sec": 0, 00:19:10.745 "w_mbytes_per_sec": 0 00:19:10.745 }, 00:19:10.745 "claimed": true, 00:19:10.745 "claim_type": "exclusive_write", 00:19:10.745 "zoned": false, 00:19:10.745 "supported_io_types": { 00:19:10.745 "read": true, 00:19:10.745 "write": true, 00:19:10.745 "unmap": true, 00:19:10.745 "flush": true, 00:19:10.745 "reset": true, 00:19:10.745 "nvme_admin": false, 00:19:10.745 "nvme_io": false, 00:19:10.745 "nvme_io_md": false, 00:19:10.745 "write_zeroes": true, 00:19:10.745 "zcopy": true, 00:19:10.745 "get_zone_info": false, 00:19:10.745 "zone_management": false, 00:19:10.745 "zone_append": false, 00:19:10.745 "compare": false, 00:19:10.745 "compare_and_write": false, 00:19:10.745 "abort": true, 00:19:10.745 "seek_hole": false, 00:19:10.745 "seek_data": false, 00:19:10.745 "copy": true, 00:19:10.745 "nvme_iov_md": false 00:19:10.745 }, 00:19:10.745 "memory_domains": [ 00:19:10.745 { 00:19:10.745 "dma_device_id": "system", 00:19:10.745 "dma_device_type": 1 00:19:10.745 }, 00:19:10.745 { 00:19:10.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.745 "dma_device_type": 2 00:19:10.745 } 00:19:10.745 ], 00:19:10.745 "driver_specific": {} 00:19:10.745 }' 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:10.745 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:11.004 "name": "BaseBdev2", 00:19:11.004 "aliases": [ 
00:19:11.004 "046cdce6-e73a-42e4-b7e3-ce24749b01e1" 00:19:11.004 ], 00:19:11.004 "product_name": "Malloc disk", 00:19:11.004 "block_size": 512, 00:19:11.004 "num_blocks": 65536, 00:19:11.004 "uuid": "046cdce6-e73a-42e4-b7e3-ce24749b01e1", 00:19:11.004 "assigned_rate_limits": { 00:19:11.004 "rw_ios_per_sec": 0, 00:19:11.004 "rw_mbytes_per_sec": 0, 00:19:11.004 "r_mbytes_per_sec": 0, 00:19:11.004 "w_mbytes_per_sec": 0 00:19:11.004 }, 00:19:11.004 "claimed": true, 00:19:11.004 "claim_type": "exclusive_write", 00:19:11.004 "zoned": false, 00:19:11.004 "supported_io_types": { 00:19:11.004 "read": true, 00:19:11.004 "write": true, 00:19:11.004 "unmap": true, 00:19:11.004 "flush": true, 00:19:11.004 "reset": true, 00:19:11.004 "nvme_admin": false, 00:19:11.004 "nvme_io": false, 00:19:11.004 "nvme_io_md": false, 00:19:11.004 "write_zeroes": true, 00:19:11.004 "zcopy": true, 00:19:11.004 "get_zone_info": false, 00:19:11.004 "zone_management": false, 00:19:11.004 "zone_append": false, 00:19:11.004 "compare": false, 00:19:11.004 "compare_and_write": false, 00:19:11.004 "abort": true, 00:19:11.004 "seek_hole": false, 00:19:11.004 "seek_data": false, 00:19:11.004 "copy": true, 00:19:11.004 "nvme_iov_md": false 00:19:11.004 }, 00:19:11.004 "memory_domains": [ 00:19:11.004 { 00:19:11.004 "dma_device_id": "system", 00:19:11.004 "dma_device_type": 1 00:19:11.004 }, 00:19:11.004 { 00:19:11.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.004 "dma_device_type": 2 00:19:11.004 } 00:19:11.004 ], 00:19:11.004 "driver_specific": {} 00:19:11.004 }' 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:11.004 00:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:11.262 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:11.263 "name": "BaseBdev3", 00:19:11.263 "aliases": [ 00:19:11.263 "e431f9e7-c352-49be-8923-c937cf95d109" 00:19:11.263 ], 00:19:11.263 "product_name": "Malloc disk", 00:19:11.263 "block_size": 512, 
00:19:11.263 "num_blocks": 65536, 00:19:11.263 "uuid": "e431f9e7-c352-49be-8923-c937cf95d109", 00:19:11.263 "assigned_rate_limits": { 00:19:11.263 "rw_ios_per_sec": 0, 00:19:11.263 "rw_mbytes_per_sec": 0, 00:19:11.263 "r_mbytes_per_sec": 0, 00:19:11.263 "w_mbytes_per_sec": 0 00:19:11.263 }, 00:19:11.263 "claimed": true, 00:19:11.263 "claim_type": "exclusive_write", 00:19:11.263 "zoned": false, 00:19:11.263 "supported_io_types": { 00:19:11.263 "read": true, 00:19:11.263 "write": true, 00:19:11.263 "unmap": true, 00:19:11.263 "flush": true, 00:19:11.263 "reset": true, 00:19:11.263 "nvme_admin": false, 00:19:11.263 "nvme_io": false, 00:19:11.263 "nvme_io_md": false, 00:19:11.263 "write_zeroes": true, 00:19:11.263 "zcopy": true, 00:19:11.263 "get_zone_info": false, 00:19:11.263 "zone_management": false, 00:19:11.263 "zone_append": false, 00:19:11.263 "compare": false, 00:19:11.263 "compare_and_write": false, 00:19:11.263 "abort": true, 00:19:11.263 "seek_hole": false, 00:19:11.263 "seek_data": false, 00:19:11.263 "copy": true, 00:19:11.263 "nvme_iov_md": false 00:19:11.263 }, 00:19:11.263 "memory_domains": [ 00:19:11.263 { 00:19:11.263 "dma_device_id": "system", 00:19:11.263 "dma_device_type": 1 00:19:11.263 }, 00:19:11.263 { 00:19:11.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.263 "dma_device_type": 2 00:19:11.263 } 00:19:11.263 ], 00:19:11.263 "driver_specific": {} 00:19:11.263 }' 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:11.263 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:11.521 [2024-07-25 00:03:07.332324] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:11.779 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:11.779 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:11.779 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:11.779 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:19:11.779 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:11.779 00:03:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:11.779 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:11.779 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:11.779 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:11.780 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:11.780 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:11.780 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:11.780 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:11.780 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:11.780 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.780 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.780 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.038 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:12.038 "name": "Existed_Raid", 00:19:12.038 "uuid": "571a6a0b-9e07-484c-838e-ec8f8ce390c6", 00:19:12.038 "strip_size_kb": 0, 00:19:12.038 "state": "online", 00:19:12.038 "raid_level": "raid1", 00:19:12.038 "superblock": true, 00:19:12.038 "num_base_bdevs": 3, 00:19:12.038 "num_base_bdevs_discovered": 2, 00:19:12.038 "num_base_bdevs_operational": 2, 00:19:12.038 "base_bdevs_list": [ 00:19:12.038 { 00:19:12.038 "name": null, 00:19:12.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.038 "is_configured": false, 00:19:12.038 "data_offset": 2048, 00:19:12.038 "data_size": 63488 00:19:12.038 }, 00:19:12.038 { 00:19:12.038 "name": "BaseBdev2", 00:19:12.038 "uuid": "046cdce6-e73a-42e4-b7e3-ce24749b01e1", 00:19:12.038 "is_configured": true, 00:19:12.038 "data_offset": 2048, 00:19:12.038 "data_size": 63488 00:19:12.038 }, 00:19:12.038 { 00:19:12.038 "name": "BaseBdev3", 00:19:12.038 "uuid": "e431f9e7-c352-49be-8923-c937cf95d109", 00:19:12.038 "is_configured": true, 00:19:12.038 "data_offset": 2048, 00:19:12.038 "data_size": 63488 00:19:12.038 } 00:19:12.038 ] 00:19:12.038 }' 00:19:12.038 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:12.038 00:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.296 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:12.296 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:12.296 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.296 00:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:12.555 00:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:12.555 00:03:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:12.555 00:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:12.555 [2024-07-25 00:03:08.384827] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:12.823 00:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:12.823 00:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:12.823 00:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.823 00:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:13.093 00:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:13.093 00:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:13.093 00:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:13.352 [2024-07-25 00:03:08.992308] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:13.352 [2024-07-25 00:03:08.992428] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:13.352 [2024-07-25 00:03:09.073923] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.352 [2024-07-25 00:03:09.073979] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.352 [2024-07-25 00:03:09.073997] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:19:13.352 00:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:13.352 00:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:13.352 00:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:13.352 00:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.610 00:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:13.610 00:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:13.610 00:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:13.610 00:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:13.610 00:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:13.610 00:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:13.868 BaseBdev2 00:19:13.868 00:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:13.868 00:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:13.868 00:03:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:13.868 00:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:13.868 00:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:13.868 00:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:13.868 00:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.126 00:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:14.385 [ 00:19:14.385 { 00:19:14.385 "name": "BaseBdev2", 00:19:14.385 "aliases": [ 00:19:14.385 "3535325b-9a4b-4049-9246-b3de7c6c1b1a" 00:19:14.385 ], 00:19:14.385 "product_name": "Malloc disk", 00:19:14.385 "block_size": 512, 00:19:14.385 "num_blocks": 65536, 00:19:14.385 "uuid": "3535325b-9a4b-4049-9246-b3de7c6c1b1a", 00:19:14.385 "assigned_rate_limits": { 00:19:14.385 "rw_ios_per_sec": 0, 00:19:14.385 "rw_mbytes_per_sec": 0, 00:19:14.385 "r_mbytes_per_sec": 0, 00:19:14.385 "w_mbytes_per_sec": 0 00:19:14.385 }, 00:19:14.385 "claimed": false, 00:19:14.385 "zoned": false, 00:19:14.385 "supported_io_types": { 00:19:14.385 "read": true, 00:19:14.385 "write": true, 00:19:14.385 "unmap": true, 00:19:14.385 "flush": true, 00:19:14.385 "reset": true, 00:19:14.385 "nvme_admin": false, 00:19:14.385 "nvme_io": false, 00:19:14.385 "nvme_io_md": false, 00:19:14.385 "write_zeroes": true, 00:19:14.385 "zcopy": true, 00:19:14.385 "get_zone_info": false, 00:19:14.385 "zone_management": false, 00:19:14.385 "zone_append": false, 00:19:14.385 "compare": false, 00:19:14.385 "compare_and_write": false, 00:19:14.385 "abort": true, 00:19:14.385 "seek_hole": false, 00:19:14.385 "seek_data": false, 00:19:14.385 "copy": true, 00:19:14.385 "nvme_iov_md": false 00:19:14.385 }, 00:19:14.385 "memory_domains": [ 00:19:14.385 { 00:19:14.385 "dma_device_id": "system", 00:19:14.385 "dma_device_type": 1 00:19:14.385 }, 00:19:14.385 { 00:19:14.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.385 "dma_device_type": 2 00:19:14.385 } 00:19:14.385 ], 00:19:14.385 "driver_specific": {} 00:19:14.385 } 00:19:14.385 ] 00:19:14.385 00:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:14.385 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:14.385 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:14.385 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:14.644 BaseBdev3 00:19:14.644 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:14.644 00:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:14.644 00:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:14.644 00:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:14.644 00:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
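The trace above repeats a small setup pattern from bdev_raid.sh: create a Malloc base bdev over RPC, wait for bdev examine to complete, then query the bdev with a timeout (what the waitforbdev helper does). A minimal standalone sketch of that pattern, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and that rpc.py and jq are available at the paths shown in the log:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Create a 32 MB Malloc bdev with 512-byte blocks, named BaseBdev2
    # (65536 blocks x 512 bytes, matching the num_blocks in the dumps above).
    $RPC bdev_malloc_create 32 512 -b BaseBdev2

    # Let the bdev layer finish examining newly registered bdevs.
    $RPC bdev_wait_for_examine

    # Confirm the bdev exists, giving up after 2000 ms, as waitforbdev does.
    $RPC bdev_get_bdevs -b BaseBdev2 -t 2000 | jq -r '.[0]["name"]'
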
00:19:14.644 00:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:14.644 00:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.903 00:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:14.903 [ 00:19:14.903 { 00:19:14.903 "name": "BaseBdev3", 00:19:14.903 "aliases": [ 00:19:14.903 "861b1ec1-4022-49b6-9436-686a332dbc5c" 00:19:14.903 ], 00:19:14.903 "product_name": "Malloc disk", 00:19:14.903 "block_size": 512, 00:19:14.903 "num_blocks": 65536, 00:19:14.903 "uuid": "861b1ec1-4022-49b6-9436-686a332dbc5c", 00:19:14.903 "assigned_rate_limits": { 00:19:14.903 "rw_ios_per_sec": 0, 00:19:14.903 "rw_mbytes_per_sec": 0, 00:19:14.903 "r_mbytes_per_sec": 0, 00:19:14.903 "w_mbytes_per_sec": 0 00:19:14.903 }, 00:19:14.903 "claimed": false, 00:19:14.903 "zoned": false, 00:19:14.903 "supported_io_types": { 00:19:14.903 "read": true, 00:19:14.903 "write": true, 00:19:14.903 "unmap": true, 00:19:14.903 "flush": true, 00:19:14.903 "reset": true, 00:19:14.903 "nvme_admin": false, 00:19:14.903 "nvme_io": false, 00:19:14.903 "nvme_io_md": false, 00:19:14.903 "write_zeroes": true, 00:19:14.903 "zcopy": true, 00:19:14.903 "get_zone_info": false, 00:19:14.903 "zone_management": false, 00:19:14.903 "zone_append": false, 00:19:14.903 "compare": false, 00:19:14.903 "compare_and_write": false, 00:19:14.903 "abort": true, 00:19:14.903 "seek_hole": false, 00:19:14.903 "seek_data": false, 00:19:14.903 "copy": true, 00:19:14.903 "nvme_iov_md": false 00:19:14.903 }, 00:19:14.903 "memory_domains": [ 00:19:14.903 { 00:19:14.903 "dma_device_id": "system", 00:19:14.903 "dma_device_type": 1 00:19:14.903 }, 00:19:14.903 { 00:19:14.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.903 "dma_device_type": 2 00:19:14.903 } 00:19:14.903 ], 00:19:14.903 "driver_specific": {} 00:19:14.903 } 00:19:14.903 ] 00:19:14.903 00:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:14.903 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:14.903 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:14.903 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:15.163 [2024-07-25 00:03:10.942657] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:15.163 [2024-07-25 00:03:10.942736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:15.163 [2024-07-25 00:03:10.942766] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.163 [2024-07-25 00:03:10.945002] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.163 00:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.421 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.421 "name": "Existed_Raid", 00:19:15.421 "uuid": "e929fb1d-373d-45bf-8083-5e082903feab", 00:19:15.421 "strip_size_kb": 0, 00:19:15.421 "state": "configuring", 00:19:15.421 "raid_level": "raid1", 00:19:15.421 "superblock": true, 00:19:15.421 "num_base_bdevs": 3, 00:19:15.421 "num_base_bdevs_discovered": 2, 00:19:15.421 "num_base_bdevs_operational": 3, 00:19:15.421 "base_bdevs_list": [ 00:19:15.421 { 00:19:15.421 "name": "BaseBdev1", 00:19:15.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.421 "is_configured": false, 00:19:15.421 "data_offset": 0, 00:19:15.421 "data_size": 0 00:19:15.421 }, 00:19:15.421 { 00:19:15.421 "name": "BaseBdev2", 00:19:15.421 "uuid": "3535325b-9a4b-4049-9246-b3de7c6c1b1a", 00:19:15.421 "is_configured": true, 00:19:15.421 "data_offset": 2048, 00:19:15.421 "data_size": 63488 00:19:15.421 }, 00:19:15.421 { 00:19:15.421 "name": "BaseBdev3", 00:19:15.421 "uuid": "861b1ec1-4022-49b6-9436-686a332dbc5c", 00:19:15.421 "is_configured": true, 00:19:15.421 "data_offset": 2048, 00:19:15.421 "data_size": 63488 00:19:15.421 } 00:19:15.421 ] 00:19:15.421 }' 00:19:15.421 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.421 00:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.681 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:15.940 [2024-07-25 00:03:11.774938] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:15.940 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:15.940 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.940 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.940 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:15.940 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:15.940 00:03:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:15.940 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.940 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.940 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.940 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.940 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.940 00:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.199 00:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.199 "name": "Existed_Raid", 00:19:16.199 "uuid": "e929fb1d-373d-45bf-8083-5e082903feab", 00:19:16.199 "strip_size_kb": 0, 00:19:16.199 "state": "configuring", 00:19:16.199 "raid_level": "raid1", 00:19:16.199 "superblock": true, 00:19:16.199 "num_base_bdevs": 3, 00:19:16.199 "num_base_bdevs_discovered": 1, 00:19:16.199 "num_base_bdevs_operational": 3, 00:19:16.199 "base_bdevs_list": [ 00:19:16.199 { 00:19:16.199 "name": "BaseBdev1", 00:19:16.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.200 "is_configured": false, 00:19:16.200 "data_offset": 0, 00:19:16.200 "data_size": 0 00:19:16.200 }, 00:19:16.200 { 00:19:16.200 "name": null, 00:19:16.200 "uuid": "3535325b-9a4b-4049-9246-b3de7c6c1b1a", 00:19:16.200 "is_configured": false, 00:19:16.200 "data_offset": 2048, 00:19:16.200 "data_size": 63488 00:19:16.200 }, 00:19:16.200 { 00:19:16.200 "name": "BaseBdev3", 00:19:16.200 "uuid": "861b1ec1-4022-49b6-9436-686a332dbc5c", 00:19:16.200 "is_configured": true, 00:19:16.200 "data_offset": 2048, 00:19:16.200 "data_size": 63488 00:19:16.200 } 00:19:16.200 ] 00:19:16.200 }' 00:19:16.200 00:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.200 00:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.766 00:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.766 00:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:16.766 00:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:16.766 00:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:17.024 [2024-07-25 00:03:12.831915] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.024 BaseBdev1 00:19:17.024 00:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:17.024 00:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:17.024 00:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:17.024 00:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:17.024 00:03:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:17.024 00:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:17.024 00:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:17.283 00:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:17.541 [ 00:19:17.541 { 00:19:17.541 "name": "BaseBdev1", 00:19:17.541 "aliases": [ 00:19:17.541 "0d6321f5-b69e-423f-9c0c-d0eed97195d5" 00:19:17.541 ], 00:19:17.541 "product_name": "Malloc disk", 00:19:17.541 "block_size": 512, 00:19:17.541 "num_blocks": 65536, 00:19:17.541 "uuid": "0d6321f5-b69e-423f-9c0c-d0eed97195d5", 00:19:17.541 "assigned_rate_limits": { 00:19:17.541 "rw_ios_per_sec": 0, 00:19:17.541 "rw_mbytes_per_sec": 0, 00:19:17.541 "r_mbytes_per_sec": 0, 00:19:17.541 "w_mbytes_per_sec": 0 00:19:17.541 }, 00:19:17.541 "claimed": true, 00:19:17.541 "claim_type": "exclusive_write", 00:19:17.541 "zoned": false, 00:19:17.541 "supported_io_types": { 00:19:17.541 "read": true, 00:19:17.541 "write": true, 00:19:17.542 "unmap": true, 00:19:17.542 "flush": true, 00:19:17.542 "reset": true, 00:19:17.542 "nvme_admin": false, 00:19:17.542 "nvme_io": false, 00:19:17.542 "nvme_io_md": false, 00:19:17.542 "write_zeroes": true, 00:19:17.542 "zcopy": true, 00:19:17.542 "get_zone_info": false, 00:19:17.542 "zone_management": false, 00:19:17.542 "zone_append": false, 00:19:17.542 "compare": false, 00:19:17.542 "compare_and_write": false, 00:19:17.542 "abort": true, 00:19:17.542 "seek_hole": false, 00:19:17.542 "seek_data": false, 00:19:17.542 "copy": true, 00:19:17.542 "nvme_iov_md": false 00:19:17.542 }, 00:19:17.542 "memory_domains": [ 00:19:17.542 { 00:19:17.542 "dma_device_id": "system", 00:19:17.542 "dma_device_type": 1 00:19:17.542 }, 00:19:17.542 { 00:19:17.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.542 "dma_device_type": 2 00:19:17.542 } 00:19:17.542 ], 00:19:17.542 "driver_specific": {} 00:19:17.542 } 00:19:17.542 ] 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
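The verify_raid_bdev_state helper expanded in this trace reduces to one RPC plus a jq filter: fetch all raid bdevs, select the entry named Existed_Raid, and compare its fields (state, raid_level, num_base_bdevs_discovered, and so on) against the expected values. A hedged sketch of that check, reusing the RPC command and jq filter verbatim from the trace; the .state projection at the end is an illustrative addition, not part of the original script:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Fetch every raid bdev and keep only Existed_Raid, exactly as the test does.
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # Compare the reported state against the expected one (here: configuring).
    state=$(jq -r '.state' <<< "$info")
    [[ "$state" == "configuring" ]] || echo "unexpected state: $state"
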
00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.542 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.800 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.800 "name": "Existed_Raid", 00:19:17.800 "uuid": "e929fb1d-373d-45bf-8083-5e082903feab", 00:19:17.800 "strip_size_kb": 0, 00:19:17.800 "state": "configuring", 00:19:17.800 "raid_level": "raid1", 00:19:17.800 "superblock": true, 00:19:17.800 "num_base_bdevs": 3, 00:19:17.800 "num_base_bdevs_discovered": 2, 00:19:17.800 "num_base_bdevs_operational": 3, 00:19:17.800 "base_bdevs_list": [ 00:19:17.800 { 00:19:17.800 "name": "BaseBdev1", 00:19:17.800 "uuid": "0d6321f5-b69e-423f-9c0c-d0eed97195d5", 00:19:17.800 "is_configured": true, 00:19:17.800 "data_offset": 2048, 00:19:17.800 "data_size": 63488 00:19:17.800 }, 00:19:17.800 { 00:19:17.800 "name": null, 00:19:17.800 "uuid": "3535325b-9a4b-4049-9246-b3de7c6c1b1a", 00:19:17.800 "is_configured": false, 00:19:17.800 "data_offset": 2048, 00:19:17.800 "data_size": 63488 00:19:17.800 }, 00:19:17.800 { 00:19:17.800 "name": "BaseBdev3", 00:19:17.800 "uuid": "861b1ec1-4022-49b6-9436-686a332dbc5c", 00:19:17.800 "is_configured": true, 00:19:17.800 "data_offset": 2048, 00:19:17.800 "data_size": 63488 00:19:17.800 } 00:19:17.800 ] 00:19:17.800 }' 00:19:17.800 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.800 00:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.057 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.057 00:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:18.314 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:18.314 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:18.573 [2024-07-25 00:03:14.296414] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.573 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.831 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.831 "name": "Existed_Raid", 00:19:18.831 "uuid": "e929fb1d-373d-45bf-8083-5e082903feab", 00:19:18.831 "strip_size_kb": 0, 00:19:18.831 "state": "configuring", 00:19:18.831 "raid_level": "raid1", 00:19:18.831 "superblock": true, 00:19:18.831 "num_base_bdevs": 3, 00:19:18.831 "num_base_bdevs_discovered": 1, 00:19:18.831 "num_base_bdevs_operational": 3, 00:19:18.831 "base_bdevs_list": [ 00:19:18.831 { 00:19:18.831 "name": "BaseBdev1", 00:19:18.831 "uuid": "0d6321f5-b69e-423f-9c0c-d0eed97195d5", 00:19:18.831 "is_configured": true, 00:19:18.831 "data_offset": 2048, 00:19:18.831 "data_size": 63488 00:19:18.831 }, 00:19:18.831 { 00:19:18.831 "name": null, 00:19:18.831 "uuid": "3535325b-9a4b-4049-9246-b3de7c6c1b1a", 00:19:18.831 "is_configured": false, 00:19:18.831 "data_offset": 2048, 00:19:18.831 "data_size": 63488 00:19:18.831 }, 00:19:18.831 { 00:19:18.831 "name": null, 00:19:18.831 "uuid": "861b1ec1-4022-49b6-9436-686a332dbc5c", 00:19:18.831 "is_configured": false, 00:19:18.831 "data_offset": 2048, 00:19:18.831 "data_size": 63488 00:19:18.831 } 00:19:18.831 ] 00:19:18.831 }' 00:19:18.831 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.831 00:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.089 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.089 00:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:19.347 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:19.347 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:19.606 [2024-07-25 00:03:15.380724] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.606 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.864 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.864 "name": "Existed_Raid", 00:19:19.864 "uuid": "e929fb1d-373d-45bf-8083-5e082903feab", 00:19:19.864 "strip_size_kb": 0, 00:19:19.864 "state": "configuring", 00:19:19.864 "raid_level": "raid1", 00:19:19.864 "superblock": true, 00:19:19.864 "num_base_bdevs": 3, 00:19:19.864 "num_base_bdevs_discovered": 2, 00:19:19.864 "num_base_bdevs_operational": 3, 00:19:19.864 "base_bdevs_list": [ 00:19:19.864 { 00:19:19.864 "name": "BaseBdev1", 00:19:19.864 "uuid": "0d6321f5-b69e-423f-9c0c-d0eed97195d5", 00:19:19.864 "is_configured": true, 00:19:19.864 "data_offset": 2048, 00:19:19.864 "data_size": 63488 00:19:19.864 }, 00:19:19.864 { 00:19:19.864 "name": null, 00:19:19.864 "uuid": "3535325b-9a4b-4049-9246-b3de7c6c1b1a", 00:19:19.864 "is_configured": false, 00:19:19.864 "data_offset": 2048, 00:19:19.864 "data_size": 63488 00:19:19.864 }, 00:19:19.864 { 00:19:19.864 "name": "BaseBdev3", 00:19:19.864 "uuid": "861b1ec1-4022-49b6-9436-686a332dbc5c", 00:19:19.864 "is_configured": true, 00:19:19.864 "data_offset": 2048, 00:19:19.864 "data_size": 63488 00:19:19.864 } 00:19:19.864 ] 00:19:19.864 }' 00:19:19.864 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.864 00:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.122 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:20.122 00:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.380 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:20.380 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:20.639 [2024-07-25 00:03:16.417069] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.898 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.156 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.156 "name": "Existed_Raid", 00:19:21.156 "uuid": "e929fb1d-373d-45bf-8083-5e082903feab", 00:19:21.156 "strip_size_kb": 0, 00:19:21.156 "state": "configuring", 00:19:21.156 "raid_level": "raid1", 00:19:21.156 "superblock": true, 00:19:21.156 "num_base_bdevs": 3, 00:19:21.156 "num_base_bdevs_discovered": 1, 00:19:21.156 "num_base_bdevs_operational": 3, 00:19:21.156 "base_bdevs_list": [ 00:19:21.156 { 00:19:21.156 "name": null, 00:19:21.156 "uuid": "0d6321f5-b69e-423f-9c0c-d0eed97195d5", 00:19:21.156 "is_configured": false, 00:19:21.156 "data_offset": 2048, 00:19:21.156 "data_size": 63488 00:19:21.156 }, 00:19:21.156 { 00:19:21.156 "name": null, 00:19:21.156 "uuid": "3535325b-9a4b-4049-9246-b3de7c6c1b1a", 00:19:21.156 "is_configured": false, 00:19:21.156 "data_offset": 2048, 00:19:21.156 "data_size": 63488 00:19:21.156 }, 00:19:21.156 { 00:19:21.156 "name": "BaseBdev3", 00:19:21.156 "uuid": "861b1ec1-4022-49b6-9436-686a332dbc5c", 00:19:21.156 "is_configured": true, 00:19:21.156 "data_offset": 2048, 00:19:21.156 "data_size": 63488 00:19:21.156 } 00:19:21.156 ] 00:19:21.156 }' 00:19:21.156 00:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.156 00:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.415 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.415 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:21.673 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:21.673 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:21.932 [2024-07-25 00:03:17.583663] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:21.932 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:21.932 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:21.932 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:21.932 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:21.932 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:21.932 00:03:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:21.932 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.932 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.932 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.932 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.932 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.932 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.191 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.191 "name": "Existed_Raid", 00:19:22.191 "uuid": "e929fb1d-373d-45bf-8083-5e082903feab", 00:19:22.191 "strip_size_kb": 0, 00:19:22.191 "state": "configuring", 00:19:22.191 "raid_level": "raid1", 00:19:22.191 "superblock": true, 00:19:22.191 "num_base_bdevs": 3, 00:19:22.191 "num_base_bdevs_discovered": 2, 00:19:22.191 "num_base_bdevs_operational": 3, 00:19:22.191 "base_bdevs_list": [ 00:19:22.191 { 00:19:22.191 "name": null, 00:19:22.191 "uuid": "0d6321f5-b69e-423f-9c0c-d0eed97195d5", 00:19:22.191 "is_configured": false, 00:19:22.191 "data_offset": 2048, 00:19:22.191 "data_size": 63488 00:19:22.191 }, 00:19:22.191 { 00:19:22.191 "name": "BaseBdev2", 00:19:22.191 "uuid": "3535325b-9a4b-4049-9246-b3de7c6c1b1a", 00:19:22.191 "is_configured": true, 00:19:22.191 "data_offset": 2048, 00:19:22.191 "data_size": 63488 00:19:22.191 }, 00:19:22.191 { 00:19:22.191 "name": "BaseBdev3", 00:19:22.191 "uuid": "861b1ec1-4022-49b6-9436-686a332dbc5c", 00:19:22.192 "is_configured": true, 00:19:22.192 "data_offset": 2048, 00:19:22.192 "data_size": 63488 00:19:22.192 } 00:19:22.192 ] 00:19:22.192 }' 00:19:22.192 00:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.192 00:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.449 00:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.449 00:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:22.707 00:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:22.707 00:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.707 00:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:22.965 00:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0d6321f5-b69e-423f-9c0c-d0eed97195d5 00:19:23.224 [2024-07-25 00:03:18.949107] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:23.224 [2024-07-25 00:03:18.949366] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:19:23.224 [2024-07-25 
00:03:18.949384] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:23.224 [2024-07-25 00:03:18.949498] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 00:19:23.224 [2024-07-25 00:03:18.949934] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:19:23.224 [2024-07-25 00:03:18.949958] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008a80 00:19:23.224 NewBaseBdev 00:19:23.224 [2024-07-25 00:03:18.950124] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.224 00:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:23.224 00:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:23.224 00:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:23.224 00:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:23.224 00:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:23.224 00:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:23.224 00:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:23.490 00:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:23.749 [ 00:19:23.749 { 00:19:23.749 "name": "NewBaseBdev", 00:19:23.749 "aliases": [ 00:19:23.749 "0d6321f5-b69e-423f-9c0c-d0eed97195d5" 00:19:23.749 ], 00:19:23.749 "product_name": "Malloc disk", 00:19:23.749 "block_size": 512, 00:19:23.749 "num_blocks": 65536, 00:19:23.749 "uuid": "0d6321f5-b69e-423f-9c0c-d0eed97195d5", 00:19:23.749 "assigned_rate_limits": { 00:19:23.749 "rw_ios_per_sec": 0, 00:19:23.749 "rw_mbytes_per_sec": 0, 00:19:23.749 "r_mbytes_per_sec": 0, 00:19:23.749 "w_mbytes_per_sec": 0 00:19:23.749 }, 00:19:23.749 "claimed": true, 00:19:23.749 "claim_type": "exclusive_write", 00:19:23.749 "zoned": false, 00:19:23.749 "supported_io_types": { 00:19:23.749 "read": true, 00:19:23.749 "write": true, 00:19:23.749 "unmap": true, 00:19:23.749 "flush": true, 00:19:23.749 "reset": true, 00:19:23.749 "nvme_admin": false, 00:19:23.749 "nvme_io": false, 00:19:23.749 "nvme_io_md": false, 00:19:23.749 "write_zeroes": true, 00:19:23.749 "zcopy": true, 00:19:23.749 "get_zone_info": false, 00:19:23.749 "zone_management": false, 00:19:23.749 "zone_append": false, 00:19:23.749 "compare": false, 00:19:23.749 "compare_and_write": false, 00:19:23.749 "abort": true, 00:19:23.749 "seek_hole": false, 00:19:23.749 "seek_data": false, 00:19:23.749 "copy": true, 00:19:23.749 "nvme_iov_md": false 00:19:23.749 }, 00:19:23.749 "memory_domains": [ 00:19:23.749 { 00:19:23.749 "dma_device_id": "system", 00:19:23.749 "dma_device_type": 1 00:19:23.749 }, 00:19:23.749 { 00:19:23.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.749 "dma_device_type": 2 00:19:23.749 } 00:19:23.749 ], 00:19:23.749 "driver_specific": {} 00:19:23.749 } 00:19:23.749 ] 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:23.749 00:03:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.749 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.007 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.007 "name": "Existed_Raid", 00:19:24.007 "uuid": "e929fb1d-373d-45bf-8083-5e082903feab", 00:19:24.007 "strip_size_kb": 0, 00:19:24.007 "state": "online", 00:19:24.007 "raid_level": "raid1", 00:19:24.007 "superblock": true, 00:19:24.007 "num_base_bdevs": 3, 00:19:24.007 "num_base_bdevs_discovered": 3, 00:19:24.007 "num_base_bdevs_operational": 3, 00:19:24.007 "base_bdevs_list": [ 00:19:24.007 { 00:19:24.007 "name": "NewBaseBdev", 00:19:24.007 "uuid": "0d6321f5-b69e-423f-9c0c-d0eed97195d5", 00:19:24.007 "is_configured": true, 00:19:24.007 "data_offset": 2048, 00:19:24.007 "data_size": 63488 00:19:24.007 }, 00:19:24.007 { 00:19:24.007 "name": "BaseBdev2", 00:19:24.007 "uuid": "3535325b-9a4b-4049-9246-b3de7c6c1b1a", 00:19:24.007 "is_configured": true, 00:19:24.007 "data_offset": 2048, 00:19:24.007 "data_size": 63488 00:19:24.007 }, 00:19:24.007 { 00:19:24.007 "name": "BaseBdev3", 00:19:24.007 "uuid": "861b1ec1-4022-49b6-9436-686a332dbc5c", 00:19:24.007 "is_configured": true, 00:19:24.007 "data_offset": 2048, 00:19:24.007 "data_size": 63488 00:19:24.007 } 00:19:24.007 ] 00:19:24.007 }' 00:19:24.007 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.007 00:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.265 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:24.265 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:24.265 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:24.265 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:24.265 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:24.265 00:03:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@198 -- # local name 00:19:24.266 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:24.266 00:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:24.524 [2024-07-25 00:03:20.177870] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.524 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:24.524 "name": "Existed_Raid", 00:19:24.524 "aliases": [ 00:19:24.524 "e929fb1d-373d-45bf-8083-5e082903feab" 00:19:24.524 ], 00:19:24.524 "product_name": "Raid Volume", 00:19:24.524 "block_size": 512, 00:19:24.524 "num_blocks": 63488, 00:19:24.524 "uuid": "e929fb1d-373d-45bf-8083-5e082903feab", 00:19:24.524 "assigned_rate_limits": { 00:19:24.524 "rw_ios_per_sec": 0, 00:19:24.524 "rw_mbytes_per_sec": 0, 00:19:24.524 "r_mbytes_per_sec": 0, 00:19:24.524 "w_mbytes_per_sec": 0 00:19:24.524 }, 00:19:24.524 "claimed": false, 00:19:24.524 "zoned": false, 00:19:24.524 "supported_io_types": { 00:19:24.524 "read": true, 00:19:24.524 "write": true, 00:19:24.524 "unmap": false, 00:19:24.524 "flush": false, 00:19:24.524 "reset": true, 00:19:24.524 "nvme_admin": false, 00:19:24.524 "nvme_io": false, 00:19:24.524 "nvme_io_md": false, 00:19:24.524 "write_zeroes": true, 00:19:24.524 "zcopy": false, 00:19:24.524 "get_zone_info": false, 00:19:24.524 "zone_management": false, 00:19:24.524 "zone_append": false, 00:19:24.524 "compare": false, 00:19:24.524 "compare_and_write": false, 00:19:24.524 "abort": false, 00:19:24.524 "seek_hole": false, 00:19:24.524 "seek_data": false, 00:19:24.524 "copy": false, 00:19:24.524 "nvme_iov_md": false 00:19:24.524 }, 00:19:24.524 "memory_domains": [ 00:19:24.524 { 00:19:24.524 "dma_device_id": "system", 00:19:24.524 "dma_device_type": 1 00:19:24.524 }, 00:19:24.524 { 00:19:24.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.525 "dma_device_type": 2 00:19:24.525 }, 00:19:24.525 { 00:19:24.525 "dma_device_id": "system", 00:19:24.525 "dma_device_type": 1 00:19:24.525 }, 00:19:24.525 { 00:19:24.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.525 "dma_device_type": 2 00:19:24.525 }, 00:19:24.525 { 00:19:24.525 "dma_device_id": "system", 00:19:24.525 "dma_device_type": 1 00:19:24.525 }, 00:19:24.525 { 00:19:24.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.525 "dma_device_type": 2 00:19:24.525 } 00:19:24.525 ], 00:19:24.525 "driver_specific": { 00:19:24.525 "raid": { 00:19:24.525 "uuid": "e929fb1d-373d-45bf-8083-5e082903feab", 00:19:24.525 "strip_size_kb": 0, 00:19:24.525 "state": "online", 00:19:24.525 "raid_level": "raid1", 00:19:24.525 "superblock": true, 00:19:24.525 "num_base_bdevs": 3, 00:19:24.525 "num_base_bdevs_discovered": 3, 00:19:24.525 "num_base_bdevs_operational": 3, 00:19:24.525 "base_bdevs_list": [ 00:19:24.525 { 00:19:24.525 "name": "NewBaseBdev", 00:19:24.525 "uuid": "0d6321f5-b69e-423f-9c0c-d0eed97195d5", 00:19:24.525 "is_configured": true, 00:19:24.525 "data_offset": 2048, 00:19:24.525 "data_size": 63488 00:19:24.525 }, 00:19:24.525 { 00:19:24.525 "name": "BaseBdev2", 00:19:24.525 "uuid": "3535325b-9a4b-4049-9246-b3de7c6c1b1a", 00:19:24.525 "is_configured": true, 00:19:24.525 "data_offset": 2048, 00:19:24.525 "data_size": 63488 00:19:24.525 }, 00:19:24.525 { 00:19:24.525 "name": "BaseBdev3", 00:19:24.525 "uuid": "861b1ec1-4022-49b6-9436-686a332dbc5c", 00:19:24.525 "is_configured": true, 
00:19:24.525 "data_offset": 2048, 00:19:24.525 "data_size": 63488 00:19:24.525 } 00:19:24.525 ] 00:19:24.525 } 00:19:24.525 } 00:19:24.525 }' 00:19:24.525 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:24.525 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:24.525 BaseBdev2 00:19:24.525 BaseBdev3' 00:19:24.525 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.525 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:24.525 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:24.783 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:24.783 "name": "NewBaseBdev", 00:19:24.783 "aliases": [ 00:19:24.783 "0d6321f5-b69e-423f-9c0c-d0eed97195d5" 00:19:24.783 ], 00:19:24.783 "product_name": "Malloc disk", 00:19:24.783 "block_size": 512, 00:19:24.783 "num_blocks": 65536, 00:19:24.783 "uuid": "0d6321f5-b69e-423f-9c0c-d0eed97195d5", 00:19:24.783 "assigned_rate_limits": { 00:19:24.783 "rw_ios_per_sec": 0, 00:19:24.783 "rw_mbytes_per_sec": 0, 00:19:24.783 "r_mbytes_per_sec": 0, 00:19:24.783 "w_mbytes_per_sec": 0 00:19:24.783 }, 00:19:24.783 "claimed": true, 00:19:24.783 "claim_type": "exclusive_write", 00:19:24.783 "zoned": false, 00:19:24.783 "supported_io_types": { 00:19:24.783 "read": true, 00:19:24.783 "write": true, 00:19:24.783 "unmap": true, 00:19:24.783 "flush": true, 00:19:24.783 "reset": true, 00:19:24.783 "nvme_admin": false, 00:19:24.783 "nvme_io": false, 00:19:24.783 "nvme_io_md": false, 00:19:24.783 "write_zeroes": true, 00:19:24.783 "zcopy": true, 00:19:24.783 "get_zone_info": false, 00:19:24.783 "zone_management": false, 00:19:24.783 "zone_append": false, 00:19:24.783 "compare": false, 00:19:24.783 "compare_and_write": false, 00:19:24.783 "abort": true, 00:19:24.783 "seek_hole": false, 00:19:24.783 "seek_data": false, 00:19:24.783 "copy": true, 00:19:24.783 "nvme_iov_md": false 00:19:24.783 }, 00:19:24.783 "memory_domains": [ 00:19:24.783 { 00:19:24.783 "dma_device_id": "system", 00:19:24.783 "dma_device_type": 1 00:19:24.783 }, 00:19:24.783 { 00:19:24.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.783 "dma_device_type": 2 00:19:24.783 } 00:19:24.783 ], 00:19:24.783 "driver_specific": {} 00:19:24.783 }' 00:19:24.783 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.783 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.783 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:24.783 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.783 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.783 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:24.783 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.784 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.784 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:19:24.784 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.784 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.784 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:24.784 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.784 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:24.784 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.042 "name": "BaseBdev2", 00:19:25.042 "aliases": [ 00:19:25.042 "3535325b-9a4b-4049-9246-b3de7c6c1b1a" 00:19:25.042 ], 00:19:25.042 "product_name": "Malloc disk", 00:19:25.042 "block_size": 512, 00:19:25.042 "num_blocks": 65536, 00:19:25.042 "uuid": "3535325b-9a4b-4049-9246-b3de7c6c1b1a", 00:19:25.042 "assigned_rate_limits": { 00:19:25.042 "rw_ios_per_sec": 0, 00:19:25.042 "rw_mbytes_per_sec": 0, 00:19:25.042 "r_mbytes_per_sec": 0, 00:19:25.042 "w_mbytes_per_sec": 0 00:19:25.042 }, 00:19:25.042 "claimed": true, 00:19:25.042 "claim_type": "exclusive_write", 00:19:25.042 "zoned": false, 00:19:25.042 "supported_io_types": { 00:19:25.042 "read": true, 00:19:25.042 "write": true, 00:19:25.042 "unmap": true, 00:19:25.042 "flush": true, 00:19:25.042 "reset": true, 00:19:25.042 "nvme_admin": false, 00:19:25.042 "nvme_io": false, 00:19:25.042 "nvme_io_md": false, 00:19:25.042 "write_zeroes": true, 00:19:25.042 "zcopy": true, 00:19:25.042 "get_zone_info": false, 00:19:25.042 "zone_management": false, 00:19:25.042 "zone_append": false, 00:19:25.042 "compare": false, 00:19:25.042 "compare_and_write": false, 00:19:25.042 "abort": true, 00:19:25.042 "seek_hole": false, 00:19:25.042 "seek_data": false, 00:19:25.042 "copy": true, 00:19:25.042 "nvme_iov_md": false 00:19:25.042 }, 00:19:25.042 "memory_domains": [ 00:19:25.042 { 00:19:25.042 "dma_device_id": "system", 00:19:25.042 "dma_device_type": 1 00:19:25.042 }, 00:19:25.042 { 00:19:25.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.042 "dma_device_type": 2 00:19:25.042 } 00:19:25.042 ], 00:19:25.042 "driver_specific": {} 00:19:25.042 }' 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.042 00:03:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.042 00:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:25.300 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.300 "name": "BaseBdev3", 00:19:25.300 "aliases": [ 00:19:25.300 "861b1ec1-4022-49b6-9436-686a332dbc5c" 00:19:25.300 ], 00:19:25.300 "product_name": "Malloc disk", 00:19:25.300 "block_size": 512, 00:19:25.300 "num_blocks": 65536, 00:19:25.300 "uuid": "861b1ec1-4022-49b6-9436-686a332dbc5c", 00:19:25.300 "assigned_rate_limits": { 00:19:25.300 "rw_ios_per_sec": 0, 00:19:25.300 "rw_mbytes_per_sec": 0, 00:19:25.300 "r_mbytes_per_sec": 0, 00:19:25.300 "w_mbytes_per_sec": 0 00:19:25.300 }, 00:19:25.300 "claimed": true, 00:19:25.300 "claim_type": "exclusive_write", 00:19:25.300 "zoned": false, 00:19:25.300 "supported_io_types": { 00:19:25.300 "read": true, 00:19:25.300 "write": true, 00:19:25.300 "unmap": true, 00:19:25.300 "flush": true, 00:19:25.300 "reset": true, 00:19:25.300 "nvme_admin": false, 00:19:25.300 "nvme_io": false, 00:19:25.300 "nvme_io_md": false, 00:19:25.300 "write_zeroes": true, 00:19:25.300 "zcopy": true, 00:19:25.300 "get_zone_info": false, 00:19:25.300 "zone_management": false, 00:19:25.300 "zone_append": false, 00:19:25.300 "compare": false, 00:19:25.300 "compare_and_write": false, 00:19:25.300 "abort": true, 00:19:25.300 "seek_hole": false, 00:19:25.300 "seek_data": false, 00:19:25.300 "copy": true, 00:19:25.300 "nvme_iov_md": false 00:19:25.300 }, 00:19:25.300 "memory_domains": [ 00:19:25.300 { 00:19:25.300 "dma_device_id": "system", 00:19:25.300 "dma_device_type": 1 00:19:25.300 }, 00:19:25.300 { 00:19:25.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.300 "dma_device_type": 2 00:19:25.300 } 00:19:25.300 ], 00:19:25.300 "driver_specific": {} 00:19:25.300 }' 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:25.301 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:25.559 [2024-07-25 00:03:21.385788] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.559 [2024-07-25 00:03:21.385840] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.559 [2024-07-25 00:03:21.385917] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.559 [2024-07-25 00:03:21.386237] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.559 [2024-07-25 00:03:21.386253] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name Existed_Raid, state offline 00:19:25.559 00:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 86080 00:19:25.559 00:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86080 ']' 00:19:25.559 00:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 86080 00:19:25.559 00:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:25.559 00:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.559 00:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86080 00:19:25.817 killing process with pid 86080 00:19:25.817 00:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:25.817 00:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:25.817 00:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86080' 00:19:25.817 00:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 86080 00:19:25.817 [2024-07-25 00:03:21.438675] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.817 00:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 86080 00:19:25.817 [2024-07-25 00:03:21.668202] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.194 00:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:27.194 00:19:27.194 real 0m24.340s 00:19:27.194 user 0m42.459s 00:19:27.194 sys 0m3.833s 00:19:27.194 00:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.194 ************************************ 00:19:27.194 END TEST raid_state_function_test_sb 00:19:27.194 ************************************ 00:19:27.194 00:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.194 00:03:22 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:19:27.194 00:03:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:27.194 00:03:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.194 00:03:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.194 ************************************ 00:19:27.194 START TEST raid_superblock_test 00:19:27.194 ************************************ 00:19:27.194 00:03:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:19:27.194 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=86962 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 86962 /var/tmp/spdk-raid.sock 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 86962 ']' 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.195 00:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.195 [2024-07-25 00:03:22.886445] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
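Note on the trace above: raid_superblock_test launches a dedicated bdev_svc app on a private RPC socket and drives it with scripts/rpc.py. A minimal sketch of that setup, assuming the repo paths, socket, and bdev names recorded in this log (the 32 MiB / 512-byte-block sizes come from the bdev_malloc_create calls that follow):
  # start the app under test with raid debug logging, as the harness does (sh@426)
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  # create a malloc base bdev and wrap it in a passthru bdev (repeated for malloc2/pt2, malloc3/pt3)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
Wrapping each malloc bdev in a passthru bdev (pt1..pt3) lets the test delete and re-create a base bdev independently of the malloc device underneath it, which is what the superblock re-examine checks later in this trace rely on.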
00:19:27.195 [2024-07-25 00:03:22.886632] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86962 ] 00:19:27.195 [2024-07-25 00:03:23.061083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.454 [2024-07-25 00:03:23.274033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.710 [2024-07-25 00:03:23.447675] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.968 00:03:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:27.968 00:03:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:19:27.968 00:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:19:27.968 00:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:27.968 00:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:19:27.968 00:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:19:27.968 00:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:27.968 00:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.968 00:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.968 00:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.968 00:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:28.226 malloc1 00:19:28.226 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:28.483 [2024-07-25 00:03:24.260026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:28.483 [2024-07-25 00:03:24.260130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.483 [2024-07-25 00:03:24.260166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:19:28.483 [2024-07-25 00:03:24.260180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.483 [2024-07-25 00:03:24.262697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.483 [2024-07-25 00:03:24.262744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:28.483 pt1 00:19:28.483 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:28.483 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:28.483 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:19:28.483 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:19:28.483 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:28.484 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:19:28.484 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:28.484 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:28.484 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:28.742 malloc2 00:19:28.742 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:29.001 [2024-07-25 00:03:24.754996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:29.001 [2024-07-25 00:03:24.755347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.001 [2024-07-25 00:03:24.755396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:19:29.001 [2024-07-25 00:03:24.755414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.001 [2024-07-25 00:03:24.758187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.001 [2024-07-25 00:03:24.758260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:29.001 pt2 00:19:29.001 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:29.001 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:29.001 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:19:29.001 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:19:29.001 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:29.001 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:29.001 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:29.001 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:29.001 00:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:29.259 malloc3 00:19:29.259 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:29.518 [2024-07-25 00:03:25.225692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:29.518 [2024-07-25 00:03:25.225786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.518 [2024-07-25 00:03:25.225855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:19:29.518 [2024-07-25 00:03:25.225888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.518 [2024-07-25 00:03:25.228613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.518 [2024-07-25 00:03:25.228658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:29.518 pt3 00:19:29.518 
00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:29.518 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:29.518 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:29.776 [2024-07-25 00:03:25.449770] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:29.776 [2024-07-25 00:03:25.452007] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:29.776 [2024-07-25 00:03:25.452093] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:29.776 [2024-07-25 00:03:25.452319] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:19:29.776 [2024-07-25 00:03:25.452341] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:29.776 [2024-07-25 00:03:25.452482] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:19:29.776 [2024-07-25 00:03:25.452898] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:19:29.776 [2024-07-25 00:03:25.452916] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:19:29.776 [2024-07-25 00:03:25.453092] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.776 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.035 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.035 "name": "raid_bdev1", 00:19:30.035 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:30.035 "strip_size_kb": 0, 00:19:30.035 "state": "online", 00:19:30.035 "raid_level": "raid1", 00:19:30.035 "superblock": true, 00:19:30.035 "num_base_bdevs": 3, 00:19:30.035 "num_base_bdevs_discovered": 3, 00:19:30.035 "num_base_bdevs_operational": 3, 00:19:30.035 "base_bdevs_list": [ 00:19:30.035 { 00:19:30.035 "name": "pt1", 00:19:30.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:30.035 
"is_configured": true, 00:19:30.035 "data_offset": 2048, 00:19:30.035 "data_size": 63488 00:19:30.035 }, 00:19:30.035 { 00:19:30.035 "name": "pt2", 00:19:30.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.035 "is_configured": true, 00:19:30.035 "data_offset": 2048, 00:19:30.035 "data_size": 63488 00:19:30.035 }, 00:19:30.035 { 00:19:30.035 "name": "pt3", 00:19:30.035 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:30.035 "is_configured": true, 00:19:30.035 "data_offset": 2048, 00:19:30.035 "data_size": 63488 00:19:30.035 } 00:19:30.035 ] 00:19:30.035 }' 00:19:30.035 00:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.035 00:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.293 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:19:30.293 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:30.293 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:30.293 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:30.293 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:30.293 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:30.294 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:30.294 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:30.552 [2024-07-25 00:03:26.214248] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.552 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:30.552 "name": "raid_bdev1", 00:19:30.552 "aliases": [ 00:19:30.552 "1f20ad3d-93f9-4a59-8062-25e6612807c4" 00:19:30.552 ], 00:19:30.552 "product_name": "Raid Volume", 00:19:30.552 "block_size": 512, 00:19:30.552 "num_blocks": 63488, 00:19:30.552 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:30.552 "assigned_rate_limits": { 00:19:30.552 "rw_ios_per_sec": 0, 00:19:30.552 "rw_mbytes_per_sec": 0, 00:19:30.552 "r_mbytes_per_sec": 0, 00:19:30.552 "w_mbytes_per_sec": 0 00:19:30.552 }, 00:19:30.552 "claimed": false, 00:19:30.552 "zoned": false, 00:19:30.552 "supported_io_types": { 00:19:30.552 "read": true, 00:19:30.552 "write": true, 00:19:30.552 "unmap": false, 00:19:30.552 "flush": false, 00:19:30.552 "reset": true, 00:19:30.552 "nvme_admin": false, 00:19:30.552 "nvme_io": false, 00:19:30.552 "nvme_io_md": false, 00:19:30.552 "write_zeroes": true, 00:19:30.552 "zcopy": false, 00:19:30.552 "get_zone_info": false, 00:19:30.552 "zone_management": false, 00:19:30.552 "zone_append": false, 00:19:30.552 "compare": false, 00:19:30.552 "compare_and_write": false, 00:19:30.552 "abort": false, 00:19:30.552 "seek_hole": false, 00:19:30.552 "seek_data": false, 00:19:30.552 "copy": false, 00:19:30.552 "nvme_iov_md": false 00:19:30.552 }, 00:19:30.552 "memory_domains": [ 00:19:30.552 { 00:19:30.552 "dma_device_id": "system", 00:19:30.552 "dma_device_type": 1 00:19:30.552 }, 00:19:30.552 { 00:19:30.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.552 "dma_device_type": 2 00:19:30.552 }, 00:19:30.552 { 00:19:30.552 "dma_device_id": "system", 00:19:30.552 "dma_device_type": 1 00:19:30.552 }, 00:19:30.552 { 
00:19:30.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.552 "dma_device_type": 2 00:19:30.552 }, 00:19:30.552 { 00:19:30.552 "dma_device_id": "system", 00:19:30.552 "dma_device_type": 1 00:19:30.552 }, 00:19:30.552 { 00:19:30.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.552 "dma_device_type": 2 00:19:30.552 } 00:19:30.552 ], 00:19:30.552 "driver_specific": { 00:19:30.552 "raid": { 00:19:30.552 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:30.552 "strip_size_kb": 0, 00:19:30.552 "state": "online", 00:19:30.552 "raid_level": "raid1", 00:19:30.552 "superblock": true, 00:19:30.552 "num_base_bdevs": 3, 00:19:30.552 "num_base_bdevs_discovered": 3, 00:19:30.552 "num_base_bdevs_operational": 3, 00:19:30.552 "base_bdevs_list": [ 00:19:30.552 { 00:19:30.552 "name": "pt1", 00:19:30.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:30.552 "is_configured": true, 00:19:30.552 "data_offset": 2048, 00:19:30.553 "data_size": 63488 00:19:30.553 }, 00:19:30.553 { 00:19:30.553 "name": "pt2", 00:19:30.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.553 "is_configured": true, 00:19:30.553 "data_offset": 2048, 00:19:30.553 "data_size": 63488 00:19:30.553 }, 00:19:30.553 { 00:19:30.553 "name": "pt3", 00:19:30.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:30.553 "is_configured": true, 00:19:30.553 "data_offset": 2048, 00:19:30.553 "data_size": 63488 00:19:30.553 } 00:19:30.553 ] 00:19:30.553 } 00:19:30.553 } 00:19:30.553 }' 00:19:30.553 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:30.553 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:30.553 pt2 00:19:30.553 pt3' 00:19:30.553 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:30.553 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:30.553 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:30.812 "name": "pt1", 00:19:30.812 "aliases": [ 00:19:30.812 "00000000-0000-0000-0000-000000000001" 00:19:30.812 ], 00:19:30.812 "product_name": "passthru", 00:19:30.812 "block_size": 512, 00:19:30.812 "num_blocks": 65536, 00:19:30.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:30.812 "assigned_rate_limits": { 00:19:30.812 "rw_ios_per_sec": 0, 00:19:30.812 "rw_mbytes_per_sec": 0, 00:19:30.812 "r_mbytes_per_sec": 0, 00:19:30.812 "w_mbytes_per_sec": 0 00:19:30.812 }, 00:19:30.812 "claimed": true, 00:19:30.812 "claim_type": "exclusive_write", 00:19:30.812 "zoned": false, 00:19:30.812 "supported_io_types": { 00:19:30.812 "read": true, 00:19:30.812 "write": true, 00:19:30.812 "unmap": true, 00:19:30.812 "flush": true, 00:19:30.812 "reset": true, 00:19:30.812 "nvme_admin": false, 00:19:30.812 "nvme_io": false, 00:19:30.812 "nvme_io_md": false, 00:19:30.812 "write_zeroes": true, 00:19:30.812 "zcopy": true, 00:19:30.812 "get_zone_info": false, 00:19:30.812 "zone_management": false, 00:19:30.812 "zone_append": false, 00:19:30.812 "compare": false, 00:19:30.812 "compare_and_write": false, 00:19:30.812 "abort": true, 00:19:30.812 "seek_hole": false, 00:19:30.812 "seek_data": false, 00:19:30.812 "copy": true, 00:19:30.812 "nvme_iov_md": false 00:19:30.812 }, 
00:19:30.812 "memory_domains": [ 00:19:30.812 { 00:19:30.812 "dma_device_id": "system", 00:19:30.812 "dma_device_type": 1 00:19:30.812 }, 00:19:30.812 { 00:19:30.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.812 "dma_device_type": 2 00:19:30.812 } 00:19:30.812 ], 00:19:30.812 "driver_specific": { 00:19:30.812 "passthru": { 00:19:30.812 "name": "pt1", 00:19:30.812 "base_bdev_name": "malloc1" 00:19:30.812 } 00:19:30.812 } 00:19:30.812 }' 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:30.812 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:31.071 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:31.071 "name": "pt2", 00:19:31.071 "aliases": [ 00:19:31.071 "00000000-0000-0000-0000-000000000002" 00:19:31.071 ], 00:19:31.071 "product_name": "passthru", 00:19:31.071 "block_size": 512, 00:19:31.071 "num_blocks": 65536, 00:19:31.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:31.071 "assigned_rate_limits": { 00:19:31.071 "rw_ios_per_sec": 0, 00:19:31.071 "rw_mbytes_per_sec": 0, 00:19:31.071 "r_mbytes_per_sec": 0, 00:19:31.071 "w_mbytes_per_sec": 0 00:19:31.071 }, 00:19:31.071 "claimed": true, 00:19:31.071 "claim_type": "exclusive_write", 00:19:31.071 "zoned": false, 00:19:31.071 "supported_io_types": { 00:19:31.071 "read": true, 00:19:31.071 "write": true, 00:19:31.071 "unmap": true, 00:19:31.071 "flush": true, 00:19:31.071 "reset": true, 00:19:31.071 "nvme_admin": false, 00:19:31.071 "nvme_io": false, 00:19:31.071 "nvme_io_md": false, 00:19:31.071 "write_zeroes": true, 00:19:31.071 "zcopy": true, 00:19:31.071 "get_zone_info": false, 00:19:31.071 "zone_management": false, 00:19:31.071 "zone_append": false, 00:19:31.071 "compare": false, 00:19:31.071 "compare_and_write": false, 00:19:31.071 "abort": true, 00:19:31.071 "seek_hole": false, 00:19:31.071 "seek_data": false, 00:19:31.071 "copy": true, 00:19:31.071 "nvme_iov_md": false 00:19:31.071 }, 00:19:31.071 "memory_domains": [ 00:19:31.071 { 00:19:31.071 "dma_device_id": "system", 00:19:31.071 "dma_device_type": 1 00:19:31.071 }, 00:19:31.071 { 
00:19:31.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.071 "dma_device_type": 2 00:19:31.071 } 00:19:31.071 ], 00:19:31.071 "driver_specific": { 00:19:31.071 "passthru": { 00:19:31.071 "name": "pt2", 00:19:31.071 "base_bdev_name": "malloc2" 00:19:31.071 } 00:19:31.071 } 00:19:31.071 }' 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:31.072 00:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:31.331 "name": "pt3", 00:19:31.331 "aliases": [ 00:19:31.331 "00000000-0000-0000-0000-000000000003" 00:19:31.331 ], 00:19:31.331 "product_name": "passthru", 00:19:31.331 "block_size": 512, 00:19:31.331 "num_blocks": 65536, 00:19:31.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:31.331 "assigned_rate_limits": { 00:19:31.331 "rw_ios_per_sec": 0, 00:19:31.331 "rw_mbytes_per_sec": 0, 00:19:31.331 "r_mbytes_per_sec": 0, 00:19:31.331 "w_mbytes_per_sec": 0 00:19:31.331 }, 00:19:31.331 "claimed": true, 00:19:31.331 "claim_type": "exclusive_write", 00:19:31.331 "zoned": false, 00:19:31.331 "supported_io_types": { 00:19:31.331 "read": true, 00:19:31.331 "write": true, 00:19:31.331 "unmap": true, 00:19:31.331 "flush": true, 00:19:31.331 "reset": true, 00:19:31.331 "nvme_admin": false, 00:19:31.331 "nvme_io": false, 00:19:31.331 "nvme_io_md": false, 00:19:31.331 "write_zeroes": true, 00:19:31.331 "zcopy": true, 00:19:31.331 "get_zone_info": false, 00:19:31.331 "zone_management": false, 00:19:31.331 "zone_append": false, 00:19:31.331 "compare": false, 00:19:31.331 "compare_and_write": false, 00:19:31.331 "abort": true, 00:19:31.331 "seek_hole": false, 00:19:31.331 "seek_data": false, 00:19:31.331 "copy": true, 00:19:31.331 "nvme_iov_md": false 00:19:31.331 }, 00:19:31.331 "memory_domains": [ 00:19:31.331 { 00:19:31.331 "dma_device_id": "system", 00:19:31.331 "dma_device_type": 1 00:19:31.331 }, 00:19:31.331 { 00:19:31.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.331 "dma_device_type": 2 00:19:31.331 } 00:19:31.331 ], 00:19:31.331 "driver_specific": { 
00:19:31.331 "passthru": { 00:19:31.331 "name": "pt3", 00:19:31.331 "base_bdev_name": "malloc3" 00:19:31.331 } 00:19:31.331 } 00:19:31.331 }' 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:31.331 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:19:31.590 [2024-07-25 00:03:27.430595] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:31.590 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=1f20ad3d-93f9-4a59-8062-25e6612807c4 00:19:31.590 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 1f20ad3d-93f9-4a59-8062-25e6612807c4 ']' 00:19:31.590 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:31.849 [2024-07-25 00:03:27.642311] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:31.849 [2024-07-25 00:03:27.642366] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.849 [2024-07-25 00:03:27.642454] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.849 [2024-07-25 00:03:27.642545] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.849 [2024-07-25 00:03:27.642561] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:19:31.849 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.849 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:19:32.115 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:19:32.115 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:19:32.115 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:32.115 00:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:32.377 00:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:32.377 00:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:32.635 00:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:32.635 00:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:32.893 00:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:32.893 00:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:33.152 00:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:33.410 [2024-07-25 00:03:29.130616] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:33.410 [2024-07-25 00:03:29.132730] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:33.410 [2024-07-25 00:03:29.132840] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:33.410 [2024-07-25 00:03:29.132907] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:33.410 [2024-07-25 00:03:29.132995] 
bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:33.410 [2024-07-25 00:03:29.133027] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:33.410 [2024-07-25 00:03:29.133051] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.410 [2024-07-25 00:03:29.133073] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state configuring 00:19:33.410 request: 00:19:33.410 { 00:19:33.410 "name": "raid_bdev1", 00:19:33.410 "raid_level": "raid1", 00:19:33.410 "base_bdevs": [ 00:19:33.410 "malloc1", 00:19:33.410 "malloc2", 00:19:33.410 "malloc3" 00:19:33.410 ], 00:19:33.410 "superblock": false, 00:19:33.410 "method": "bdev_raid_create", 00:19:33.410 "req_id": 1 00:19:33.410 } 00:19:33.410 Got JSON-RPC error response 00:19:33.410 response: 00:19:33.410 { 00:19:33.410 "code": -17, 00:19:33.410 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:33.410 } 00:19:33.410 00:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:33.410 00:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.410 00:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.410 00:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.410 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:19:33.410 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.668 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:19:33.668 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:19:33.668 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:33.927 [2024-07-25 00:03:29.622655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:33.927 [2024-07-25 00:03:29.622961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.927 [2024-07-25 00:03:29.623189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:19:33.927 [2024-07-25 00:03:29.623334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.927 [2024-07-25 00:03:29.625857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.927 [2024-07-25 00:03:29.625913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:33.927 [2024-07-25 00:03:29.626028] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:33.927 [2024-07-25 00:03:29.626088] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:33.927 pt1 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.927 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.185 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:34.185 "name": "raid_bdev1", 00:19:34.185 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:34.185 "strip_size_kb": 0, 00:19:34.185 "state": "configuring", 00:19:34.185 "raid_level": "raid1", 00:19:34.185 "superblock": true, 00:19:34.185 "num_base_bdevs": 3, 00:19:34.185 "num_base_bdevs_discovered": 1, 00:19:34.185 "num_base_bdevs_operational": 3, 00:19:34.185 "base_bdevs_list": [ 00:19:34.185 { 00:19:34.185 "name": "pt1", 00:19:34.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:34.185 "is_configured": true, 00:19:34.185 "data_offset": 2048, 00:19:34.185 "data_size": 63488 00:19:34.185 }, 00:19:34.185 { 00:19:34.185 "name": null, 00:19:34.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:34.185 "is_configured": false, 00:19:34.185 "data_offset": 2048, 00:19:34.185 "data_size": 63488 00:19:34.185 }, 00:19:34.185 { 00:19:34.185 "name": null, 00:19:34.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:34.185 "is_configured": false, 00:19:34.185 "data_offset": 2048, 00:19:34.185 "data_size": 63488 00:19:34.185 } 00:19:34.185 ] 00:19:34.185 }' 00:19:34.185 00:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:34.185 00:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.444 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:19:34.444 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:34.702 [2024-07-25 00:03:30.458950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:34.702 [2024-07-25 00:03:30.459325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.702 [2024-07-25 00:03:30.459372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:19:34.702 [2024-07-25 00:03:30.459388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.702 [2024-07-25 00:03:30.459992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.702 [2024-07-25 00:03:30.460019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:34.702 [2024-07-25 
00:03:30.460130] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:34.702 [2024-07-25 00:03:30.460177] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:34.702 pt2 00:19:34.702 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:34.960 [2024-07-25 00:03:30.707052] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.960 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.218 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:35.218 "name": "raid_bdev1", 00:19:35.218 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:35.218 "strip_size_kb": 0, 00:19:35.218 "state": "configuring", 00:19:35.218 "raid_level": "raid1", 00:19:35.218 "superblock": true, 00:19:35.218 "num_base_bdevs": 3, 00:19:35.218 "num_base_bdevs_discovered": 1, 00:19:35.218 "num_base_bdevs_operational": 3, 00:19:35.218 "base_bdevs_list": [ 00:19:35.218 { 00:19:35.218 "name": "pt1", 00:19:35.218 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.218 "is_configured": true, 00:19:35.218 "data_offset": 2048, 00:19:35.218 "data_size": 63488 00:19:35.218 }, 00:19:35.218 { 00:19:35.218 "name": null, 00:19:35.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.218 "is_configured": false, 00:19:35.218 "data_offset": 2048, 00:19:35.218 "data_size": 63488 00:19:35.218 }, 00:19:35.218 { 00:19:35.218 "name": null, 00:19:35.218 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:35.218 "is_configured": false, 00:19:35.218 "data_offset": 2048, 00:19:35.218 "data_size": 63488 00:19:35.218 } 00:19:35.218 ] 00:19:35.218 }' 00:19:35.218 00:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:35.218 00:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.476 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:19:35.476 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:35.476 00:03:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:35.735 [2024-07-25 00:03:31.579249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:35.735 [2024-07-25 00:03:31.579520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.735 [2024-07-25 00:03:31.579558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:19:35.735 [2024-07-25 00:03:31.579577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.735 [2024-07-25 00:03:31.580122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.735 [2024-07-25 00:03:31.580169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:35.735 [2024-07-25 00:03:31.580267] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:35.735 [2024-07-25 00:03:31.580307] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:35.735 pt2 00:19:35.735 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:35.735 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:35.735 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:35.993 [2024-07-25 00:03:31.827298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:35.993 [2024-07-25 00:03:31.827612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.993 [2024-07-25 00:03:31.827695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:19:35.993 [2024-07-25 00:03:31.827915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.993 [2024-07-25 00:03:31.828480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.993 [2024-07-25 00:03:31.828709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:35.993 [2024-07-25 00:03:31.828944] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:35.993 [2024-07-25 00:03:31.829110] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:35.993 [2024-07-25 00:03:31.829408] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80 00:19:35.993 [2024-07-25 00:03:31.829586] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:35.993 [2024-07-25 00:03:31.829829] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:19:35.993 [2024-07-25 00:03:31.830348] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80 00:19:35.993 [2024-07-25 00:03:31.830500] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80 00:19:35.993 [2024-07-25 00:03:31.830838] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.993 pt3 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:35.993 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:36.252 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.252 00:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.252 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.252 "name": "raid_bdev1", 00:19:36.252 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:36.252 "strip_size_kb": 0, 00:19:36.252 "state": "online", 00:19:36.252 "raid_level": "raid1", 00:19:36.252 "superblock": true, 00:19:36.252 "num_base_bdevs": 3, 00:19:36.252 "num_base_bdevs_discovered": 3, 00:19:36.252 "num_base_bdevs_operational": 3, 00:19:36.252 "base_bdevs_list": [ 00:19:36.252 { 00:19:36.252 "name": "pt1", 00:19:36.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.252 "is_configured": true, 00:19:36.252 "data_offset": 2048, 00:19:36.252 "data_size": 63488 00:19:36.252 }, 00:19:36.252 { 00:19:36.252 "name": "pt2", 00:19:36.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.252 "is_configured": true, 00:19:36.252 "data_offset": 2048, 00:19:36.252 "data_size": 63488 00:19:36.252 }, 00:19:36.252 { 00:19:36.252 "name": "pt3", 00:19:36.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:36.252 "is_configured": true, 00:19:36.252 "data_offset": 2048, 00:19:36.252 "data_size": 63488 00:19:36.252 } 00:19:36.252 ] 00:19:36.252 }' 00:19:36.252 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.252 00:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:36.821 [2024-07-25 00:03:32.644604] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:36.821 "name": "raid_bdev1", 00:19:36.821 "aliases": [ 00:19:36.821 "1f20ad3d-93f9-4a59-8062-25e6612807c4" 00:19:36.821 ], 00:19:36.821 "product_name": "Raid Volume", 00:19:36.821 "block_size": 512, 00:19:36.821 "num_blocks": 63488, 00:19:36.821 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:36.821 "assigned_rate_limits": { 00:19:36.821 "rw_ios_per_sec": 0, 00:19:36.821 "rw_mbytes_per_sec": 0, 00:19:36.821 "r_mbytes_per_sec": 0, 00:19:36.821 "w_mbytes_per_sec": 0 00:19:36.821 }, 00:19:36.821 "claimed": false, 00:19:36.821 "zoned": false, 00:19:36.821 "supported_io_types": { 00:19:36.821 "read": true, 00:19:36.821 "write": true, 00:19:36.821 "unmap": false, 00:19:36.821 "flush": false, 00:19:36.821 "reset": true, 00:19:36.821 "nvme_admin": false, 00:19:36.821 "nvme_io": false, 00:19:36.821 "nvme_io_md": false, 00:19:36.821 "write_zeroes": true, 00:19:36.821 "zcopy": false, 00:19:36.821 "get_zone_info": false, 00:19:36.821 "zone_management": false, 00:19:36.821 "zone_append": false, 00:19:36.821 "compare": false, 00:19:36.821 "compare_and_write": false, 00:19:36.821 "abort": false, 00:19:36.821 "seek_hole": false, 00:19:36.821 "seek_data": false, 00:19:36.821 "copy": false, 00:19:36.821 "nvme_iov_md": false 00:19:36.821 }, 00:19:36.821 "memory_domains": [ 00:19:36.821 { 00:19:36.821 "dma_device_id": "system", 00:19:36.821 "dma_device_type": 1 00:19:36.821 }, 00:19:36.821 { 00:19:36.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.821 "dma_device_type": 2 00:19:36.821 }, 00:19:36.821 { 00:19:36.821 "dma_device_id": "system", 00:19:36.821 "dma_device_type": 1 00:19:36.821 }, 00:19:36.821 { 00:19:36.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.821 "dma_device_type": 2 00:19:36.821 }, 00:19:36.821 { 00:19:36.821 "dma_device_id": "system", 00:19:36.821 "dma_device_type": 1 00:19:36.821 }, 00:19:36.821 { 00:19:36.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.821 "dma_device_type": 2 00:19:36.821 } 00:19:36.821 ], 00:19:36.821 "driver_specific": { 00:19:36.821 "raid": { 00:19:36.821 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:36.821 "strip_size_kb": 0, 00:19:36.821 "state": "online", 00:19:36.821 "raid_level": "raid1", 00:19:36.821 "superblock": true, 00:19:36.821 "num_base_bdevs": 3, 00:19:36.821 "num_base_bdevs_discovered": 3, 00:19:36.821 "num_base_bdevs_operational": 3, 00:19:36.821 "base_bdevs_list": [ 00:19:36.821 { 00:19:36.821 "name": "pt1", 00:19:36.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.821 "is_configured": true, 00:19:36.821 "data_offset": 2048, 00:19:36.821 "data_size": 63488 00:19:36.821 }, 00:19:36.821 { 00:19:36.821 "name": "pt2", 00:19:36.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.821 "is_configured": true, 00:19:36.821 "data_offset": 2048, 00:19:36.821 "data_size": 63488 00:19:36.821 }, 00:19:36.821 { 00:19:36.821 "name": "pt3", 00:19:36.821 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:36.821 "is_configured": true, 00:19:36.821 "data_offset": 2048, 00:19:36.821 "data_size": 63488 00:19:36.821 } 00:19:36.821 ] 00:19:36.821 } 00:19:36.821 } 00:19:36.821 }' 
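For reference, the state verification traced above reduces to a few rpc.py and jq calls. This is a minimal sketch, not the verbatim bdev_raid.sh code; the socket path, bdev name, and expected values are taken from this run, and the field-by-field comparisons are illustrative:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Fetch the raid bdev entry that verify_raid_bdev_state inspects
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # Compare the fields checked for "online raid1 0 3"
  [[ $(jq -r .state <<< "$info") == online ]]
  [[ $(jq -r .raid_level <<< "$info") == raid1 ]]
  [[ $(jq -r .strip_size_kb <<< "$info") == 0 ]]
  [[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 3 ]]

The per-base-bdev property checks that follow use the same pattern against bdev_get_bdevs output, comparing .block_size, .md_size, .md_interleave and .dif_type for each configured pt bdev.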
00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:36.821 pt2 00:19:36.821 pt3' 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:36.821 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:37.080 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:37.080 "name": "pt1", 00:19:37.080 "aliases": [ 00:19:37.080 "00000000-0000-0000-0000-000000000001" 00:19:37.080 ], 00:19:37.080 "product_name": "passthru", 00:19:37.080 "block_size": 512, 00:19:37.080 "num_blocks": 65536, 00:19:37.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:37.081 "assigned_rate_limits": { 00:19:37.081 "rw_ios_per_sec": 0, 00:19:37.081 "rw_mbytes_per_sec": 0, 00:19:37.081 "r_mbytes_per_sec": 0, 00:19:37.081 "w_mbytes_per_sec": 0 00:19:37.081 }, 00:19:37.081 "claimed": true, 00:19:37.081 "claim_type": "exclusive_write", 00:19:37.081 "zoned": false, 00:19:37.081 "supported_io_types": { 00:19:37.081 "read": true, 00:19:37.081 "write": true, 00:19:37.081 "unmap": true, 00:19:37.081 "flush": true, 00:19:37.081 "reset": true, 00:19:37.081 "nvme_admin": false, 00:19:37.081 "nvme_io": false, 00:19:37.081 "nvme_io_md": false, 00:19:37.081 "write_zeroes": true, 00:19:37.081 "zcopy": true, 00:19:37.081 "get_zone_info": false, 00:19:37.081 "zone_management": false, 00:19:37.081 "zone_append": false, 00:19:37.081 "compare": false, 00:19:37.081 "compare_and_write": false, 00:19:37.081 "abort": true, 00:19:37.081 "seek_hole": false, 00:19:37.081 "seek_data": false, 00:19:37.081 "copy": true, 00:19:37.081 "nvme_iov_md": false 00:19:37.081 }, 00:19:37.081 "memory_domains": [ 00:19:37.081 { 00:19:37.081 "dma_device_id": "system", 00:19:37.081 "dma_device_type": 1 00:19:37.081 }, 00:19:37.081 { 00:19:37.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.081 "dma_device_type": 2 00:19:37.081 } 00:19:37.081 ], 00:19:37.081 "driver_specific": { 00:19:37.081 "passthru": { 00:19:37.081 "name": "pt1", 00:19:37.081 "base_bdev_name": "malloc1" 00:19:37.081 } 00:19:37.081 } 00:19:37.081 }' 00:19:37.081 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.340 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.340 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:37.340 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.340 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.340 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:37.340 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.340 00:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.340 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:37.340 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.340 00:03:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.340 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:37.340 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:37.340 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:37.340 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:37.599 "name": "pt2", 00:19:37.599 "aliases": [ 00:19:37.599 "00000000-0000-0000-0000-000000000002" 00:19:37.599 ], 00:19:37.599 "product_name": "passthru", 00:19:37.599 "block_size": 512, 00:19:37.599 "num_blocks": 65536, 00:19:37.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.599 "assigned_rate_limits": { 00:19:37.599 "rw_ios_per_sec": 0, 00:19:37.599 "rw_mbytes_per_sec": 0, 00:19:37.599 "r_mbytes_per_sec": 0, 00:19:37.599 "w_mbytes_per_sec": 0 00:19:37.599 }, 00:19:37.599 "claimed": true, 00:19:37.599 "claim_type": "exclusive_write", 00:19:37.599 "zoned": false, 00:19:37.599 "supported_io_types": { 00:19:37.599 "read": true, 00:19:37.599 "write": true, 00:19:37.599 "unmap": true, 00:19:37.599 "flush": true, 00:19:37.599 "reset": true, 00:19:37.599 "nvme_admin": false, 00:19:37.599 "nvme_io": false, 00:19:37.599 "nvme_io_md": false, 00:19:37.599 "write_zeroes": true, 00:19:37.599 "zcopy": true, 00:19:37.599 "get_zone_info": false, 00:19:37.599 "zone_management": false, 00:19:37.599 "zone_append": false, 00:19:37.599 "compare": false, 00:19:37.599 "compare_and_write": false, 00:19:37.599 "abort": true, 00:19:37.599 "seek_hole": false, 00:19:37.599 "seek_data": false, 00:19:37.599 "copy": true, 00:19:37.599 "nvme_iov_md": false 00:19:37.599 }, 00:19:37.599 "memory_domains": [ 00:19:37.599 { 00:19:37.599 "dma_device_id": "system", 00:19:37.599 "dma_device_type": 1 00:19:37.599 }, 00:19:37.599 { 00:19:37.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.599 "dma_device_type": 2 00:19:37.599 } 00:19:37.599 ], 00:19:37.599 "driver_specific": { 00:19:37.599 "passthru": { 00:19:37.599 "name": "pt2", 00:19:37.599 "base_bdev_name": "malloc2" 00:19:37.599 } 00:19:37.599 } 00:19:37.599 }' 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:37.599 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:37.859 "name": "pt3", 00:19:37.859 "aliases": [ 00:19:37.859 "00000000-0000-0000-0000-000000000003" 00:19:37.859 ], 00:19:37.859 "product_name": "passthru", 00:19:37.859 "block_size": 512, 00:19:37.859 "num_blocks": 65536, 00:19:37.859 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:37.859 "assigned_rate_limits": { 00:19:37.859 "rw_ios_per_sec": 0, 00:19:37.859 "rw_mbytes_per_sec": 0, 00:19:37.859 "r_mbytes_per_sec": 0, 00:19:37.859 "w_mbytes_per_sec": 0 00:19:37.859 }, 00:19:37.859 "claimed": true, 00:19:37.859 "claim_type": "exclusive_write", 00:19:37.859 "zoned": false, 00:19:37.859 "supported_io_types": { 00:19:37.859 "read": true, 00:19:37.859 "write": true, 00:19:37.859 "unmap": true, 00:19:37.859 "flush": true, 00:19:37.859 "reset": true, 00:19:37.859 "nvme_admin": false, 00:19:37.859 "nvme_io": false, 00:19:37.859 "nvme_io_md": false, 00:19:37.859 "write_zeroes": true, 00:19:37.859 "zcopy": true, 00:19:37.859 "get_zone_info": false, 00:19:37.859 "zone_management": false, 00:19:37.859 "zone_append": false, 00:19:37.859 "compare": false, 00:19:37.859 "compare_and_write": false, 00:19:37.859 "abort": true, 00:19:37.859 "seek_hole": false, 00:19:37.859 "seek_data": false, 00:19:37.859 "copy": true, 00:19:37.859 "nvme_iov_md": false 00:19:37.859 }, 00:19:37.859 "memory_domains": [ 00:19:37.859 { 00:19:37.859 "dma_device_id": "system", 00:19:37.859 "dma_device_type": 1 00:19:37.859 }, 00:19:37.859 { 00:19:37.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.859 "dma_device_type": 2 00:19:37.859 } 00:19:37.859 ], 00:19:37.859 "driver_specific": { 00:19:37.859 "passthru": { 00:19:37.859 "name": "pt3", 00:19:37.859 "base_bdev_name": "malloc3" 00:19:37.859 } 00:19:37.859 } 00:19:37.859 }' 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:37.859 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:19:38.118 [2024-07-25 00:03:33.924921] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.118 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 1f20ad3d-93f9-4a59-8062-25e6612807c4 '!=' 1f20ad3d-93f9-4a59-8062-25e6612807c4 ']' 00:19:38.118 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:19:38.118 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:38.118 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:38.118 00:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:38.377 [2024-07-25 00:03:34.148693] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.377 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.636 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.636 "name": "raid_bdev1", 00:19:38.636 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:38.636 "strip_size_kb": 0, 00:19:38.636 "state": "online", 00:19:38.636 "raid_level": "raid1", 00:19:38.636 "superblock": true, 00:19:38.636 "num_base_bdevs": 3, 00:19:38.636 "num_base_bdevs_discovered": 2, 00:19:38.636 "num_base_bdevs_operational": 2, 00:19:38.636 "base_bdevs_list": [ 00:19:38.636 { 00:19:38.636 "name": null, 00:19:38.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.636 "is_configured": false, 00:19:38.636 "data_offset": 2048, 00:19:38.636 "data_size": 63488 00:19:38.636 }, 00:19:38.636 { 00:19:38.636 "name": "pt2", 00:19:38.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.636 "is_configured": true, 00:19:38.636 "data_offset": 2048, 00:19:38.636 "data_size": 63488 00:19:38.636 }, 00:19:38.636 { 00:19:38.636 "name": "pt3", 00:19:38.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:38.636 "is_configured": true, 00:19:38.636 "data_offset": 2048, 00:19:38.636 
"data_size": 63488 00:19:38.636 } 00:19:38.636 ] 00:19:38.636 }' 00:19:38.636 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.636 00:03:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.895 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:39.153 [2024-07-25 00:03:34.980865] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.153 [2024-07-25 00:03:34.980902] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.153 [2024-07-25 00:03:34.980984] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.153 [2024-07-25 00:03:34.981055] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.153 [2024-07-25 00:03:34.981075] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline 00:19:39.153 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.153 00:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:19:39.436 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:19:39.436 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:19:39.436 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:39.436 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:19:39.436 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:39.728 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:39.728 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:19:39.728 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:39.987 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:39.987 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:19:39.987 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:19:39.987 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:19:39.987 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:40.246 [2024-07-25 00:03:35.949107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.246 [2024-07-25 00:03:35.949393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.246 [2024-07-25 00:03:35.949432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:19:40.246 [2024-07-25 00:03:35.949451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.246 [2024-07-25 00:03:35.952037] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:19:40.246 [2024-07-25 00:03:35.952088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.246 [2024-07-25 00:03:35.952191] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:40.246 [2024-07-25 00:03:35.952251] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.246 pt2 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.246 00:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.504 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.504 "name": "raid_bdev1", 00:19:40.504 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:40.504 "strip_size_kb": 0, 00:19:40.504 "state": "configuring", 00:19:40.504 "raid_level": "raid1", 00:19:40.504 "superblock": true, 00:19:40.504 "num_base_bdevs": 3, 00:19:40.504 "num_base_bdevs_discovered": 1, 00:19:40.504 "num_base_bdevs_operational": 2, 00:19:40.504 "base_bdevs_list": [ 00:19:40.504 { 00:19:40.504 "name": null, 00:19:40.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.504 "is_configured": false, 00:19:40.504 "data_offset": 2048, 00:19:40.504 "data_size": 63488 00:19:40.504 }, 00:19:40.504 { 00:19:40.504 "name": "pt2", 00:19:40.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.504 "is_configured": true, 00:19:40.504 "data_offset": 2048, 00:19:40.504 "data_size": 63488 00:19:40.504 }, 00:19:40.504 { 00:19:40.504 "name": null, 00:19:40.504 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:40.504 "is_configured": false, 00:19:40.504 "data_offset": 2048, 00:19:40.504 "data_size": 63488 00:19:40.504 } 00:19:40.504 ] 00:19:40.504 }' 00:19:40.504 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.504 00:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.763 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:19:40.763 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:19:40.763 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:19:40.763 00:03:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:41.021 [2024-07-25 00:03:36.733362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:41.021 [2024-07-25 00:03:36.733645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.021 [2024-07-25 00:03:36.733685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:19:41.021 [2024-07-25 00:03:36.733704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.021 [2024-07-25 00:03:36.734275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.021 [2024-07-25 00:03:36.734311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:41.021 [2024-07-25 00:03:36.734410] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:41.021 [2024-07-25 00:03:36.734441] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:41.021 [2024-07-25 00:03:36.734597] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ae80 00:19:41.021 [2024-07-25 00:03:36.734626] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:41.021 [2024-07-25 00:03:36.734741] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:19:41.021 [2024-07-25 00:03:36.735174] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ae80 00:19:41.021 [2024-07-25 00:03:36.735213] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ae80 00:19:41.021 [2024-07-25 00:03:36.735370] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.021 pt3 00:19:41.021 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:41.021 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:41.021 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:41.021 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:41.021 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:41.021 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:41.021 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.021 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.021 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.021 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:41.022 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.022 00:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.280 00:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.280 "name": "raid_bdev1", 00:19:41.280 "uuid": 
"1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:41.280 "strip_size_kb": 0, 00:19:41.280 "state": "online", 00:19:41.280 "raid_level": "raid1", 00:19:41.280 "superblock": true, 00:19:41.280 "num_base_bdevs": 3, 00:19:41.280 "num_base_bdevs_discovered": 2, 00:19:41.280 "num_base_bdevs_operational": 2, 00:19:41.280 "base_bdevs_list": [ 00:19:41.280 { 00:19:41.280 "name": null, 00:19:41.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.280 "is_configured": false, 00:19:41.280 "data_offset": 2048, 00:19:41.280 "data_size": 63488 00:19:41.280 }, 00:19:41.280 { 00:19:41.280 "name": "pt2", 00:19:41.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.280 "is_configured": true, 00:19:41.280 "data_offset": 2048, 00:19:41.280 "data_size": 63488 00:19:41.280 }, 00:19:41.280 { 00:19:41.280 "name": "pt3", 00:19:41.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:41.280 "is_configured": true, 00:19:41.280 "data_offset": 2048, 00:19:41.280 "data_size": 63488 00:19:41.280 } 00:19:41.280 ] 00:19:41.280 }' 00:19:41.280 00:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.280 00:03:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.538 00:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:41.796 [2024-07-25 00:03:37.541566] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.797 [2024-07-25 00:03:37.541610] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.797 [2024-07-25 00:03:37.541687] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.797 [2024-07-25 00:03:37.541755] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.797 [2024-07-25 00:03:37.541769] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ae80 name raid_bdev1, state offline 00:19:41.797 00:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:19:41.797 00:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.055 00:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:19:42.055 00:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:19:42.055 00:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:19:42.055 00:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:19:42.055 00:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:42.314 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:42.572 [2024-07-25 00:03:38.229790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:42.572 [2024-07-25 00:03:38.229953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.572 [2024-07-25 00:03:38.229987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:19:42.572 [2024-07-25 00:03:38.230001] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.572 [2024-07-25 00:03:38.232675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.572 [2024-07-25 00:03:38.232722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:42.572 [2024-07-25 00:03:38.232922] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:42.572 [2024-07-25 00:03:38.232979] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:42.572 [2024-07-25 00:03:38.233145] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:42.572 [2024-07-25 00:03:38.233168] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:42.572 [2024-07-25 00:03:38.233217] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:19:42.572 [2024-07-25 00:03:38.233282] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:42.572 pt1 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.572 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.830 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.830 "name": "raid_bdev1", 00:19:42.830 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:42.830 "strip_size_kb": 0, 00:19:42.830 "state": "configuring", 00:19:42.830 "raid_level": "raid1", 00:19:42.830 "superblock": true, 00:19:42.830 "num_base_bdevs": 3, 00:19:42.830 "num_base_bdevs_discovered": 1, 00:19:42.830 "num_base_bdevs_operational": 2, 00:19:42.830 "base_bdevs_list": [ 00:19:42.830 { 00:19:42.830 "name": null, 00:19:42.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.830 "is_configured": false, 00:19:42.830 "data_offset": 2048, 00:19:42.830 "data_size": 63488 00:19:42.830 }, 00:19:42.830 { 00:19:42.830 "name": "pt2", 00:19:42.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.830 "is_configured": true, 00:19:42.830 "data_offset": 2048, 
00:19:42.830 "data_size": 63488 00:19:42.830 }, 00:19:42.830 { 00:19:42.830 "name": null, 00:19:42.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:42.830 "is_configured": false, 00:19:42.830 "data_offset": 2048, 00:19:42.830 "data_size": 63488 00:19:42.830 } 00:19:42.830 ] 00:19:42.830 }' 00:19:42.830 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.830 00:03:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.088 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:19:43.088 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:43.347 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:19:43.347 00:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:43.347 [2024-07-25 00:03:39.186087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:43.347 [2024-07-25 00:03:39.186200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.347 [2024-07-25 00:03:39.186248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:19:43.347 [2024-07-25 00:03:39.186261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.347 [2024-07-25 00:03:39.186732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.347 [2024-07-25 00:03:39.186755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:43.347 [2024-07-25 00:03:39.186925] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:43.347 [2024-07-25 00:03:39.187002] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:43.347 [2024-07-25 00:03:39.187154] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:19:43.347 [2024-07-25 00:03:39.187171] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:43.347 [2024-07-25 00:03:39.187337] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:19:43.347 [2024-07-25 00:03:39.187714] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:19:43.347 [2024-07-25 00:03:39.187741] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:19:43.347 [2024-07-25 00:03:39.187947] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.347 pt3 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.347 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.914 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.914 "name": "raid_bdev1", 00:19:43.914 "uuid": "1f20ad3d-93f9-4a59-8062-25e6612807c4", 00:19:43.914 "strip_size_kb": 0, 00:19:43.914 "state": "online", 00:19:43.914 "raid_level": "raid1", 00:19:43.914 "superblock": true, 00:19:43.914 "num_base_bdevs": 3, 00:19:43.914 "num_base_bdevs_discovered": 2, 00:19:43.914 "num_base_bdevs_operational": 2, 00:19:43.914 "base_bdevs_list": [ 00:19:43.914 { 00:19:43.914 "name": null, 00:19:43.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.914 "is_configured": false, 00:19:43.914 "data_offset": 2048, 00:19:43.914 "data_size": 63488 00:19:43.914 }, 00:19:43.914 { 00:19:43.914 "name": "pt2", 00:19:43.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.914 "is_configured": true, 00:19:43.914 "data_offset": 2048, 00:19:43.914 "data_size": 63488 00:19:43.914 }, 00:19:43.914 { 00:19:43.914 "name": "pt3", 00:19:43.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:43.914 "is_configured": true, 00:19:43.914 "data_offset": 2048, 00:19:43.914 "data_size": 63488 00:19:43.914 } 00:19:43.914 ] 00:19:43.914 }' 00:19:43.914 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.914 00:03:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.172 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:19:44.172 00:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:44.172 00:03:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:19:44.172 00:03:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:44.172 00:03:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:19:44.431 [2024-07-25 00:03:40.294581] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 1f20ad3d-93f9-4a59-8062-25e6612807c4 '!=' 1f20ad3d-93f9-4a59-8062-25e6612807c4 ']' 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 86962 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 86962 ']' 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 86962 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86962 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86962' 00:19:44.689 killing process with pid 86962 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 86962 00:19:44.689 [2024-07-25 00:03:40.353811] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:44.689 00:03:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 86962 00:19:44.689 [2024-07-25 00:03:40.353953] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.689 [2024-07-25 00:03:40.354029] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.689 [2024-07-25 00:03:40.354049] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:19:44.947 [2024-07-25 00:03:40.602487] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:45.882 00:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:19:45.882 00:19:45.882 real 0m18.814s 00:19:45.882 user 0m32.675s 00:19:45.882 sys 0m2.969s 00:19:45.882 ************************************ 00:19:45.882 END TEST raid_superblock_test 00:19:45.882 ************************************ 00:19:45.882 00:03:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:45.882 00:03:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.882 00:03:41 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:19:45.882 00:03:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:45.882 00:03:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:45.882 00:03:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:45.882 ************************************ 00:19:45.882 START TEST raid_read_error_test 00:19:45.882 ************************************ 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:45.882 00:03:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.wFmQ4td6ZE 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=87629 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 87629 /var/tmp/spdk-raid.sock 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 87629 ']' 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.882 00:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.141 [2024-07-25 00:03:41.778487] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
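The read-error test that starts here follows a fixed recipe before any I/O runs: each base device is a malloc bdev wrapped first by an error-injection bdev and then by a passthru bdev, the raid1 volume is assembled on the passthru wrappers, and a read failure is injected into one leg while bdevperf exercises the volume. A rough sketch of that setup, assembled from the rpc calls visible in this log (the three-way loop is illustrative; the script itself iterates over its base_bdevs array):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
      $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"         # 32 MB backing store, 512-byte blocks
      $rpc bdev_error_create "BaseBdev${i}_malloc"                    # exposes EE_BaseBdev${i}_malloc
      $rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
  # bdevperf is already running with -z; I/O is kicked off via its helper,
  # and shortly afterwards reads on the first leg are made to fail:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
  sleep 1
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure

Because the level is raid1, the test then expects the raid bdev to stay online with all three base bdevs operational despite the failed reads, which is what the verify_raid_bdev_state calls below assert.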
00:19:46.141 [2024-07-25 00:03:41.778665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87629 ] 00:19:46.141 [2024-07-25 00:03:41.952749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.400 [2024-07-25 00:03:42.227342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.658 [2024-07-25 00:03:42.408176] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.917 00:03:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.917 00:03:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:46.917 00:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:46.917 00:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:47.175 BaseBdev1_malloc 00:19:47.175 00:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:47.433 true 00:19:47.433 00:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:47.692 [2024-07-25 00:03:43.500566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:47.692 [2024-07-25 00:03:43.500666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.692 [2024-07-25 00:03:43.500700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:19:47.692 [2024-07-25 00:03:43.500718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.692 [2024-07-25 00:03:43.503307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.692 [2024-07-25 00:03:43.503531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:47.692 BaseBdev1 00:19:47.692 00:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:47.692 00:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:47.950 BaseBdev2_malloc 00:19:47.950 00:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:48.228 true 00:19:48.228 00:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:48.502 [2024-07-25 00:03:44.273784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:48.502 [2024-07-25 00:03:44.274063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.502 [2024-07-25 00:03:44.274220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:19:48.502 [2024-07-25 00:03:44.274350] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.502 [2024-07-25 00:03:44.277046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.502 [2024-07-25 00:03:44.277263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:48.502 BaseBdev2 00:19:48.502 00:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:48.502 00:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:48.761 BaseBdev3_malloc 00:19:48.761 00:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:49.019 true 00:19:49.019 00:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:49.278 [2024-07-25 00:03:44.971983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:49.278 [2024-07-25 00:03:44.972115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.278 [2024-07-25 00:03:44.972146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:19:49.278 [2024-07-25 00:03:44.972163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.278 [2024-07-25 00:03:44.974930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.278 BaseBdev3 00:19:49.278 [2024-07-25 00:03:44.975149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:49.278 00:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:49.536 [2024-07-25 00:03:45.240147] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:49.536 [2024-07-25 00:03:45.242307] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:49.536 [2024-07-25 00:03:45.242415] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:49.537 [2024-07-25 00:03:45.242696] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:19:49.537 [2024-07-25 00:03:45.242715] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:49.537 [2024-07-25 00:03:45.242908] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:19:49.537 [2024-07-25 00:03:45.243381] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:19:49.537 [2024-07-25 00:03:45.243419] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:19:49.537 [2024-07-25 00:03:45.243605] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.537 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.795 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.795 "name": "raid_bdev1", 00:19:49.795 "uuid": "c08a8717-299e-4090-9129-c00ae018db77", 00:19:49.795 "strip_size_kb": 0, 00:19:49.795 "state": "online", 00:19:49.795 "raid_level": "raid1", 00:19:49.795 "superblock": true, 00:19:49.795 "num_base_bdevs": 3, 00:19:49.795 "num_base_bdevs_discovered": 3, 00:19:49.795 "num_base_bdevs_operational": 3, 00:19:49.795 "base_bdevs_list": [ 00:19:49.795 { 00:19:49.795 "name": "BaseBdev1", 00:19:49.795 "uuid": "0ec8eb0b-94aa-5843-8b5d-21049f95e426", 00:19:49.795 "is_configured": true, 00:19:49.795 "data_offset": 2048, 00:19:49.795 "data_size": 63488 00:19:49.795 }, 00:19:49.795 { 00:19:49.795 "name": "BaseBdev2", 00:19:49.795 "uuid": "2298d486-61f3-5b50-b454-ea6877f80e59", 00:19:49.795 "is_configured": true, 00:19:49.795 "data_offset": 2048, 00:19:49.795 "data_size": 63488 00:19:49.795 }, 00:19:49.795 { 00:19:49.795 "name": "BaseBdev3", 00:19:49.795 "uuid": "9acd4f39-26d1-5303-8168-d6f4789c4bf2", 00:19:49.795 "is_configured": true, 00:19:49.795 "data_offset": 2048, 00:19:49.795 "data_size": 63488 00:19:49.795 } 00:19:49.795 ] 00:19:49.795 }' 00:19:49.795 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.795 00:03:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.053 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:19:50.053 00:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:50.312 [2024-07-25 00:03:45.993650] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:19:51.248 00:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # 
expected_num_base_bdevs=3 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.508 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.767 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.767 "name": "raid_bdev1", 00:19:51.767 "uuid": "c08a8717-299e-4090-9129-c00ae018db77", 00:19:51.767 "strip_size_kb": 0, 00:19:51.767 "state": "online", 00:19:51.767 "raid_level": "raid1", 00:19:51.767 "superblock": true, 00:19:51.767 "num_base_bdevs": 3, 00:19:51.767 "num_base_bdevs_discovered": 3, 00:19:51.767 "num_base_bdevs_operational": 3, 00:19:51.767 "base_bdevs_list": [ 00:19:51.767 { 00:19:51.767 "name": "BaseBdev1", 00:19:51.767 "uuid": "0ec8eb0b-94aa-5843-8b5d-21049f95e426", 00:19:51.767 "is_configured": true, 00:19:51.767 "data_offset": 2048, 00:19:51.767 "data_size": 63488 00:19:51.767 }, 00:19:51.767 { 00:19:51.767 "name": "BaseBdev2", 00:19:51.767 "uuid": "2298d486-61f3-5b50-b454-ea6877f80e59", 00:19:51.767 "is_configured": true, 00:19:51.767 "data_offset": 2048, 00:19:51.767 "data_size": 63488 00:19:51.767 }, 00:19:51.767 { 00:19:51.767 "name": "BaseBdev3", 00:19:51.767 "uuid": "9acd4f39-26d1-5303-8168-d6f4789c4bf2", 00:19:51.767 "is_configured": true, 00:19:51.767 "data_offset": 2048, 00:19:51.767 "data_size": 63488 00:19:51.767 } 00:19:51.767 ] 00:19:51.767 }' 00:19:51.767 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.767 00:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.026 00:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:52.285 [2024-07-25 00:03:48.034367] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:52.285 [2024-07-25 00:03:48.034584] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:52.285 [2024-07-25 00:03:48.037705] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:52.285 [2024-07-25 00:03:48.037924] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.285 [2024-07-25 00:03:48.038168] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:52.285 [2024-07-25 00:03:48.038317] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:19:52.285 0 00:19:52.285 00:03:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 87629 00:19:52.285 00:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 87629 ']' 00:19:52.285 00:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 87629 00:19:52.285 00:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:19:52.285 00:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.285 00:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87629 00:19:52.285 killing process with pid 87629 00:19:52.285 00:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:52.285 00:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:52.286 00:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87629' 00:19:52.286 00:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 87629 00:19:52.286 [2024-07-25 00:03:48.088766] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:52.286 00:03:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 87629 00:19:52.545 [2024-07-25 00:03:48.263419] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:53.924 00:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.wFmQ4td6ZE 00:19:53.924 00:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:19:53.924 00:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:19:53.924 00:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:19:53.924 00:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:19:53.924 00:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:53.924 00:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:53.924 00:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:53.924 00:19:53.924 real 0m7.720s 00:19:53.924 user 0m11.499s 00:19:53.924 sys 0m0.968s 00:19:53.924 ************************************ 00:19:53.924 END TEST raid_read_error_test 00:19:53.924 ************************************ 00:19:53.924 00:03:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:53.924 00:03:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.924 00:03:49 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:19:53.925 00:03:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:53.925 00:03:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:53.925 00:03:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:53.925 ************************************ 00:19:53.925 START TEST raid_write_error_test 00:19:53.925 ************************************ 00:19:53.925 00:03:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Ft4kmQq5fQ 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=87819 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 87819 /var/tmp/spdk-raid.sock 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 87819 ']' 00:19:53.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
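The write pass that follows rebuilds the same three-leg stack the read test used above: each leg is a malloc bdev wrapped first by an error-injection bdev (registered under the EE_ prefix) and then by a passthru bdev, and the three passthru bdevs are assembled into raid_bdev1. As a minimal sketch, assuming a built SPDK tree at $SPDK_DIR (a placeholder, not a path taken from this log) and a bdevperf instance already listening on the RPC socket, the same stack could be reproduced by hand:

  # build one error-injectable leg per base bdev, then the raid1 set
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
      $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"              # 32 MiB, 512-byte blocks
      $RPC bdev_error_create "BaseBdev${i}_malloc"                         # exposes EE_BaseBdev${i}_malloc
      $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

The trailing -s requests a superblock, which is why bdev_raid_get_bdevs reports "superblock": true in the traces. After the bdev_error_inject_error EE_BaseBdev1_malloc write failure call traced below, the raid drops BaseBdev1, and the same query shows num_base_bdevs_discovered falling from 3 to 2 while the array stays online.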
00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.925 00:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.925 [2024-07-25 00:03:49.550967] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:19:53.925 [2024-07-25 00:03:49.551188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87819 ] 00:19:53.925 [2024-07-25 00:03:49.723876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.184 [2024-07-25 00:03:49.906724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.443 [2024-07-25 00:03:50.084818] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.702 00:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:54.702 00:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:54.702 00:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:54.702 00:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:54.961 BaseBdev1_malloc 00:19:54.961 00:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:55.220 true 00:19:55.220 00:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:55.479 [2024-07-25 00:03:51.290007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:55.479 [2024-07-25 00:03:51.290096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.479 [2024-07-25 00:03:51.290131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:19:55.479 [2024-07-25 00:03:51.290148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.479 [2024-07-25 00:03:51.292876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.479 [2024-07-25 00:03:51.292939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:55.479 BaseBdev1 00:19:55.479 00:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:55.479 00:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:55.738 BaseBdev2_malloc 00:19:55.738 00:03:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:55.997 true 00:19:55.997 00:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:56.256 [2024-07-25 00:03:52.058734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:56.256 [2024-07-25 00:03:52.058885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.256 [2024-07-25 00:03:52.058932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:19:56.256 [2024-07-25 00:03:52.058955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.256 [2024-07-25 00:03:52.061611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.256 [2024-07-25 00:03:52.061691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:56.256 BaseBdev2 00:19:56.256 00:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:56.256 00:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:56.529 BaseBdev3_malloc 00:19:56.529 00:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:56.800 true 00:19:56.800 00:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:57.058 [2024-07-25 00:03:52.801882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:57.058 [2024-07-25 00:03:52.801976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.058 [2024-07-25 00:03:52.802006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:19:57.058 [2024-07-25 00:03:52.802023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.058 [2024-07-25 00:03:52.804740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.058 [2024-07-25 00:03:52.804789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:57.058 BaseBdev3 00:19:57.058 00:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:57.318 [2024-07-25 00:03:53.033995] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.318 [2024-07-25 00:03:53.036340] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:57.318 [2024-07-25 00:03:53.036618] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:57.318 [2024-07-25 00:03:53.037082] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:19:57.318 [2024-07-25 00:03:53.037310] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:57.318 [2024-07-25 
00:03:53.037499] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:19:57.318 [2024-07-25 00:03:53.038099] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:19:57.318 [2024-07-25 00:03:53.038315] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:19:57.318 [2024-07-25 00:03:53.038711] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.318 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.577 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.577 "name": "raid_bdev1", 00:19:57.577 "uuid": "7440e2c1-1647-4ac6-814c-e4c48908834f", 00:19:57.577 "strip_size_kb": 0, 00:19:57.577 "state": "online", 00:19:57.577 "raid_level": "raid1", 00:19:57.577 "superblock": true, 00:19:57.577 "num_base_bdevs": 3, 00:19:57.577 "num_base_bdevs_discovered": 3, 00:19:57.577 "num_base_bdevs_operational": 3, 00:19:57.577 "base_bdevs_list": [ 00:19:57.577 { 00:19:57.577 "name": "BaseBdev1", 00:19:57.577 "uuid": "fe0689b6-9fb3-5f84-a5de-2ddbcf18fbe3", 00:19:57.577 "is_configured": true, 00:19:57.577 "data_offset": 2048, 00:19:57.577 "data_size": 63488 00:19:57.577 }, 00:19:57.578 { 00:19:57.578 "name": "BaseBdev2", 00:19:57.578 "uuid": "92f61ee8-4e4c-5be5-bf53-8d4306e45b26", 00:19:57.578 "is_configured": true, 00:19:57.578 "data_offset": 2048, 00:19:57.578 "data_size": 63488 00:19:57.578 }, 00:19:57.578 { 00:19:57.578 "name": "BaseBdev3", 00:19:57.578 "uuid": "7366e393-41ff-503e-9829-0c096387dde0", 00:19:57.578 "is_configured": true, 00:19:57.578 "data_offset": 2048, 00:19:57.578 "data_size": 63488 00:19:57.578 } 00:19:57.578 ] 00:19:57.578 }' 00:19:57.578 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:57.578 00:03:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.836 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:19:57.836 00:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:58.096 [2024-07-25 00:03:53.756088] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:19:59.033 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:59.292 [2024-07-25 00:03:54.927077] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:19:59.292 [2024-07-25 00:03:54.927155] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:59.292 [2024-07-25 00:03:54.927425] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005ad0 00:19:59.292 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:59.292 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=2 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.293 00:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.552 00:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.552 "name": "raid_bdev1", 00:19:59.552 "uuid": "7440e2c1-1647-4ac6-814c-e4c48908834f", 00:19:59.552 "strip_size_kb": 0, 00:19:59.552 "state": "online", 00:19:59.552 "raid_level": "raid1", 00:19:59.552 "superblock": true, 00:19:59.552 "num_base_bdevs": 3, 00:19:59.552 "num_base_bdevs_discovered": 2, 00:19:59.552 "num_base_bdevs_operational": 2, 00:19:59.552 "base_bdevs_list": [ 00:19:59.552 { 00:19:59.552 "name": null, 00:19:59.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.552 "is_configured": false, 00:19:59.552 "data_offset": 2048, 00:19:59.552 "data_size": 63488 00:19:59.552 }, 00:19:59.552 { 00:19:59.552 "name": "BaseBdev2", 00:19:59.552 "uuid": "92f61ee8-4e4c-5be5-bf53-8d4306e45b26", 00:19:59.552 
"is_configured": true, 00:19:59.552 "data_offset": 2048, 00:19:59.552 "data_size": 63488 00:19:59.552 }, 00:19:59.552 { 00:19:59.552 "name": "BaseBdev3", 00:19:59.552 "uuid": "7366e393-41ff-503e-9829-0c096387dde0", 00:19:59.552 "is_configured": true, 00:19:59.552 "data_offset": 2048, 00:19:59.552 "data_size": 63488 00:19:59.552 } 00:19:59.552 ] 00:19:59.552 }' 00:19:59.552 00:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.552 00:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.810 00:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:00.069 [2024-07-25 00:03:55.818156] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.069 [2024-07-25 00:03:55.818474] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.069 [2024-07-25 00:03:55.821546] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.069 0 00:20:00.069 [2024-07-25 00:03:55.821771] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.069 [2024-07-25 00:03:55.821897] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.069 [2024-07-25 00:03:55.821924] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:20:00.069 00:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 87819 00:20:00.069 00:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 87819 ']' 00:20:00.069 00:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 87819 00:20:00.069 00:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:20:00.069 00:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:00.069 00:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87819 00:20:00.069 killing process with pid 87819 00:20:00.069 00:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:00.069 00:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:00.069 00:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87819' 00:20:00.069 00:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 87819 00:20:00.069 [2024-07-25 00:03:55.872600] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:00.069 00:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 87819 00:20:00.328 [2024-07-25 00:03:56.051823] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:01.705 00:03:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Ft4kmQq5fQ 00:20:01.705 00:03:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:20:01.705 00:03:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:20:01.705 00:03:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:20:01.705 00:03:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 
00:20:01.705 00:03:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:01.705 00:03:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:01.705 00:03:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:01.705 ************************************ 00:20:01.705 END TEST raid_write_error_test 00:20:01.705 ************************************ 00:20:01.705 00:20:01.705 real 0m7.730s 00:20:01.705 user 0m11.503s 00:20:01.705 sys 0m0.989s 00:20:01.705 00:03:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:01.705 00:03:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.705 00:03:57 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:20:01.705 00:03:57 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:20:01.705 00:03:57 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:20:01.705 00:03:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:01.705 00:03:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:01.705 00:03:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:01.705 ************************************ 00:20:01.705 START TEST raid_state_function_test 00:20:01.705 ************************************ 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 
-- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:01.705 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:01.706 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=88003 00:20:01.706 Process raid pid: 88003 00:20:01.706 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:01.706 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 88003' 00:20:01.706 00:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 88003 /var/tmp/spdk-raid.sock 00:20:01.706 00:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 88003 ']' 00:20:01.706 00:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:01.706 00:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:01.706 00:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:01.706 00:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.706 00:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.706 [2024-07-25 00:03:57.330494] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:20:01.706 [2024-07-25 00:03:57.330678] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.706 [2024-07-25 00:03:57.506936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.964 [2024-07-25 00:03:57.688892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.222 [2024-07-25 00:03:57.866051] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.482 00:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.482 00:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:20:02.482 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:02.740 [2024-07-25 00:03:58.499481] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:02.740 [2024-07-25 00:03:58.499570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:02.740 [2024-07-25 00:03:58.499586] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:02.740 [2024-07-25 00:03:58.499601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:02.740 [2024-07-25 00:03:58.499610] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:02.740 [2024-07-25 00:03:58.499623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:02.740 [2024-07-25 00:03:58.499632] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:02.740 [2024-07-25 00:03:58.499644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:02.740 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:02.740 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:02.740 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:02.740 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:02.740 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:02.741 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:02.741 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.741 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.741 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.741 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.741 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.741 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.998 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.998 "name": "Existed_Raid", 00:20:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.998 "strip_size_kb": 64, 00:20:02.998 "state": "configuring", 00:20:02.998 "raid_level": "raid0", 00:20:02.998 "superblock": false, 00:20:02.998 "num_base_bdevs": 4, 00:20:02.998 "num_base_bdevs_discovered": 0, 00:20:02.998 "num_base_bdevs_operational": 4, 00:20:02.998 "base_bdevs_list": [ 00:20:02.998 { 00:20:02.998 "name": "BaseBdev1", 00:20:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.998 "is_configured": false, 00:20:02.998 "data_offset": 0, 00:20:02.998 "data_size": 0 00:20:02.998 }, 00:20:02.998 { 00:20:02.998 "name": "BaseBdev2", 00:20:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.998 "is_configured": false, 00:20:02.998 "data_offset": 0, 00:20:02.998 "data_size": 0 00:20:02.998 }, 00:20:02.998 { 00:20:02.998 "name": "BaseBdev3", 00:20:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.998 "is_configured": false, 00:20:02.998 "data_offset": 0, 00:20:02.998 "data_size": 0 00:20:02.998 }, 00:20:02.998 { 00:20:02.998 "name": "BaseBdev4", 00:20:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.998 "is_configured": false, 00:20:02.998 "data_offset": 0, 00:20:02.998 "data_size": 0 00:20:02.998 } 00:20:02.998 ] 00:20:02.998 }' 00:20:02.998 00:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.998 00:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.256 00:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:03.514 [2024-07-25 00:03:59.315576] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:03.514 [2024-07-25 00:03:59.315904] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:20:03.514 00:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:03.773 [2024-07-25 00:03:59.527646] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:03.773 [2024-07-25 00:03:59.527725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:03.773 [2024-07-25 00:03:59.527740] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:03.773 [2024-07-25 00:03:59.527755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:03.773 [2024-07-25 00:03:59.527763] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:03.773 [2024-07-25 00:03:59.527775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:03.773 [2024-07-25 00:03:59.527783] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:03.773 [2024-07-25 00:03:59.527794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:03.773 00:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:04.031 [2024-07-25 00:03:59.829746] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.031 BaseBdev1 00:20:04.031 00:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:04.031 00:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:04.031 00:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:04.031 00:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:04.031 00:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:04.031 00:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:04.031 00:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:04.289 00:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:04.548 [ 00:20:04.548 { 00:20:04.548 "name": "BaseBdev1", 00:20:04.548 "aliases": [ 00:20:04.548 "2746d218-bb6b-47e5-95b6-1fa8dd69cb91" 00:20:04.548 ], 00:20:04.548 "product_name": "Malloc disk", 00:20:04.548 "block_size": 512, 00:20:04.548 "num_blocks": 65536, 00:20:04.548 "uuid": "2746d218-bb6b-47e5-95b6-1fa8dd69cb91", 00:20:04.548 "assigned_rate_limits": { 00:20:04.548 "rw_ios_per_sec": 0, 00:20:04.548 "rw_mbytes_per_sec": 0, 00:20:04.548 "r_mbytes_per_sec": 0, 00:20:04.548 "w_mbytes_per_sec": 0 00:20:04.548 }, 00:20:04.548 "claimed": true, 00:20:04.548 "claim_type": "exclusive_write", 00:20:04.548 "zoned": false, 00:20:04.548 "supported_io_types": { 00:20:04.548 "read": true, 00:20:04.548 "write": true, 00:20:04.548 "unmap": true, 00:20:04.548 "flush": true, 00:20:04.548 "reset": true, 00:20:04.548 "nvme_admin": false, 00:20:04.548 "nvme_io": false, 00:20:04.548 "nvme_io_md": false, 00:20:04.548 "write_zeroes": true, 00:20:04.548 "zcopy": true, 00:20:04.548 "get_zone_info": false, 00:20:04.548 "zone_management": false, 00:20:04.548 "zone_append": false, 00:20:04.548 "compare": false, 00:20:04.548 "compare_and_write": false, 00:20:04.548 "abort": true, 00:20:04.548 "seek_hole": false, 00:20:04.548 "seek_data": false, 00:20:04.548 "copy": true, 00:20:04.548 "nvme_iov_md": false 00:20:04.548 }, 00:20:04.548 "memory_domains": [ 00:20:04.548 { 00:20:04.548 "dma_device_id": "system", 00:20:04.549 "dma_device_type": 1 00:20:04.549 }, 00:20:04.549 { 00:20:04.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.549 "dma_device_type": 2 00:20:04.549 } 00:20:04.549 ], 00:20:04.549 "driver_specific": {} 00:20:04.549 } 00:20:04.549 ] 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.549 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.807 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.807 "name": "Existed_Raid", 00:20:04.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.807 "strip_size_kb": 64, 00:20:04.807 "state": "configuring", 00:20:04.807 "raid_level": "raid0", 00:20:04.807 "superblock": false, 00:20:04.807 "num_base_bdevs": 4, 00:20:04.807 "num_base_bdevs_discovered": 1, 00:20:04.807 "num_base_bdevs_operational": 4, 00:20:04.807 "base_bdevs_list": [ 00:20:04.807 { 00:20:04.807 "name": "BaseBdev1", 00:20:04.807 "uuid": "2746d218-bb6b-47e5-95b6-1fa8dd69cb91", 00:20:04.807 "is_configured": true, 00:20:04.807 "data_offset": 0, 00:20:04.807 "data_size": 65536 00:20:04.807 }, 00:20:04.807 { 00:20:04.807 "name": "BaseBdev2", 00:20:04.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.807 "is_configured": false, 00:20:04.807 "data_offset": 0, 00:20:04.807 "data_size": 0 00:20:04.807 }, 00:20:04.807 { 00:20:04.807 "name": "BaseBdev3", 00:20:04.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.807 "is_configured": false, 00:20:04.807 "data_offset": 0, 00:20:04.807 "data_size": 0 00:20:04.807 }, 00:20:04.807 { 00:20:04.807 "name": "BaseBdev4", 00:20:04.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.807 "is_configured": false, 00:20:04.807 "data_offset": 0, 00:20:04.807 "data_size": 0 00:20:04.807 } 00:20:04.807 ] 00:20:04.807 }' 00:20:04.807 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.807 00:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.066 00:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:05.324 [2024-07-25 00:04:01.126254] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:05.324 [2024-07-25 00:04:01.126321] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:20:05.324 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:05.582 [2024-07-25 00:04:01.354391] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:05.582 [2024-07-25 00:04:01.356685] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:20:05.582 [2024-07-25 00:04:01.356743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:05.582 [2024-07-25 00:04:01.356759] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:05.582 [2024-07-25 00:04:01.356775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:05.582 [2024-07-25 00:04:01.356785] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:05.582 [2024-07-25 00:04:01.356813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.582 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.840 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:05.840 "name": "Existed_Raid", 00:20:05.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.840 "strip_size_kb": 64, 00:20:05.840 "state": "configuring", 00:20:05.840 "raid_level": "raid0", 00:20:05.840 "superblock": false, 00:20:05.840 "num_base_bdevs": 4, 00:20:05.840 "num_base_bdevs_discovered": 1, 00:20:05.840 "num_base_bdevs_operational": 4, 00:20:05.840 "base_bdevs_list": [ 00:20:05.840 { 00:20:05.840 "name": "BaseBdev1", 00:20:05.840 "uuid": "2746d218-bb6b-47e5-95b6-1fa8dd69cb91", 00:20:05.840 "is_configured": true, 00:20:05.840 "data_offset": 0, 00:20:05.840 "data_size": 65536 00:20:05.840 }, 00:20:05.840 { 00:20:05.841 "name": "BaseBdev2", 00:20:05.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.841 "is_configured": false, 00:20:05.841 "data_offset": 0, 00:20:05.841 "data_size": 0 00:20:05.841 }, 00:20:05.841 { 00:20:05.841 "name": "BaseBdev3", 00:20:05.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.841 "is_configured": false, 00:20:05.841 "data_offset": 0, 00:20:05.841 "data_size": 0 00:20:05.841 }, 
00:20:05.841 { 00:20:05.841 "name": "BaseBdev4", 00:20:05.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.841 "is_configured": false, 00:20:05.841 "data_offset": 0, 00:20:05.841 "data_size": 0 00:20:05.841 } 00:20:05.841 ] 00:20:05.841 }' 00:20:05.841 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:05.841 00:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.099 00:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:06.358 [2024-07-25 00:04:02.217015] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:06.358 BaseBdev2 00:20:06.617 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:06.617 00:04:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:06.617 00:04:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:06.617 00:04:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:06.617 00:04:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:06.617 00:04:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:06.617 00:04:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:06.617 00:04:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:06.876 [ 00:20:06.876 { 00:20:06.876 "name": "BaseBdev2", 00:20:06.876 "aliases": [ 00:20:06.876 "8bbee5ea-32b9-4386-aa49-1549fa324f60" 00:20:06.876 ], 00:20:06.876 "product_name": "Malloc disk", 00:20:06.876 "block_size": 512, 00:20:06.876 "num_blocks": 65536, 00:20:06.876 "uuid": "8bbee5ea-32b9-4386-aa49-1549fa324f60", 00:20:06.876 "assigned_rate_limits": { 00:20:06.876 "rw_ios_per_sec": 0, 00:20:06.876 "rw_mbytes_per_sec": 0, 00:20:06.876 "r_mbytes_per_sec": 0, 00:20:06.876 "w_mbytes_per_sec": 0 00:20:06.876 }, 00:20:06.876 "claimed": true, 00:20:06.876 "claim_type": "exclusive_write", 00:20:06.876 "zoned": false, 00:20:06.876 "supported_io_types": { 00:20:06.876 "read": true, 00:20:06.876 "write": true, 00:20:06.876 "unmap": true, 00:20:06.876 "flush": true, 00:20:06.876 "reset": true, 00:20:06.876 "nvme_admin": false, 00:20:06.876 "nvme_io": false, 00:20:06.876 "nvme_io_md": false, 00:20:06.876 "write_zeroes": true, 00:20:06.876 "zcopy": true, 00:20:06.876 "get_zone_info": false, 00:20:06.876 "zone_management": false, 00:20:06.876 "zone_append": false, 00:20:06.876 "compare": false, 00:20:06.876 "compare_and_write": false, 00:20:06.876 "abort": true, 00:20:06.876 "seek_hole": false, 00:20:06.876 "seek_data": false, 00:20:06.876 "copy": true, 00:20:06.876 "nvme_iov_md": false 00:20:06.876 }, 00:20:06.876 "memory_domains": [ 00:20:06.876 { 00:20:06.876 "dma_device_id": "system", 00:20:06.876 "dma_device_type": 1 00:20:06.876 }, 00:20:06.876 { 00:20:06.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.876 "dma_device_type": 2 00:20:06.876 } 00:20:06.876 ], 00:20:06.876 "driver_specific": {} 00:20:06.876 } 00:20:06.876 ] 00:20:06.876 00:04:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.876 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.135 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.135 "name": "Existed_Raid", 00:20:07.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.135 "strip_size_kb": 64, 00:20:07.135 "state": "configuring", 00:20:07.135 "raid_level": "raid0", 00:20:07.135 "superblock": false, 00:20:07.135 "num_base_bdevs": 4, 00:20:07.135 "num_base_bdevs_discovered": 2, 00:20:07.135 "num_base_bdevs_operational": 4, 00:20:07.135 "base_bdevs_list": [ 00:20:07.135 { 00:20:07.135 "name": "BaseBdev1", 00:20:07.135 "uuid": "2746d218-bb6b-47e5-95b6-1fa8dd69cb91", 00:20:07.135 "is_configured": true, 00:20:07.135 "data_offset": 0, 00:20:07.135 "data_size": 65536 00:20:07.135 }, 00:20:07.135 { 00:20:07.135 "name": "BaseBdev2", 00:20:07.135 "uuid": "8bbee5ea-32b9-4386-aa49-1549fa324f60", 00:20:07.135 "is_configured": true, 00:20:07.135 "data_offset": 0, 00:20:07.135 "data_size": 65536 00:20:07.135 }, 00:20:07.135 { 00:20:07.135 "name": "BaseBdev3", 00:20:07.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.135 "is_configured": false, 00:20:07.135 "data_offset": 0, 00:20:07.135 "data_size": 0 00:20:07.135 }, 00:20:07.135 { 00:20:07.135 "name": "BaseBdev4", 00:20:07.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.135 "is_configured": false, 00:20:07.135 "data_offset": 0, 00:20:07.135 "data_size": 0 00:20:07.135 } 00:20:07.135 ] 00:20:07.135 }' 00:20:07.135 00:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.135 00:04:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.705 00:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:20:07.705 [2024-07-25 00:04:03.551818] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:07.705 BaseBdev3 00:20:07.964 00:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:07.964 00:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:07.964 00:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:07.964 00:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:07.964 00:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:07.964 00:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:07.964 00:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:07.964 00:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:08.222 [ 00:20:08.222 { 00:20:08.222 "name": "BaseBdev3", 00:20:08.222 "aliases": [ 00:20:08.222 "1f43d1da-b614-4849-aad7-ebfd1e5dea36" 00:20:08.222 ], 00:20:08.222 "product_name": "Malloc disk", 00:20:08.222 "block_size": 512, 00:20:08.222 "num_blocks": 65536, 00:20:08.222 "uuid": "1f43d1da-b614-4849-aad7-ebfd1e5dea36", 00:20:08.222 "assigned_rate_limits": { 00:20:08.222 "rw_ios_per_sec": 0, 00:20:08.222 "rw_mbytes_per_sec": 0, 00:20:08.222 "r_mbytes_per_sec": 0, 00:20:08.222 "w_mbytes_per_sec": 0 00:20:08.222 }, 00:20:08.222 "claimed": true, 00:20:08.222 "claim_type": "exclusive_write", 00:20:08.222 "zoned": false, 00:20:08.222 "supported_io_types": { 00:20:08.222 "read": true, 00:20:08.222 "write": true, 00:20:08.222 "unmap": true, 00:20:08.222 "flush": true, 00:20:08.222 "reset": true, 00:20:08.222 "nvme_admin": false, 00:20:08.222 "nvme_io": false, 00:20:08.222 "nvme_io_md": false, 00:20:08.223 "write_zeroes": true, 00:20:08.223 "zcopy": true, 00:20:08.223 "get_zone_info": false, 00:20:08.223 "zone_management": false, 00:20:08.223 "zone_append": false, 00:20:08.223 "compare": false, 00:20:08.223 "compare_and_write": false, 00:20:08.223 "abort": true, 00:20:08.223 "seek_hole": false, 00:20:08.223 "seek_data": false, 00:20:08.223 "copy": true, 00:20:08.223 "nvme_iov_md": false 00:20:08.223 }, 00:20:08.223 "memory_domains": [ 00:20:08.223 { 00:20:08.223 "dma_device_id": "system", 00:20:08.223 "dma_device_type": 1 00:20:08.223 }, 00:20:08.223 { 00:20:08.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.223 "dma_device_type": 2 00:20:08.223 } 00:20:08.223 ], 00:20:08.223 "driver_specific": {} 00:20:08.223 } 00:20:08.223 ] 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.223 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.481 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:08.481 "name": "Existed_Raid", 00:20:08.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.481 "strip_size_kb": 64, 00:20:08.481 "state": "configuring", 00:20:08.481 "raid_level": "raid0", 00:20:08.481 "superblock": false, 00:20:08.481 "num_base_bdevs": 4, 00:20:08.481 "num_base_bdevs_discovered": 3, 00:20:08.481 "num_base_bdevs_operational": 4, 00:20:08.481 "base_bdevs_list": [ 00:20:08.481 { 00:20:08.481 "name": "BaseBdev1", 00:20:08.481 "uuid": "2746d218-bb6b-47e5-95b6-1fa8dd69cb91", 00:20:08.481 "is_configured": true, 00:20:08.481 "data_offset": 0, 00:20:08.481 "data_size": 65536 00:20:08.481 }, 00:20:08.481 { 00:20:08.481 "name": "BaseBdev2", 00:20:08.481 "uuid": "8bbee5ea-32b9-4386-aa49-1549fa324f60", 00:20:08.481 "is_configured": true, 00:20:08.481 "data_offset": 0, 00:20:08.481 "data_size": 65536 00:20:08.481 }, 00:20:08.481 { 00:20:08.481 "name": "BaseBdev3", 00:20:08.481 "uuid": "1f43d1da-b614-4849-aad7-ebfd1e5dea36", 00:20:08.481 "is_configured": true, 00:20:08.481 "data_offset": 0, 00:20:08.481 "data_size": 65536 00:20:08.481 }, 00:20:08.481 { 00:20:08.481 "name": "BaseBdev4", 00:20:08.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.481 "is_configured": false, 00:20:08.481 "data_offset": 0, 00:20:08.481 "data_size": 0 00:20:08.481 } 00:20:08.481 ] 00:20:08.481 }' 00:20:08.481 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:08.481 00:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.740 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:09.307 [2024-07-25 00:04:04.874091] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:09.307 [2024-07-25 00:04:04.874391] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:20:09.307 [2024-07-25 00:04:04.874447] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:20:09.307 [2024-07-25 00:04:04.874771] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:20:09.307 [2024-07-25 00:04:04.875266] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:20:09.307 [2024-07-25 00:04:04.875452] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:20:09.307 [2024-07-25 00:04:04.875964] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.307 BaseBdev4 00:20:09.307 00:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:09.307 00:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:09.307 00:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:09.307 00:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:09.307 00:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:09.307 00:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:09.307 00:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:09.307 00:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:09.565 [ 00:20:09.565 { 00:20:09.565 "name": "BaseBdev4", 00:20:09.565 "aliases": [ 00:20:09.565 "7c61d1e4-7b11-46d6-b948-53cb6f9593a4" 00:20:09.565 ], 00:20:09.565 "product_name": "Malloc disk", 00:20:09.565 "block_size": 512, 00:20:09.566 "num_blocks": 65536, 00:20:09.566 "uuid": "7c61d1e4-7b11-46d6-b948-53cb6f9593a4", 00:20:09.566 "assigned_rate_limits": { 00:20:09.566 "rw_ios_per_sec": 0, 00:20:09.566 "rw_mbytes_per_sec": 0, 00:20:09.566 "r_mbytes_per_sec": 0, 00:20:09.566 "w_mbytes_per_sec": 0 00:20:09.566 }, 00:20:09.566 "claimed": true, 00:20:09.566 "claim_type": "exclusive_write", 00:20:09.566 "zoned": false, 00:20:09.566 "supported_io_types": { 00:20:09.566 "read": true, 00:20:09.566 "write": true, 00:20:09.566 "unmap": true, 00:20:09.566 "flush": true, 00:20:09.566 "reset": true, 00:20:09.566 "nvme_admin": false, 00:20:09.566 "nvme_io": false, 00:20:09.566 "nvme_io_md": false, 00:20:09.566 "write_zeroes": true, 00:20:09.566 "zcopy": true, 00:20:09.566 "get_zone_info": false, 00:20:09.566 "zone_management": false, 00:20:09.566 "zone_append": false, 00:20:09.566 "compare": false, 00:20:09.566 "compare_and_write": false, 00:20:09.566 "abort": true, 00:20:09.566 "seek_hole": false, 00:20:09.566 "seek_data": false, 00:20:09.566 "copy": true, 00:20:09.566 "nvme_iov_md": false 00:20:09.566 }, 00:20:09.566 "memory_domains": [ 00:20:09.566 { 00:20:09.566 "dma_device_id": "system", 00:20:09.566 "dma_device_type": 1 00:20:09.566 }, 00:20:09.566 { 00:20:09.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.566 "dma_device_type": 2 00:20:09.566 } 00:20:09.566 ], 00:20:09.566 "driver_specific": {} 00:20:09.566 } 00:20:09.566 ] 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 
4 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.566 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.825 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:09.825 "name": "Existed_Raid", 00:20:09.825 "uuid": "8a01bc13-e5f9-4eda-8d61-27ea149b3f40", 00:20:09.825 "strip_size_kb": 64, 00:20:09.825 "state": "online", 00:20:09.825 "raid_level": "raid0", 00:20:09.825 "superblock": false, 00:20:09.825 "num_base_bdevs": 4, 00:20:09.825 "num_base_bdevs_discovered": 4, 00:20:09.825 "num_base_bdevs_operational": 4, 00:20:09.825 "base_bdevs_list": [ 00:20:09.825 { 00:20:09.825 "name": "BaseBdev1", 00:20:09.825 "uuid": "2746d218-bb6b-47e5-95b6-1fa8dd69cb91", 00:20:09.825 "is_configured": true, 00:20:09.825 "data_offset": 0, 00:20:09.825 "data_size": 65536 00:20:09.825 }, 00:20:09.825 { 00:20:09.825 "name": "BaseBdev2", 00:20:09.825 "uuid": "8bbee5ea-32b9-4386-aa49-1549fa324f60", 00:20:09.825 "is_configured": true, 00:20:09.825 "data_offset": 0, 00:20:09.825 "data_size": 65536 00:20:09.825 }, 00:20:09.825 { 00:20:09.825 "name": "BaseBdev3", 00:20:09.825 "uuid": "1f43d1da-b614-4849-aad7-ebfd1e5dea36", 00:20:09.825 "is_configured": true, 00:20:09.825 "data_offset": 0, 00:20:09.825 "data_size": 65536 00:20:09.825 }, 00:20:09.825 { 00:20:09.825 "name": "BaseBdev4", 00:20:09.825 "uuid": "7c61d1e4-7b11-46d6-b948-53cb6f9593a4", 00:20:09.825 "is_configured": true, 00:20:09.825 "data_offset": 0, 00:20:09.825 "data_size": 65536 00:20:09.825 } 00:20:09.825 ] 00:20:09.825 }' 00:20:09.825 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:09.825 00:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.084 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:10.084 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:10.084 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:10.084 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:10.084 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:10.084 00:04:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:10.084 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:10.084 00:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:10.342 [2024-07-25 00:04:06.167189] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.342 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:10.342 "name": "Existed_Raid", 00:20:10.342 "aliases": [ 00:20:10.342 "8a01bc13-e5f9-4eda-8d61-27ea149b3f40" 00:20:10.342 ], 00:20:10.342 "product_name": "Raid Volume", 00:20:10.342 "block_size": 512, 00:20:10.342 "num_blocks": 262144, 00:20:10.342 "uuid": "8a01bc13-e5f9-4eda-8d61-27ea149b3f40", 00:20:10.342 "assigned_rate_limits": { 00:20:10.342 "rw_ios_per_sec": 0, 00:20:10.342 "rw_mbytes_per_sec": 0, 00:20:10.342 "r_mbytes_per_sec": 0, 00:20:10.342 "w_mbytes_per_sec": 0 00:20:10.342 }, 00:20:10.342 "claimed": false, 00:20:10.342 "zoned": false, 00:20:10.342 "supported_io_types": { 00:20:10.342 "read": true, 00:20:10.342 "write": true, 00:20:10.342 "unmap": true, 00:20:10.342 "flush": true, 00:20:10.342 "reset": true, 00:20:10.342 "nvme_admin": false, 00:20:10.342 "nvme_io": false, 00:20:10.342 "nvme_io_md": false, 00:20:10.342 "write_zeroes": true, 00:20:10.342 "zcopy": false, 00:20:10.342 "get_zone_info": false, 00:20:10.342 "zone_management": false, 00:20:10.342 "zone_append": false, 00:20:10.342 "compare": false, 00:20:10.342 "compare_and_write": false, 00:20:10.342 "abort": false, 00:20:10.342 "seek_hole": false, 00:20:10.342 "seek_data": false, 00:20:10.342 "copy": false, 00:20:10.342 "nvme_iov_md": false 00:20:10.342 }, 00:20:10.342 "memory_domains": [ 00:20:10.342 { 00:20:10.342 "dma_device_id": "system", 00:20:10.342 "dma_device_type": 1 00:20:10.342 }, 00:20:10.342 { 00:20:10.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.342 "dma_device_type": 2 00:20:10.342 }, 00:20:10.342 { 00:20:10.343 "dma_device_id": "system", 00:20:10.343 "dma_device_type": 1 00:20:10.343 }, 00:20:10.343 { 00:20:10.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.343 "dma_device_type": 2 00:20:10.343 }, 00:20:10.343 { 00:20:10.343 "dma_device_id": "system", 00:20:10.343 "dma_device_type": 1 00:20:10.343 }, 00:20:10.343 { 00:20:10.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.343 "dma_device_type": 2 00:20:10.343 }, 00:20:10.343 { 00:20:10.343 "dma_device_id": "system", 00:20:10.343 "dma_device_type": 1 00:20:10.343 }, 00:20:10.343 { 00:20:10.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.343 "dma_device_type": 2 00:20:10.343 } 00:20:10.343 ], 00:20:10.343 "driver_specific": { 00:20:10.343 "raid": { 00:20:10.343 "uuid": "8a01bc13-e5f9-4eda-8d61-27ea149b3f40", 00:20:10.343 "strip_size_kb": 64, 00:20:10.343 "state": "online", 00:20:10.343 "raid_level": "raid0", 00:20:10.343 "superblock": false, 00:20:10.343 "num_base_bdevs": 4, 00:20:10.343 "num_base_bdevs_discovered": 4, 00:20:10.343 "num_base_bdevs_operational": 4, 00:20:10.343 "base_bdevs_list": [ 00:20:10.343 { 00:20:10.343 "name": "BaseBdev1", 00:20:10.343 "uuid": "2746d218-bb6b-47e5-95b6-1fa8dd69cb91", 00:20:10.343 "is_configured": true, 00:20:10.343 "data_offset": 0, 00:20:10.343 "data_size": 65536 00:20:10.343 }, 00:20:10.343 { 00:20:10.343 "name": "BaseBdev2", 00:20:10.343 "uuid": "8bbee5ea-32b9-4386-aa49-1549fa324f60", 00:20:10.343 
"is_configured": true, 00:20:10.343 "data_offset": 0, 00:20:10.343 "data_size": 65536 00:20:10.343 }, 00:20:10.343 { 00:20:10.343 "name": "BaseBdev3", 00:20:10.343 "uuid": "1f43d1da-b614-4849-aad7-ebfd1e5dea36", 00:20:10.343 "is_configured": true, 00:20:10.343 "data_offset": 0, 00:20:10.343 "data_size": 65536 00:20:10.343 }, 00:20:10.343 { 00:20:10.343 "name": "BaseBdev4", 00:20:10.343 "uuid": "7c61d1e4-7b11-46d6-b948-53cb6f9593a4", 00:20:10.343 "is_configured": true, 00:20:10.343 "data_offset": 0, 00:20:10.343 "data_size": 65536 00:20:10.343 } 00:20:10.343 ] 00:20:10.343 } 00:20:10.343 } 00:20:10.343 }' 00:20:10.343 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:10.343 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:10.343 BaseBdev2 00:20:10.343 BaseBdev3 00:20:10.343 BaseBdev4' 00:20:10.343 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:10.343 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:10.343 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:10.920 "name": "BaseBdev1", 00:20:10.920 "aliases": [ 00:20:10.920 "2746d218-bb6b-47e5-95b6-1fa8dd69cb91" 00:20:10.920 ], 00:20:10.920 "product_name": "Malloc disk", 00:20:10.920 "block_size": 512, 00:20:10.920 "num_blocks": 65536, 00:20:10.920 "uuid": "2746d218-bb6b-47e5-95b6-1fa8dd69cb91", 00:20:10.920 "assigned_rate_limits": { 00:20:10.920 "rw_ios_per_sec": 0, 00:20:10.920 "rw_mbytes_per_sec": 0, 00:20:10.920 "r_mbytes_per_sec": 0, 00:20:10.920 "w_mbytes_per_sec": 0 00:20:10.920 }, 00:20:10.920 "claimed": true, 00:20:10.920 "claim_type": "exclusive_write", 00:20:10.920 "zoned": false, 00:20:10.920 "supported_io_types": { 00:20:10.920 "read": true, 00:20:10.920 "write": true, 00:20:10.920 "unmap": true, 00:20:10.920 "flush": true, 00:20:10.920 "reset": true, 00:20:10.920 "nvme_admin": false, 00:20:10.920 "nvme_io": false, 00:20:10.920 "nvme_io_md": false, 00:20:10.920 "write_zeroes": true, 00:20:10.920 "zcopy": true, 00:20:10.920 "get_zone_info": false, 00:20:10.920 "zone_management": false, 00:20:10.920 "zone_append": false, 00:20:10.920 "compare": false, 00:20:10.920 "compare_and_write": false, 00:20:10.920 "abort": true, 00:20:10.920 "seek_hole": false, 00:20:10.920 "seek_data": false, 00:20:10.920 "copy": true, 00:20:10.920 "nvme_iov_md": false 00:20:10.920 }, 00:20:10.920 "memory_domains": [ 00:20:10.920 { 00:20:10.920 "dma_device_id": "system", 00:20:10.920 "dma_device_type": 1 00:20:10.920 }, 00:20:10.920 { 00:20:10.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.920 "dma_device_type": 2 00:20:10.920 } 00:20:10.920 ], 00:20:10.920 "driver_specific": {} 00:20:10.920 }' 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:10.920 "name": "BaseBdev2", 00:20:10.920 "aliases": [ 00:20:10.920 "8bbee5ea-32b9-4386-aa49-1549fa324f60" 00:20:10.920 ], 00:20:10.920 "product_name": "Malloc disk", 00:20:10.920 "block_size": 512, 00:20:10.920 "num_blocks": 65536, 00:20:10.920 "uuid": "8bbee5ea-32b9-4386-aa49-1549fa324f60", 00:20:10.920 "assigned_rate_limits": { 00:20:10.920 "rw_ios_per_sec": 0, 00:20:10.920 "rw_mbytes_per_sec": 0, 00:20:10.920 "r_mbytes_per_sec": 0, 00:20:10.920 "w_mbytes_per_sec": 0 00:20:10.920 }, 00:20:10.920 "claimed": true, 00:20:10.920 "claim_type": "exclusive_write", 00:20:10.920 "zoned": false, 00:20:10.920 "supported_io_types": { 00:20:10.920 "read": true, 00:20:10.920 "write": true, 00:20:10.920 "unmap": true, 00:20:10.920 "flush": true, 00:20:10.920 "reset": true, 00:20:10.920 "nvme_admin": false, 00:20:10.920 "nvme_io": false, 00:20:10.920 "nvme_io_md": false, 00:20:10.920 "write_zeroes": true, 00:20:10.920 "zcopy": true, 00:20:10.920 "get_zone_info": false, 00:20:10.920 "zone_management": false, 00:20:10.920 "zone_append": false, 00:20:10.920 "compare": false, 00:20:10.920 "compare_and_write": false, 00:20:10.920 "abort": true, 00:20:10.920 "seek_hole": false, 00:20:10.920 "seek_data": false, 00:20:10.920 "copy": true, 00:20:10.920 "nvme_iov_md": false 00:20:10.920 }, 00:20:10.920 "memory_domains": [ 00:20:10.920 { 00:20:10.920 "dma_device_id": "system", 00:20:10.920 "dma_device_type": 1 00:20:10.920 }, 00:20:10.920 { 00:20:10.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.920 "dma_device_type": 2 00:20:10.920 } 00:20:10.920 ], 00:20:10.920 "driver_specific": {} 00:20:10.920 }' 00:20:10.920 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:11.178 00:04:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:11.178 00:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:11.435 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:11.435 "name": "BaseBdev3", 00:20:11.435 "aliases": [ 00:20:11.435 "1f43d1da-b614-4849-aad7-ebfd1e5dea36" 00:20:11.435 ], 00:20:11.435 "product_name": "Malloc disk", 00:20:11.435 "block_size": 512, 00:20:11.435 "num_blocks": 65536, 00:20:11.435 "uuid": "1f43d1da-b614-4849-aad7-ebfd1e5dea36", 00:20:11.435 "assigned_rate_limits": { 00:20:11.435 "rw_ios_per_sec": 0, 00:20:11.435 "rw_mbytes_per_sec": 0, 00:20:11.435 "r_mbytes_per_sec": 0, 00:20:11.435 "w_mbytes_per_sec": 0 00:20:11.435 }, 00:20:11.435 "claimed": true, 00:20:11.435 "claim_type": "exclusive_write", 00:20:11.435 "zoned": false, 00:20:11.435 "supported_io_types": { 00:20:11.435 "read": true, 00:20:11.435 "write": true, 00:20:11.435 "unmap": true, 00:20:11.436 "flush": true, 00:20:11.436 "reset": true, 00:20:11.436 "nvme_admin": false, 00:20:11.436 "nvme_io": false, 00:20:11.436 "nvme_io_md": false, 00:20:11.436 "write_zeroes": true, 00:20:11.436 "zcopy": true, 00:20:11.436 "get_zone_info": false, 00:20:11.436 "zone_management": false, 00:20:11.436 "zone_append": false, 00:20:11.436 "compare": false, 00:20:11.436 "compare_and_write": false, 00:20:11.436 "abort": true, 00:20:11.436 "seek_hole": false, 00:20:11.436 "seek_data": false, 00:20:11.436 "copy": true, 00:20:11.436 "nvme_iov_md": false 00:20:11.436 }, 00:20:11.436 "memory_domains": [ 00:20:11.436 { 00:20:11.436 "dma_device_id": "system", 00:20:11.436 "dma_device_type": 1 00:20:11.436 }, 00:20:11.436 { 00:20:11.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.436 "dma_device_type": 2 00:20:11.436 } 00:20:11.436 ], 00:20:11.436 "driver_specific": {} 00:20:11.436 }' 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.436 
00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:11.436 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:11.694 "name": "BaseBdev4", 00:20:11.694 "aliases": [ 00:20:11.694 "7c61d1e4-7b11-46d6-b948-53cb6f9593a4" 00:20:11.694 ], 00:20:11.694 "product_name": "Malloc disk", 00:20:11.694 "block_size": 512, 00:20:11.694 "num_blocks": 65536, 00:20:11.694 "uuid": "7c61d1e4-7b11-46d6-b948-53cb6f9593a4", 00:20:11.694 "assigned_rate_limits": { 00:20:11.694 "rw_ios_per_sec": 0, 00:20:11.694 "rw_mbytes_per_sec": 0, 00:20:11.694 "r_mbytes_per_sec": 0, 00:20:11.694 "w_mbytes_per_sec": 0 00:20:11.694 }, 00:20:11.694 "claimed": true, 00:20:11.694 "claim_type": "exclusive_write", 00:20:11.694 "zoned": false, 00:20:11.694 "supported_io_types": { 00:20:11.694 "read": true, 00:20:11.694 "write": true, 00:20:11.694 "unmap": true, 00:20:11.694 "flush": true, 00:20:11.694 "reset": true, 00:20:11.694 "nvme_admin": false, 00:20:11.694 "nvme_io": false, 00:20:11.694 "nvme_io_md": false, 00:20:11.694 "write_zeroes": true, 00:20:11.694 "zcopy": true, 00:20:11.694 "get_zone_info": false, 00:20:11.694 "zone_management": false, 00:20:11.694 "zone_append": false, 00:20:11.694 "compare": false, 00:20:11.694 "compare_and_write": false, 00:20:11.694 "abort": true, 00:20:11.694 "seek_hole": false, 00:20:11.694 "seek_data": false, 00:20:11.694 "copy": true, 00:20:11.694 "nvme_iov_md": false 00:20:11.694 }, 00:20:11.694 "memory_domains": [ 00:20:11.694 { 00:20:11.694 "dma_device_id": "system", 00:20:11.694 "dma_device_type": 1 00:20:11.694 }, 00:20:11.694 { 00:20:11.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.694 "dma_device_type": 2 00:20:11.694 } 00:20:11.694 ], 00:20:11.694 "driver_specific": {} 00:20:11.694 }' 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.694 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:11.952 [2024-07-25 00:04:07.743345] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:11.953 [2024-07-25 00:04:07.743689] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.953 [2024-07-25 00:04:07.743785] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.211 00:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.470 00:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:12.470 "name": "Existed_Raid", 00:20:12.470 "uuid": "8a01bc13-e5f9-4eda-8d61-27ea149b3f40", 00:20:12.470 "strip_size_kb": 64, 00:20:12.470 "state": "offline", 00:20:12.470 "raid_level": "raid0", 00:20:12.470 "superblock": false, 00:20:12.470 "num_base_bdevs": 4, 00:20:12.470 "num_base_bdevs_discovered": 3, 00:20:12.470 "num_base_bdevs_operational": 3, 00:20:12.470 "base_bdevs_list": [ 00:20:12.470 { 00:20:12.470 "name": null, 00:20:12.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.470 "is_configured": false, 00:20:12.470 "data_offset": 0, 00:20:12.470 "data_size": 65536 00:20:12.470 }, 00:20:12.470 { 00:20:12.470 "name": "BaseBdev2", 00:20:12.470 "uuid": 
"8bbee5ea-32b9-4386-aa49-1549fa324f60", 00:20:12.470 "is_configured": true, 00:20:12.470 "data_offset": 0, 00:20:12.470 "data_size": 65536 00:20:12.470 }, 00:20:12.470 { 00:20:12.470 "name": "BaseBdev3", 00:20:12.470 "uuid": "1f43d1da-b614-4849-aad7-ebfd1e5dea36", 00:20:12.470 "is_configured": true, 00:20:12.470 "data_offset": 0, 00:20:12.470 "data_size": 65536 00:20:12.470 }, 00:20:12.470 { 00:20:12.470 "name": "BaseBdev4", 00:20:12.470 "uuid": "7c61d1e4-7b11-46d6-b948-53cb6f9593a4", 00:20:12.470 "is_configured": true, 00:20:12.470 "data_offset": 0, 00:20:12.470 "data_size": 65536 00:20:12.470 } 00:20:12.470 ] 00:20:12.470 }' 00:20:12.470 00:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:12.470 00:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.729 00:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:12.729 00:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:12.729 00:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.729 00:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:12.987 00:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:12.987 00:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:12.987 00:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:13.246 [2024-07-25 00:04:08.928772] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:13.246 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:13.246 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:13.246 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.246 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:13.504 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:13.504 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.504 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:13.762 [2024-07-25 00:04:09.500631] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:13.762 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:13.762 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:13.762 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.762 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:14.021 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:14.021 00:04:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:14.021 00:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:14.279 [2024-07-25 00:04:10.042632] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:14.279 [2024-07-25 00:04:10.042696] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:20:14.279 00:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:14.279 00:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:14.279 00:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.279 00:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:14.537 00:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:14.537 00:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:14.538 00:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:14.538 00:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:14.538 00:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:14.538 00:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:14.796 BaseBdev2 00:20:15.054 00:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:15.054 00:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:15.054 00:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:15.054 00:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:15.054 00:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:15.054 00:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:15.054 00:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:15.054 00:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:15.313 [ 00:20:15.313 { 00:20:15.313 "name": "BaseBdev2", 00:20:15.313 "aliases": [ 00:20:15.313 "e9e0a4f3-6151-4d1f-a069-9912432092cd" 00:20:15.313 ], 00:20:15.313 "product_name": "Malloc disk", 00:20:15.313 "block_size": 512, 00:20:15.313 "num_blocks": 65536, 00:20:15.313 "uuid": "e9e0a4f3-6151-4d1f-a069-9912432092cd", 00:20:15.313 "assigned_rate_limits": { 00:20:15.313 "rw_ios_per_sec": 0, 00:20:15.313 "rw_mbytes_per_sec": 0, 00:20:15.313 "r_mbytes_per_sec": 0, 00:20:15.313 "w_mbytes_per_sec": 0 00:20:15.313 }, 00:20:15.313 "claimed": false, 00:20:15.313 "zoned": false, 00:20:15.313 "supported_io_types": { 00:20:15.313 "read": true, 00:20:15.313 "write": true, 00:20:15.313 "unmap": 
true, 00:20:15.313 "flush": true, 00:20:15.313 "reset": true, 00:20:15.313 "nvme_admin": false, 00:20:15.313 "nvme_io": false, 00:20:15.313 "nvme_io_md": false, 00:20:15.313 "write_zeroes": true, 00:20:15.313 "zcopy": true, 00:20:15.313 "get_zone_info": false, 00:20:15.313 "zone_management": false, 00:20:15.313 "zone_append": false, 00:20:15.313 "compare": false, 00:20:15.313 "compare_and_write": false, 00:20:15.313 "abort": true, 00:20:15.313 "seek_hole": false, 00:20:15.313 "seek_data": false, 00:20:15.313 "copy": true, 00:20:15.313 "nvme_iov_md": false 00:20:15.313 }, 00:20:15.313 "memory_domains": [ 00:20:15.313 { 00:20:15.313 "dma_device_id": "system", 00:20:15.313 "dma_device_type": 1 00:20:15.313 }, 00:20:15.313 { 00:20:15.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.313 "dma_device_type": 2 00:20:15.313 } 00:20:15.313 ], 00:20:15.313 "driver_specific": {} 00:20:15.313 } 00:20:15.313 ] 00:20:15.313 00:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:15.313 00:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:15.313 00:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:15.313 00:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:15.571 BaseBdev3 00:20:15.571 00:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:15.571 00:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:15.571 00:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:15.571 00:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:15.571 00:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:15.571 00:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:15.571 00:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:15.830 00:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:16.099 [ 00:20:16.099 { 00:20:16.099 "name": "BaseBdev3", 00:20:16.099 "aliases": [ 00:20:16.099 "08b4374d-eebc-4e26-984c-23a8e3a2dfd1" 00:20:16.099 ], 00:20:16.099 "product_name": "Malloc disk", 00:20:16.099 "block_size": 512, 00:20:16.099 "num_blocks": 65536, 00:20:16.099 "uuid": "08b4374d-eebc-4e26-984c-23a8e3a2dfd1", 00:20:16.099 "assigned_rate_limits": { 00:20:16.099 "rw_ios_per_sec": 0, 00:20:16.099 "rw_mbytes_per_sec": 0, 00:20:16.099 "r_mbytes_per_sec": 0, 00:20:16.099 "w_mbytes_per_sec": 0 00:20:16.099 }, 00:20:16.099 "claimed": false, 00:20:16.099 "zoned": false, 00:20:16.099 "supported_io_types": { 00:20:16.099 "read": true, 00:20:16.099 "write": true, 00:20:16.099 "unmap": true, 00:20:16.099 "flush": true, 00:20:16.099 "reset": true, 00:20:16.099 "nvme_admin": false, 00:20:16.099 "nvme_io": false, 00:20:16.099 "nvme_io_md": false, 00:20:16.099 "write_zeroes": true, 00:20:16.099 "zcopy": true, 00:20:16.099 "get_zone_info": false, 00:20:16.099 "zone_management": false, 00:20:16.099 "zone_append": false, 00:20:16.099 
"compare": false, 00:20:16.099 "compare_and_write": false, 00:20:16.099 "abort": true, 00:20:16.099 "seek_hole": false, 00:20:16.099 "seek_data": false, 00:20:16.099 "copy": true, 00:20:16.099 "nvme_iov_md": false 00:20:16.099 }, 00:20:16.099 "memory_domains": [ 00:20:16.099 { 00:20:16.099 "dma_device_id": "system", 00:20:16.099 "dma_device_type": 1 00:20:16.099 }, 00:20:16.099 { 00:20:16.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.099 "dma_device_type": 2 00:20:16.099 } 00:20:16.099 ], 00:20:16.099 "driver_specific": {} 00:20:16.099 } 00:20:16.099 ] 00:20:16.099 00:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:16.099 00:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:16.099 00:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:16.099 00:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:16.371 BaseBdev4 00:20:16.371 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:20:16.371 00:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:16.371 00:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:16.371 00:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:16.371 00:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:16.371 00:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:16.371 00:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:16.630 00:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:16.889 [ 00:20:16.889 { 00:20:16.889 "name": "BaseBdev4", 00:20:16.889 "aliases": [ 00:20:16.889 "16268bd6-a8b9-4628-821a-8aa0660a3371" 00:20:16.889 ], 00:20:16.889 "product_name": "Malloc disk", 00:20:16.889 "block_size": 512, 00:20:16.889 "num_blocks": 65536, 00:20:16.889 "uuid": "16268bd6-a8b9-4628-821a-8aa0660a3371", 00:20:16.889 "assigned_rate_limits": { 00:20:16.889 "rw_ios_per_sec": 0, 00:20:16.889 "rw_mbytes_per_sec": 0, 00:20:16.889 "r_mbytes_per_sec": 0, 00:20:16.889 "w_mbytes_per_sec": 0 00:20:16.889 }, 00:20:16.889 "claimed": false, 00:20:16.889 "zoned": false, 00:20:16.889 "supported_io_types": { 00:20:16.889 "read": true, 00:20:16.889 "write": true, 00:20:16.889 "unmap": true, 00:20:16.889 "flush": true, 00:20:16.889 "reset": true, 00:20:16.889 "nvme_admin": false, 00:20:16.889 "nvme_io": false, 00:20:16.889 "nvme_io_md": false, 00:20:16.889 "write_zeroes": true, 00:20:16.889 "zcopy": true, 00:20:16.889 "get_zone_info": false, 00:20:16.889 "zone_management": false, 00:20:16.889 "zone_append": false, 00:20:16.889 "compare": false, 00:20:16.889 "compare_and_write": false, 00:20:16.889 "abort": true, 00:20:16.889 "seek_hole": false, 00:20:16.889 "seek_data": false, 00:20:16.889 "copy": true, 00:20:16.889 "nvme_iov_md": false 00:20:16.889 }, 00:20:16.889 "memory_domains": [ 00:20:16.889 { 00:20:16.889 "dma_device_id": "system", 00:20:16.889 
"dma_device_type": 1 00:20:16.889 }, 00:20:16.889 { 00:20:16.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.889 "dma_device_type": 2 00:20:16.889 } 00:20:16.889 ], 00:20:16.889 "driver_specific": {} 00:20:16.889 } 00:20:16.889 ] 00:20:16.889 00:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:16.889 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:16.889 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:16.889 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:17.149 [2024-07-25 00:04:12.781581] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:17.149 [2024-07-25 00:04:12.781640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:17.149 [2024-07-25 00:04:12.781670] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.149 [2024-07-25 00:04:12.784061] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:17.149 [2024-07-25 00:04:12.784130] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.149 00:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.408 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:17.408 "name": "Existed_Raid", 00:20:17.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.408 "strip_size_kb": 64, 00:20:17.408 "state": "configuring", 00:20:17.408 "raid_level": "raid0", 00:20:17.408 "superblock": false, 00:20:17.408 "num_base_bdevs": 4, 00:20:17.408 "num_base_bdevs_discovered": 3, 00:20:17.408 "num_base_bdevs_operational": 4, 00:20:17.408 "base_bdevs_list": [ 00:20:17.408 { 00:20:17.408 "name": "BaseBdev1", 00:20:17.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.408 "is_configured": 
false, 00:20:17.408 "data_offset": 0, 00:20:17.408 "data_size": 0 00:20:17.408 }, 00:20:17.408 { 00:20:17.408 "name": "BaseBdev2", 00:20:17.408 "uuid": "e9e0a4f3-6151-4d1f-a069-9912432092cd", 00:20:17.408 "is_configured": true, 00:20:17.408 "data_offset": 0, 00:20:17.408 "data_size": 65536 00:20:17.408 }, 00:20:17.408 { 00:20:17.408 "name": "BaseBdev3", 00:20:17.408 "uuid": "08b4374d-eebc-4e26-984c-23a8e3a2dfd1", 00:20:17.408 "is_configured": true, 00:20:17.408 "data_offset": 0, 00:20:17.408 "data_size": 65536 00:20:17.408 }, 00:20:17.408 { 00:20:17.408 "name": "BaseBdev4", 00:20:17.408 "uuid": "16268bd6-a8b9-4628-821a-8aa0660a3371", 00:20:17.408 "is_configured": true, 00:20:17.408 "data_offset": 0, 00:20:17.408 "data_size": 65536 00:20:17.408 } 00:20:17.408 ] 00:20:17.408 }' 00:20:17.408 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:17.408 00:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.667 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:17.925 [2024-07-25 00:04:13.645868] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.925 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.182 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.182 "name": "Existed_Raid", 00:20:18.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.182 "strip_size_kb": 64, 00:20:18.182 "state": "configuring", 00:20:18.182 "raid_level": "raid0", 00:20:18.182 "superblock": false, 00:20:18.182 "num_base_bdevs": 4, 00:20:18.182 "num_base_bdevs_discovered": 2, 00:20:18.182 "num_base_bdevs_operational": 4, 00:20:18.182 "base_bdevs_list": [ 00:20:18.182 { 00:20:18.182 "name": "BaseBdev1", 00:20:18.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.182 "is_configured": false, 00:20:18.182 "data_offset": 0, 00:20:18.182 "data_size": 0 00:20:18.182 }, 00:20:18.182 { 00:20:18.182 "name": null, 00:20:18.182 "uuid": 
"e9e0a4f3-6151-4d1f-a069-9912432092cd", 00:20:18.182 "is_configured": false, 00:20:18.182 "data_offset": 0, 00:20:18.182 "data_size": 65536 00:20:18.182 }, 00:20:18.182 { 00:20:18.183 "name": "BaseBdev3", 00:20:18.183 "uuid": "08b4374d-eebc-4e26-984c-23a8e3a2dfd1", 00:20:18.183 "is_configured": true, 00:20:18.183 "data_offset": 0, 00:20:18.183 "data_size": 65536 00:20:18.183 }, 00:20:18.183 { 00:20:18.183 "name": "BaseBdev4", 00:20:18.183 "uuid": "16268bd6-a8b9-4628-821a-8aa0660a3371", 00:20:18.183 "is_configured": true, 00:20:18.183 "data_offset": 0, 00:20:18.183 "data_size": 65536 00:20:18.183 } 00:20:18.183 ] 00:20:18.183 }' 00:20:18.183 00:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.183 00:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.441 00:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.441 00:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:18.699 00:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:18.699 00:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:18.957 [2024-07-25 00:04:14.723572] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:18.957 BaseBdev1 00:20:18.957 00:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:18.957 00:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:18.957 00:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:18.957 00:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:18.957 00:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:18.957 00:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:18.957 00:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:19.215 00:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:19.473 [ 00:20:19.473 { 00:20:19.473 "name": "BaseBdev1", 00:20:19.473 "aliases": [ 00:20:19.473 "d6ffb6dc-410b-4625-a7a7-e6c25b152af0" 00:20:19.473 ], 00:20:19.473 "product_name": "Malloc disk", 00:20:19.473 "block_size": 512, 00:20:19.473 "num_blocks": 65536, 00:20:19.473 "uuid": "d6ffb6dc-410b-4625-a7a7-e6c25b152af0", 00:20:19.473 "assigned_rate_limits": { 00:20:19.473 "rw_ios_per_sec": 0, 00:20:19.473 "rw_mbytes_per_sec": 0, 00:20:19.473 "r_mbytes_per_sec": 0, 00:20:19.473 "w_mbytes_per_sec": 0 00:20:19.473 }, 00:20:19.473 "claimed": true, 00:20:19.473 "claim_type": "exclusive_write", 00:20:19.473 "zoned": false, 00:20:19.473 "supported_io_types": { 00:20:19.473 "read": true, 00:20:19.473 "write": true, 00:20:19.473 "unmap": true, 00:20:19.473 "flush": true, 00:20:19.473 "reset": true, 00:20:19.473 "nvme_admin": false, 00:20:19.473 "nvme_io": false, 00:20:19.473 
"nvme_io_md": false, 00:20:19.473 "write_zeroes": true, 00:20:19.474 "zcopy": true, 00:20:19.474 "get_zone_info": false, 00:20:19.474 "zone_management": false, 00:20:19.474 "zone_append": false, 00:20:19.474 "compare": false, 00:20:19.474 "compare_and_write": false, 00:20:19.474 "abort": true, 00:20:19.474 "seek_hole": false, 00:20:19.474 "seek_data": false, 00:20:19.474 "copy": true, 00:20:19.474 "nvme_iov_md": false 00:20:19.474 }, 00:20:19.474 "memory_domains": [ 00:20:19.474 { 00:20:19.474 "dma_device_id": "system", 00:20:19.474 "dma_device_type": 1 00:20:19.474 }, 00:20:19.474 { 00:20:19.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.474 "dma_device_type": 2 00:20:19.474 } 00:20:19.474 ], 00:20:19.474 "driver_specific": {} 00:20:19.474 } 00:20:19.474 ] 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.474 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.732 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.732 "name": "Existed_Raid", 00:20:19.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.732 "strip_size_kb": 64, 00:20:19.732 "state": "configuring", 00:20:19.732 "raid_level": "raid0", 00:20:19.732 "superblock": false, 00:20:19.732 "num_base_bdevs": 4, 00:20:19.732 "num_base_bdevs_discovered": 3, 00:20:19.732 "num_base_bdevs_operational": 4, 00:20:19.732 "base_bdevs_list": [ 00:20:19.732 { 00:20:19.732 "name": "BaseBdev1", 00:20:19.732 "uuid": "d6ffb6dc-410b-4625-a7a7-e6c25b152af0", 00:20:19.732 "is_configured": true, 00:20:19.732 "data_offset": 0, 00:20:19.732 "data_size": 65536 00:20:19.732 }, 00:20:19.732 { 00:20:19.732 "name": null, 00:20:19.732 "uuid": "e9e0a4f3-6151-4d1f-a069-9912432092cd", 00:20:19.732 "is_configured": false, 00:20:19.732 "data_offset": 0, 00:20:19.732 "data_size": 65536 00:20:19.732 }, 00:20:19.732 { 00:20:19.732 "name": "BaseBdev3", 00:20:19.732 "uuid": "08b4374d-eebc-4e26-984c-23a8e3a2dfd1", 00:20:19.732 "is_configured": true, 00:20:19.732 "data_offset": 0, 00:20:19.732 "data_size": 65536 00:20:19.732 }, 00:20:19.732 { 00:20:19.732 
"name": "BaseBdev4", 00:20:19.732 "uuid": "16268bd6-a8b9-4628-821a-8aa0660a3371", 00:20:19.732 "is_configured": true, 00:20:19.732 "data_offset": 0, 00:20:19.732 "data_size": 65536 00:20:19.732 } 00:20:19.732 ] 00:20:19.732 }' 00:20:19.732 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.732 00:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.990 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.990 00:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:20.249 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:20.249 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:20.508 [2024-07-25 00:04:16.312245] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.508 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.767 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.767 "name": "Existed_Raid", 00:20:20.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.767 "strip_size_kb": 64, 00:20:20.767 "state": "configuring", 00:20:20.767 "raid_level": "raid0", 00:20:20.767 "superblock": false, 00:20:20.767 "num_base_bdevs": 4, 00:20:20.767 "num_base_bdevs_discovered": 2, 00:20:20.767 "num_base_bdevs_operational": 4, 00:20:20.767 "base_bdevs_list": [ 00:20:20.767 { 00:20:20.767 "name": "BaseBdev1", 00:20:20.767 "uuid": "d6ffb6dc-410b-4625-a7a7-e6c25b152af0", 00:20:20.767 "is_configured": true, 00:20:20.767 "data_offset": 0, 00:20:20.767 "data_size": 65536 00:20:20.767 }, 00:20:20.767 { 00:20:20.767 "name": null, 00:20:20.767 "uuid": "e9e0a4f3-6151-4d1f-a069-9912432092cd", 00:20:20.767 "is_configured": false, 00:20:20.767 "data_offset": 0, 00:20:20.767 "data_size": 
65536 00:20:20.767 }, 00:20:20.767 { 00:20:20.767 "name": null, 00:20:20.767 "uuid": "08b4374d-eebc-4e26-984c-23a8e3a2dfd1", 00:20:20.767 "is_configured": false, 00:20:20.767 "data_offset": 0, 00:20:20.767 "data_size": 65536 00:20:20.767 }, 00:20:20.767 { 00:20:20.767 "name": "BaseBdev4", 00:20:20.767 "uuid": "16268bd6-a8b9-4628-821a-8aa0660a3371", 00:20:20.767 "is_configured": true, 00:20:20.767 "data_offset": 0, 00:20:20.767 "data_size": 65536 00:20:20.767 } 00:20:20.767 ] 00:20:20.767 }' 00:20:20.767 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.767 00:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.335 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:21.335 00:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.335 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:21.335 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:21.594 [2024-07-25 00:04:17.436649] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.594 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.853 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:21.853 "name": "Existed_Raid", 00:20:21.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.853 "strip_size_kb": 64, 00:20:21.853 "state": "configuring", 00:20:21.853 "raid_level": "raid0", 00:20:21.853 "superblock": false, 00:20:21.853 "num_base_bdevs": 4, 00:20:21.853 "num_base_bdevs_discovered": 3, 00:20:21.853 "num_base_bdevs_operational": 4, 00:20:21.853 "base_bdevs_list": [ 00:20:21.853 { 00:20:21.853 "name": "BaseBdev1", 00:20:21.853 "uuid": "d6ffb6dc-410b-4625-a7a7-e6c25b152af0", 00:20:21.853 
"is_configured": true, 00:20:21.853 "data_offset": 0, 00:20:21.853 "data_size": 65536 00:20:21.853 }, 00:20:21.853 { 00:20:21.853 "name": null, 00:20:21.853 "uuid": "e9e0a4f3-6151-4d1f-a069-9912432092cd", 00:20:21.853 "is_configured": false, 00:20:21.853 "data_offset": 0, 00:20:21.853 "data_size": 65536 00:20:21.853 }, 00:20:21.853 { 00:20:21.853 "name": "BaseBdev3", 00:20:21.853 "uuid": "08b4374d-eebc-4e26-984c-23a8e3a2dfd1", 00:20:21.853 "is_configured": true, 00:20:21.853 "data_offset": 0, 00:20:21.853 "data_size": 65536 00:20:21.853 }, 00:20:21.853 { 00:20:21.853 "name": "BaseBdev4", 00:20:21.853 "uuid": "16268bd6-a8b9-4628-821a-8aa0660a3371", 00:20:21.853 "is_configured": true, 00:20:21.853 "data_offset": 0, 00:20:21.853 "data_size": 65536 00:20:21.853 } 00:20:21.853 ] 00:20:21.853 }' 00:20:21.853 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:21.853 00:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.421 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.421 00:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:22.421 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:22.421 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:22.679 [2024-07-25 00:04:18.461078] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.938 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.197 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.197 "name": "Existed_Raid", 00:20:23.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.197 "strip_size_kb": 64, 00:20:23.197 "state": "configuring", 00:20:23.197 "raid_level": "raid0", 00:20:23.197 "superblock": false, 00:20:23.197 
"num_base_bdevs": 4, 00:20:23.197 "num_base_bdevs_discovered": 2, 00:20:23.197 "num_base_bdevs_operational": 4, 00:20:23.197 "base_bdevs_list": [ 00:20:23.197 { 00:20:23.197 "name": null, 00:20:23.197 "uuid": "d6ffb6dc-410b-4625-a7a7-e6c25b152af0", 00:20:23.197 "is_configured": false, 00:20:23.197 "data_offset": 0, 00:20:23.197 "data_size": 65536 00:20:23.197 }, 00:20:23.197 { 00:20:23.197 "name": null, 00:20:23.197 "uuid": "e9e0a4f3-6151-4d1f-a069-9912432092cd", 00:20:23.197 "is_configured": false, 00:20:23.197 "data_offset": 0, 00:20:23.197 "data_size": 65536 00:20:23.197 }, 00:20:23.197 { 00:20:23.197 "name": "BaseBdev3", 00:20:23.197 "uuid": "08b4374d-eebc-4e26-984c-23a8e3a2dfd1", 00:20:23.197 "is_configured": true, 00:20:23.197 "data_offset": 0, 00:20:23.197 "data_size": 65536 00:20:23.197 }, 00:20:23.197 { 00:20:23.197 "name": "BaseBdev4", 00:20:23.197 "uuid": "16268bd6-a8b9-4628-821a-8aa0660a3371", 00:20:23.197 "is_configured": true, 00:20:23.197 "data_offset": 0, 00:20:23.197 "data_size": 65536 00:20:23.197 } 00:20:23.197 ] 00:20:23.197 }' 00:20:23.198 00:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.198 00:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.456 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:23.456 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.715 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:23.715 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:23.976 [2024-07-25 00:04:19.679987] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.976 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.237 00:04:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.237 "name": "Existed_Raid", 00:20:24.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.237 "strip_size_kb": 64, 00:20:24.237 "state": "configuring", 00:20:24.237 "raid_level": "raid0", 00:20:24.237 "superblock": false, 00:20:24.237 "num_base_bdevs": 4, 00:20:24.237 "num_base_bdevs_discovered": 3, 00:20:24.237 "num_base_bdevs_operational": 4, 00:20:24.237 "base_bdevs_list": [ 00:20:24.237 { 00:20:24.237 "name": null, 00:20:24.237 "uuid": "d6ffb6dc-410b-4625-a7a7-e6c25b152af0", 00:20:24.237 "is_configured": false, 00:20:24.237 "data_offset": 0, 00:20:24.237 "data_size": 65536 00:20:24.237 }, 00:20:24.237 { 00:20:24.237 "name": "BaseBdev2", 00:20:24.237 "uuid": "e9e0a4f3-6151-4d1f-a069-9912432092cd", 00:20:24.237 "is_configured": true, 00:20:24.237 "data_offset": 0, 00:20:24.237 "data_size": 65536 00:20:24.237 }, 00:20:24.237 { 00:20:24.237 "name": "BaseBdev3", 00:20:24.237 "uuid": "08b4374d-eebc-4e26-984c-23a8e3a2dfd1", 00:20:24.237 "is_configured": true, 00:20:24.237 "data_offset": 0, 00:20:24.237 "data_size": 65536 00:20:24.237 }, 00:20:24.237 { 00:20:24.237 "name": "BaseBdev4", 00:20:24.237 "uuid": "16268bd6-a8b9-4628-821a-8aa0660a3371", 00:20:24.237 "is_configured": true, 00:20:24.237 "data_offset": 0, 00:20:24.237 "data_size": 65536 00:20:24.237 } 00:20:24.237 ] 00:20:24.237 }' 00:20:24.237 00:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.237 00:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.496 00:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.496 00:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:24.755 00:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:24.755 00:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.755 00:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:25.014 00:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d6ffb6dc-410b-4625-a7a7-e6c25b152af0 00:20:25.272 [2024-07-25 00:04:21.006907] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:25.272 [2024-07-25 00:04:21.006963] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:20:25.272 [2024-07-25 00:04:21.006976] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:20:25.272 [2024-07-25 00:04:21.007140] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ee0 00:20:25.272 [2024-07-25 00:04:21.007469] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:20:25.272 [2024-07-25 00:04:21.007489] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000009380 00:20:25.272 [2024-07-25 00:04:21.007738] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.272 NewBaseBdev 00:20:25.273 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:20:25.273 00:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:25.273 00:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:25.273 00:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:25.273 00:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:25.273 00:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:25.273 00:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:25.531 00:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:25.790 [ 00:20:25.790 { 00:20:25.790 "name": "NewBaseBdev", 00:20:25.790 "aliases": [ 00:20:25.790 "d6ffb6dc-410b-4625-a7a7-e6c25b152af0" 00:20:25.790 ], 00:20:25.790 "product_name": "Malloc disk", 00:20:25.790 "block_size": 512, 00:20:25.790 "num_blocks": 65536, 00:20:25.790 "uuid": "d6ffb6dc-410b-4625-a7a7-e6c25b152af0", 00:20:25.790 "assigned_rate_limits": { 00:20:25.790 "rw_ios_per_sec": 0, 00:20:25.790 "rw_mbytes_per_sec": 0, 00:20:25.790 "r_mbytes_per_sec": 0, 00:20:25.790 "w_mbytes_per_sec": 0 00:20:25.790 }, 00:20:25.790 "claimed": true, 00:20:25.790 "claim_type": "exclusive_write", 00:20:25.790 "zoned": false, 00:20:25.790 "supported_io_types": { 00:20:25.790 "read": true, 00:20:25.790 "write": true, 00:20:25.790 "unmap": true, 00:20:25.790 "flush": true, 00:20:25.790 "reset": true, 00:20:25.790 "nvme_admin": false, 00:20:25.790 "nvme_io": false, 00:20:25.790 "nvme_io_md": false, 00:20:25.790 "write_zeroes": true, 00:20:25.790 "zcopy": true, 00:20:25.790 "get_zone_info": false, 00:20:25.790 "zone_management": false, 00:20:25.790 "zone_append": false, 00:20:25.790 "compare": false, 00:20:25.790 "compare_and_write": false, 00:20:25.790 "abort": true, 00:20:25.790 "seek_hole": false, 00:20:25.790 "seek_data": false, 00:20:25.790 "copy": true, 00:20:25.790 "nvme_iov_md": false 00:20:25.790 }, 00:20:25.790 "memory_domains": [ 00:20:25.790 { 00:20:25.790 "dma_device_id": "system", 00:20:25.790 "dma_device_type": 1 00:20:25.790 }, 00:20:25.790 { 00:20:25.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.790 "dma_device_type": 2 00:20:25.790 } 00:20:25.790 ], 00:20:25.790 "driver_specific": {} 00:20:25.790 } 00:20:25.790 ] 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.790 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.049 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.049 "name": "Existed_Raid", 00:20:26.049 "uuid": "9305a916-4da5-4ecf-88ea-b1f1b91fc4a0", 00:20:26.049 "strip_size_kb": 64, 00:20:26.049 "state": "online", 00:20:26.049 "raid_level": "raid0", 00:20:26.049 "superblock": false, 00:20:26.049 "num_base_bdevs": 4, 00:20:26.049 "num_base_bdevs_discovered": 4, 00:20:26.049 "num_base_bdevs_operational": 4, 00:20:26.049 "base_bdevs_list": [ 00:20:26.049 { 00:20:26.049 "name": "NewBaseBdev", 00:20:26.049 "uuid": "d6ffb6dc-410b-4625-a7a7-e6c25b152af0", 00:20:26.049 "is_configured": true, 00:20:26.049 "data_offset": 0, 00:20:26.049 "data_size": 65536 00:20:26.049 }, 00:20:26.049 { 00:20:26.049 "name": "BaseBdev2", 00:20:26.049 "uuid": "e9e0a4f3-6151-4d1f-a069-9912432092cd", 00:20:26.049 "is_configured": true, 00:20:26.050 "data_offset": 0, 00:20:26.050 "data_size": 65536 00:20:26.050 }, 00:20:26.050 { 00:20:26.050 "name": "BaseBdev3", 00:20:26.050 "uuid": "08b4374d-eebc-4e26-984c-23a8e3a2dfd1", 00:20:26.050 "is_configured": true, 00:20:26.050 "data_offset": 0, 00:20:26.050 "data_size": 65536 00:20:26.050 }, 00:20:26.050 { 00:20:26.050 "name": "BaseBdev4", 00:20:26.050 "uuid": "16268bd6-a8b9-4628-821a-8aa0660a3371", 00:20:26.050 "is_configured": true, 00:20:26.050 "data_offset": 0, 00:20:26.050 "data_size": 65536 00:20:26.050 } 00:20:26.050 ] 00:20:26.050 }' 00:20:26.050 00:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.050 00:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.311 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:26.311 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:26.311 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:26.311 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:26.311 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:26.311 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:26.312 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:26.312 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:26.577 [2024-07-25 00:04:22.323956] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:26.577 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:26.577 "name": "Existed_Raid", 00:20:26.577 "aliases": [ 00:20:26.577 
"9305a916-4da5-4ecf-88ea-b1f1b91fc4a0" 00:20:26.577 ], 00:20:26.577 "product_name": "Raid Volume", 00:20:26.577 "block_size": 512, 00:20:26.577 "num_blocks": 262144, 00:20:26.577 "uuid": "9305a916-4da5-4ecf-88ea-b1f1b91fc4a0", 00:20:26.577 "assigned_rate_limits": { 00:20:26.577 "rw_ios_per_sec": 0, 00:20:26.577 "rw_mbytes_per_sec": 0, 00:20:26.577 "r_mbytes_per_sec": 0, 00:20:26.577 "w_mbytes_per_sec": 0 00:20:26.577 }, 00:20:26.577 "claimed": false, 00:20:26.577 "zoned": false, 00:20:26.577 "supported_io_types": { 00:20:26.577 "read": true, 00:20:26.577 "write": true, 00:20:26.577 "unmap": true, 00:20:26.577 "flush": true, 00:20:26.577 "reset": true, 00:20:26.577 "nvme_admin": false, 00:20:26.577 "nvme_io": false, 00:20:26.577 "nvme_io_md": false, 00:20:26.578 "write_zeroes": true, 00:20:26.578 "zcopy": false, 00:20:26.578 "get_zone_info": false, 00:20:26.578 "zone_management": false, 00:20:26.578 "zone_append": false, 00:20:26.578 "compare": false, 00:20:26.578 "compare_and_write": false, 00:20:26.578 "abort": false, 00:20:26.578 "seek_hole": false, 00:20:26.578 "seek_data": false, 00:20:26.578 "copy": false, 00:20:26.578 "nvme_iov_md": false 00:20:26.578 }, 00:20:26.578 "memory_domains": [ 00:20:26.578 { 00:20:26.578 "dma_device_id": "system", 00:20:26.578 "dma_device_type": 1 00:20:26.578 }, 00:20:26.578 { 00:20:26.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.578 "dma_device_type": 2 00:20:26.578 }, 00:20:26.578 { 00:20:26.578 "dma_device_id": "system", 00:20:26.578 "dma_device_type": 1 00:20:26.578 }, 00:20:26.578 { 00:20:26.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.578 "dma_device_type": 2 00:20:26.578 }, 00:20:26.578 { 00:20:26.578 "dma_device_id": "system", 00:20:26.578 "dma_device_type": 1 00:20:26.578 }, 00:20:26.578 { 00:20:26.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.578 "dma_device_type": 2 00:20:26.578 }, 00:20:26.578 { 00:20:26.578 "dma_device_id": "system", 00:20:26.578 "dma_device_type": 1 00:20:26.578 }, 00:20:26.578 { 00:20:26.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.578 "dma_device_type": 2 00:20:26.578 } 00:20:26.578 ], 00:20:26.578 "driver_specific": { 00:20:26.578 "raid": { 00:20:26.578 "uuid": "9305a916-4da5-4ecf-88ea-b1f1b91fc4a0", 00:20:26.578 "strip_size_kb": 64, 00:20:26.578 "state": "online", 00:20:26.578 "raid_level": "raid0", 00:20:26.578 "superblock": false, 00:20:26.578 "num_base_bdevs": 4, 00:20:26.578 "num_base_bdevs_discovered": 4, 00:20:26.578 "num_base_bdevs_operational": 4, 00:20:26.578 "base_bdevs_list": [ 00:20:26.578 { 00:20:26.578 "name": "NewBaseBdev", 00:20:26.578 "uuid": "d6ffb6dc-410b-4625-a7a7-e6c25b152af0", 00:20:26.578 "is_configured": true, 00:20:26.578 "data_offset": 0, 00:20:26.578 "data_size": 65536 00:20:26.578 }, 00:20:26.578 { 00:20:26.578 "name": "BaseBdev2", 00:20:26.578 "uuid": "e9e0a4f3-6151-4d1f-a069-9912432092cd", 00:20:26.578 "is_configured": true, 00:20:26.578 "data_offset": 0, 00:20:26.578 "data_size": 65536 00:20:26.578 }, 00:20:26.578 { 00:20:26.578 "name": "BaseBdev3", 00:20:26.578 "uuid": "08b4374d-eebc-4e26-984c-23a8e3a2dfd1", 00:20:26.578 "is_configured": true, 00:20:26.578 "data_offset": 0, 00:20:26.578 "data_size": 65536 00:20:26.578 }, 00:20:26.578 { 00:20:26.578 "name": "BaseBdev4", 00:20:26.578 "uuid": "16268bd6-a8b9-4628-821a-8aa0660a3371", 00:20:26.578 "is_configured": true, 00:20:26.578 "data_offset": 0, 00:20:26.578 "data_size": 65536 00:20:26.578 } 00:20:26.578 ] 00:20:26.578 } 00:20:26.578 } 00:20:26.578 }' 00:20:26.578 00:04:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:26.578 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:26.578 BaseBdev2 00:20:26.578 BaseBdev3 00:20:26.578 BaseBdev4' 00:20:26.578 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:26.578 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:26.578 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:26.837 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:26.837 "name": "NewBaseBdev", 00:20:26.837 "aliases": [ 00:20:26.837 "d6ffb6dc-410b-4625-a7a7-e6c25b152af0" 00:20:26.837 ], 00:20:26.837 "product_name": "Malloc disk", 00:20:26.837 "block_size": 512, 00:20:26.837 "num_blocks": 65536, 00:20:26.837 "uuid": "d6ffb6dc-410b-4625-a7a7-e6c25b152af0", 00:20:26.837 "assigned_rate_limits": { 00:20:26.837 "rw_ios_per_sec": 0, 00:20:26.837 "rw_mbytes_per_sec": 0, 00:20:26.837 "r_mbytes_per_sec": 0, 00:20:26.837 "w_mbytes_per_sec": 0 00:20:26.837 }, 00:20:26.837 "claimed": true, 00:20:26.837 "claim_type": "exclusive_write", 00:20:26.837 "zoned": false, 00:20:26.837 "supported_io_types": { 00:20:26.837 "read": true, 00:20:26.837 "write": true, 00:20:26.837 "unmap": true, 00:20:26.837 "flush": true, 00:20:26.837 "reset": true, 00:20:26.837 "nvme_admin": false, 00:20:26.837 "nvme_io": false, 00:20:26.837 "nvme_io_md": false, 00:20:26.837 "write_zeroes": true, 00:20:26.837 "zcopy": true, 00:20:26.837 "get_zone_info": false, 00:20:26.837 "zone_management": false, 00:20:26.837 "zone_append": false, 00:20:26.837 "compare": false, 00:20:26.837 "compare_and_write": false, 00:20:26.837 "abort": true, 00:20:26.837 "seek_hole": false, 00:20:26.837 "seek_data": false, 00:20:26.837 "copy": true, 00:20:26.837 "nvme_iov_md": false 00:20:26.837 }, 00:20:26.837 "memory_domains": [ 00:20:26.837 { 00:20:26.837 "dma_device_id": "system", 00:20:26.837 "dma_device_type": 1 00:20:26.837 }, 00:20:26.837 { 00:20:26.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.837 "dma_device_type": 2 00:20:26.837 } 00:20:26.837 ], 00:20:26.837 "driver_specific": {} 00:20:26.837 }' 00:20:26.837 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.837 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.837 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:26.837 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.837 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.837 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:26.837 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.837 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.837 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:26.837 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.096 00:04:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.096 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.096 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.096 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:27.096 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.096 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:27.096 "name": "BaseBdev2", 00:20:27.096 "aliases": [ 00:20:27.096 "e9e0a4f3-6151-4d1f-a069-9912432092cd" 00:20:27.096 ], 00:20:27.096 "product_name": "Malloc disk", 00:20:27.096 "block_size": 512, 00:20:27.096 "num_blocks": 65536, 00:20:27.096 "uuid": "e9e0a4f3-6151-4d1f-a069-9912432092cd", 00:20:27.096 "assigned_rate_limits": { 00:20:27.096 "rw_ios_per_sec": 0, 00:20:27.096 "rw_mbytes_per_sec": 0, 00:20:27.096 "r_mbytes_per_sec": 0, 00:20:27.096 "w_mbytes_per_sec": 0 00:20:27.096 }, 00:20:27.096 "claimed": true, 00:20:27.096 "claim_type": "exclusive_write", 00:20:27.096 "zoned": false, 00:20:27.096 "supported_io_types": { 00:20:27.096 "read": true, 00:20:27.096 "write": true, 00:20:27.096 "unmap": true, 00:20:27.096 "flush": true, 00:20:27.096 "reset": true, 00:20:27.096 "nvme_admin": false, 00:20:27.096 "nvme_io": false, 00:20:27.096 "nvme_io_md": false, 00:20:27.096 "write_zeroes": true, 00:20:27.096 "zcopy": true, 00:20:27.096 "get_zone_info": false, 00:20:27.096 "zone_management": false, 00:20:27.096 "zone_append": false, 00:20:27.096 "compare": false, 00:20:27.096 "compare_and_write": false, 00:20:27.096 "abort": true, 00:20:27.096 "seek_hole": false, 00:20:27.096 "seek_data": false, 00:20:27.096 "copy": true, 00:20:27.096 "nvme_iov_md": false 00:20:27.096 }, 00:20:27.096 "memory_domains": [ 00:20:27.096 { 00:20:27.096 "dma_device_id": "system", 00:20:27.096 "dma_device_type": 1 00:20:27.096 }, 00:20:27.096 { 00:20:27.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.096 "dma_device_type": 2 00:20:27.096 } 00:20:27.096 ], 00:20:27.096 "driver_specific": {} 00:20:27.096 }' 00:20:27.096 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.358 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.358 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:27.358 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.358 00:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.358 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:27.358 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.358 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.358 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.358 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.358 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.358 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.358 00:04:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.358 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:27.358 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.622 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:27.622 "name": "BaseBdev3", 00:20:27.622 "aliases": [ 00:20:27.622 "08b4374d-eebc-4e26-984c-23a8e3a2dfd1" 00:20:27.622 ], 00:20:27.622 "product_name": "Malloc disk", 00:20:27.622 "block_size": 512, 00:20:27.622 "num_blocks": 65536, 00:20:27.622 "uuid": "08b4374d-eebc-4e26-984c-23a8e3a2dfd1", 00:20:27.622 "assigned_rate_limits": { 00:20:27.622 "rw_ios_per_sec": 0, 00:20:27.622 "rw_mbytes_per_sec": 0, 00:20:27.622 "r_mbytes_per_sec": 0, 00:20:27.622 "w_mbytes_per_sec": 0 00:20:27.622 }, 00:20:27.622 "claimed": true, 00:20:27.622 "claim_type": "exclusive_write", 00:20:27.622 "zoned": false, 00:20:27.622 "supported_io_types": { 00:20:27.622 "read": true, 00:20:27.622 "write": true, 00:20:27.622 "unmap": true, 00:20:27.622 "flush": true, 00:20:27.622 "reset": true, 00:20:27.622 "nvme_admin": false, 00:20:27.622 "nvme_io": false, 00:20:27.622 "nvme_io_md": false, 00:20:27.622 "write_zeroes": true, 00:20:27.622 "zcopy": true, 00:20:27.622 "get_zone_info": false, 00:20:27.622 "zone_management": false, 00:20:27.622 "zone_append": false, 00:20:27.622 "compare": false, 00:20:27.622 "compare_and_write": false, 00:20:27.622 "abort": true, 00:20:27.622 "seek_hole": false, 00:20:27.622 "seek_data": false, 00:20:27.622 "copy": true, 00:20:27.622 "nvme_iov_md": false 00:20:27.622 }, 00:20:27.622 "memory_domains": [ 00:20:27.622 { 00:20:27.622 "dma_device_id": "system", 00:20:27.622 "dma_device_type": 1 00:20:27.622 }, 00:20:27.622 { 00:20:27.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.622 "dma_device_type": 2 00:20:27.622 } 00:20:27.622 ], 00:20:27.622 "driver_specific": {} 00:20:27.622 }' 00:20:27.622 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.622 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.622 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:27.622 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.622 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.623 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:27.623 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.623 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.623 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.623 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.623 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.623 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.623 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.623 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:27.623 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:27.881 "name": "BaseBdev4", 00:20:27.881 "aliases": [ 00:20:27.881 "16268bd6-a8b9-4628-821a-8aa0660a3371" 00:20:27.881 ], 00:20:27.881 "product_name": "Malloc disk", 00:20:27.881 "block_size": 512, 00:20:27.881 "num_blocks": 65536, 00:20:27.881 "uuid": "16268bd6-a8b9-4628-821a-8aa0660a3371", 00:20:27.881 "assigned_rate_limits": { 00:20:27.881 "rw_ios_per_sec": 0, 00:20:27.881 "rw_mbytes_per_sec": 0, 00:20:27.881 "r_mbytes_per_sec": 0, 00:20:27.881 "w_mbytes_per_sec": 0 00:20:27.881 }, 00:20:27.881 "claimed": true, 00:20:27.881 "claim_type": "exclusive_write", 00:20:27.881 "zoned": false, 00:20:27.881 "supported_io_types": { 00:20:27.881 "read": true, 00:20:27.881 "write": true, 00:20:27.881 "unmap": true, 00:20:27.881 "flush": true, 00:20:27.881 "reset": true, 00:20:27.881 "nvme_admin": false, 00:20:27.881 "nvme_io": false, 00:20:27.881 "nvme_io_md": false, 00:20:27.881 "write_zeroes": true, 00:20:27.881 "zcopy": true, 00:20:27.881 "get_zone_info": false, 00:20:27.881 "zone_management": false, 00:20:27.881 "zone_append": false, 00:20:27.881 "compare": false, 00:20:27.881 "compare_and_write": false, 00:20:27.881 "abort": true, 00:20:27.881 "seek_hole": false, 00:20:27.881 "seek_data": false, 00:20:27.881 "copy": true, 00:20:27.881 "nvme_iov_md": false 00:20:27.881 }, 00:20:27.881 "memory_domains": [ 00:20:27.881 { 00:20:27.881 "dma_device_id": "system", 00:20:27.881 "dma_device_type": 1 00:20:27.881 }, 00:20:27.881 { 00:20:27.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.881 "dma_device_type": 2 00:20:27.881 } 00:20:27.881 ], 00:20:27.881 "driver_specific": {} 00:20:27.881 }' 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.881 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:28.139 [2024-07-25 00:04:23.888035] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:28.139 [2024-07-25 00:04:23.888080] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:20:28.139 [2024-07-25 00:04:23.888162] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.139 [2024-07-25 00:04:23.888236] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.139 [2024-07-25 00:04:23.888250] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name Existed_Raid, state offline 00:20:28.139 00:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 88003 00:20:28.139 00:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 88003 ']' 00:20:28.139 00:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 88003 00:20:28.139 00:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:20:28.139 00:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:28.139 00:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88003 00:20:28.139 killing process with pid 88003 00:20:28.139 00:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:28.139 00:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:28.139 00:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88003' 00:20:28.140 00:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 88003 00:20:28.140 00:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 88003 00:20:28.140 [2024-07-25 00:04:23.938435] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:28.401 [2024-07-25 00:04:24.261570] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:29.778 00:20:29.778 real 0m28.167s 00:20:29.778 user 0m49.197s 00:20:29.778 sys 0m4.432s 00:20:29.778 ************************************ 00:20:29.778 END TEST raid_state_function_test 00:20:29.778 ************************************ 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.778 00:04:25 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:20:29.778 00:04:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:29.778 00:04:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:29.778 00:04:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.778 ************************************ 00:20:29.778 START TEST raid_state_function_test_sb 00:20:29.778 ************************************ 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:29.778 
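Before the superblock variant proceeds, it helps to condense what the raid_state_function_test pass that just ended actually drove over RPC; this is a sketch only (the real bdev_raid.sh re-verifies the raid state after every step, and the $rpc wrapper variable is an assumption for brevity; the commands themselves are copied from the trace above):

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
$rpc bdev_raid_remove_base_bdev BaseBdev2            # slot empties, raid stays "configuring"
$rpc bdev_malloc_create 32 512 -b BaseBdev1          # back a never-configured slot with a malloc disk
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3  # re-attach a previously removed base bdev
$rpc bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid").state'  # "configuring" until all 4 slots are configured, then "online"
$rpc bdev_raid_delete Existed_Raid                   # teardown; the raid goes offline as logged above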
00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:29.778 Process raid pid: 88997 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=88997 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 88997' 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 88997 /var/tmp/spdk-raid.sock 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 88997 ']' 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:29.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:29.778 00:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.778 [2024-07-25 00:04:25.562312] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:20:29.778 [2024-07-25 00:04:25.562753] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.036 [2024-07-25 00:04:25.741019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.294 [2024-07-25 00:04:25.977295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.294 [2024-07-25 00:04:26.158456] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:30.860 00:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:30.860 00:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:20:30.860 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:31.119 [2024-07-25 00:04:26.811760] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:31.119 [2024-07-25 00:04:26.812108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:31.119 [2024-07-25 00:04:26.812137] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:31.119 [2024-07-25 00:04:26.812158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:31.119 [2024-07-25 00:04:26.812169] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:31.119 [2024-07-25 00:04:26.812184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:31.119 [2024-07-25 00:04:26.812195] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:31.119 [2024-07-25 00:04:26.812209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.119 00:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.378 00:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:31.378 "name": "Existed_Raid", 00:20:31.378 "uuid": "8aee7572-4a06-4a98-8657-4b9f57425d58", 00:20:31.378 "strip_size_kb": 64, 00:20:31.378 "state": "configuring", 00:20:31.378 "raid_level": "raid0", 00:20:31.378 "superblock": true, 00:20:31.378 "num_base_bdevs": 4, 00:20:31.378 "num_base_bdevs_discovered": 0, 00:20:31.378 "num_base_bdevs_operational": 4, 00:20:31.378 "base_bdevs_list": [ 00:20:31.378 { 00:20:31.378 "name": "BaseBdev1", 00:20:31.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.378 "is_configured": false, 00:20:31.378 "data_offset": 0, 00:20:31.378 "data_size": 0 00:20:31.378 }, 00:20:31.378 { 00:20:31.378 "name": "BaseBdev2", 00:20:31.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.378 "is_configured": false, 00:20:31.378 "data_offset": 0, 00:20:31.378 "data_size": 0 00:20:31.378 }, 00:20:31.378 { 00:20:31.378 "name": "BaseBdev3", 00:20:31.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.378 "is_configured": false, 00:20:31.378 "data_offset": 0, 00:20:31.378 "data_size": 0 00:20:31.378 }, 00:20:31.378 { 00:20:31.378 "name": "BaseBdev4", 00:20:31.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.378 "is_configured": false, 00:20:31.378 "data_offset": 0, 00:20:31.378 "data_size": 0 00:20:31.378 } 00:20:31.378 ] 00:20:31.378 }' 00:20:31.378 00:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:31.378 00:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.637 00:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:31.896 [2024-07-25 00:04:27.655944] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:31.896 [2024-07-25 00:04:27.656018] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:20:31.896 00:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:32.154 [2024-07-25 00:04:27.924105] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:32.154 
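The superblock variant differs from the pass above only in the -s flag on create; the call below is copied from the trace, and the data-layout note is taken from the state dumps in this log rather than from any external reference:

# Same create as raid_state_function_test, plus -s to write a superblock to each base bdev.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# With -s each base bdev reserves room for the on-disk superblock: the dumps below report
# data_offset 2048 and data_size 63488 for a 65536-block malloc disk, where the first
# (superblock-less) test reported data_offset 0 and data_size 65536.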
[2024-07-25 00:04:27.924175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:32.154 [2024-07-25 00:04:27.924191] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:32.154 [2024-07-25 00:04:27.924207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:32.154 [2024-07-25 00:04:27.924216] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:32.154 [2024-07-25 00:04:27.924230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:32.154 [2024-07-25 00:04:27.924239] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:32.154 [2024-07-25 00:04:27.924251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:32.154 00:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:32.412 [2024-07-25 00:04:28.183194] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:32.412 BaseBdev1 00:20:32.412 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:32.412 00:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:32.412 00:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:32.412 00:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:32.412 00:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:32.412 00:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:32.412 00:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:32.671 00:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:32.930 [ 00:20:32.930 { 00:20:32.930 "name": "BaseBdev1", 00:20:32.930 "aliases": [ 00:20:32.930 "0ec1d144-ffc4-493b-b950-b29c7ae0b795" 00:20:32.930 ], 00:20:32.930 "product_name": "Malloc disk", 00:20:32.930 "block_size": 512, 00:20:32.930 "num_blocks": 65536, 00:20:32.930 "uuid": "0ec1d144-ffc4-493b-b950-b29c7ae0b795", 00:20:32.930 "assigned_rate_limits": { 00:20:32.930 "rw_ios_per_sec": 0, 00:20:32.930 "rw_mbytes_per_sec": 0, 00:20:32.930 "r_mbytes_per_sec": 0, 00:20:32.930 "w_mbytes_per_sec": 0 00:20:32.930 }, 00:20:32.930 "claimed": true, 00:20:32.930 "claim_type": "exclusive_write", 00:20:32.930 "zoned": false, 00:20:32.930 "supported_io_types": { 00:20:32.930 "read": true, 00:20:32.930 "write": true, 00:20:32.930 "unmap": true, 00:20:32.930 "flush": true, 00:20:32.930 "reset": true, 00:20:32.930 "nvme_admin": false, 00:20:32.930 "nvme_io": false, 00:20:32.931 "nvme_io_md": false, 00:20:32.931 "write_zeroes": true, 00:20:32.931 "zcopy": true, 00:20:32.931 "get_zone_info": false, 00:20:32.931 "zone_management": false, 00:20:32.931 "zone_append": false, 00:20:32.931 "compare": false, 00:20:32.931 "compare_and_write": false, 00:20:32.931 "abort": true, 00:20:32.931 "seek_hole": false, 
00:20:32.931 "seek_data": false, 00:20:32.931 "copy": true, 00:20:32.931 "nvme_iov_md": false 00:20:32.931 }, 00:20:32.931 "memory_domains": [ 00:20:32.931 { 00:20:32.931 "dma_device_id": "system", 00:20:32.931 "dma_device_type": 1 00:20:32.931 }, 00:20:32.931 { 00:20:32.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.931 "dma_device_type": 2 00:20:32.931 } 00:20:32.931 ], 00:20:32.931 "driver_specific": {} 00:20:32.931 } 00:20:32.931 ] 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.931 00:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.190 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.190 "name": "Existed_Raid", 00:20:33.190 "uuid": "52a4a787-e9ec-40dd-9a46-55e3ac45d341", 00:20:33.190 "strip_size_kb": 64, 00:20:33.190 "state": "configuring", 00:20:33.190 "raid_level": "raid0", 00:20:33.190 "superblock": true, 00:20:33.190 "num_base_bdevs": 4, 00:20:33.190 "num_base_bdevs_discovered": 1, 00:20:33.190 "num_base_bdevs_operational": 4, 00:20:33.190 "base_bdevs_list": [ 00:20:33.190 { 00:20:33.190 "name": "BaseBdev1", 00:20:33.190 "uuid": "0ec1d144-ffc4-493b-b950-b29c7ae0b795", 00:20:33.190 "is_configured": true, 00:20:33.190 "data_offset": 2048, 00:20:33.190 "data_size": 63488 00:20:33.190 }, 00:20:33.190 { 00:20:33.190 "name": "BaseBdev2", 00:20:33.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.190 "is_configured": false, 00:20:33.190 "data_offset": 0, 00:20:33.190 "data_size": 0 00:20:33.190 }, 00:20:33.190 { 00:20:33.190 "name": "BaseBdev3", 00:20:33.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.190 "is_configured": false, 00:20:33.190 "data_offset": 0, 00:20:33.190 "data_size": 0 00:20:33.190 }, 00:20:33.190 { 00:20:33.190 "name": "BaseBdev4", 00:20:33.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.190 "is_configured": false, 00:20:33.190 "data_offset": 0, 00:20:33.190 "data_size": 0 00:20:33.190 } 00:20:33.190 ] 00:20:33.190 }' 00:20:33.190 00:04:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.190 00:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.756 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:34.014 [2024-07-25 00:04:29.635770] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:34.015 [2024-07-25 00:04:29.636085] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:20:34.015 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:34.273 [2024-07-25 00:04:29.895967] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.273 [2024-07-25 00:04:29.898656] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:34.273 [2024-07-25 00:04:29.898878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:34.273 [2024-07-25 00:04:29.898906] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:34.273 [2024-07-25 00:04:29.898925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:34.273 [2024-07-25 00:04:29.898946] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:34.273 [2024-07-25 00:04:29.898964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.273 00:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.532 00:04:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:34.532 "name": "Existed_Raid", 00:20:34.532 "uuid": "c6a06a16-158e-4b9e-8385-fbde5a5feb9a", 00:20:34.532 "strip_size_kb": 64, 00:20:34.532 "state": "configuring", 00:20:34.532 "raid_level": "raid0", 00:20:34.532 "superblock": true, 00:20:34.532 "num_base_bdevs": 4, 00:20:34.532 "num_base_bdevs_discovered": 1, 00:20:34.532 "num_base_bdevs_operational": 4, 00:20:34.532 "base_bdevs_list": [ 00:20:34.532 { 00:20:34.532 "name": "BaseBdev1", 00:20:34.532 "uuid": "0ec1d144-ffc4-493b-b950-b29c7ae0b795", 00:20:34.532 "is_configured": true, 00:20:34.532 "data_offset": 2048, 00:20:34.532 "data_size": 63488 00:20:34.532 }, 00:20:34.532 { 00:20:34.532 "name": "BaseBdev2", 00:20:34.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.532 "is_configured": false, 00:20:34.532 "data_offset": 0, 00:20:34.532 "data_size": 0 00:20:34.532 }, 00:20:34.532 { 00:20:34.532 "name": "BaseBdev3", 00:20:34.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.532 "is_configured": false, 00:20:34.532 "data_offset": 0, 00:20:34.532 "data_size": 0 00:20:34.532 }, 00:20:34.532 { 00:20:34.532 "name": "BaseBdev4", 00:20:34.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.532 "is_configured": false, 00:20:34.532 "data_offset": 0, 00:20:34.532 "data_size": 0 00:20:34.532 } 00:20:34.532 ] 00:20:34.532 }' 00:20:34.532 00:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:34.532 00:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.791 00:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:35.050 [2024-07-25 00:04:30.727863] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:35.050 BaseBdev2 00:20:35.050 00:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:35.050 00:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:35.050 00:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:35.050 00:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:35.050 00:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:35.050 00:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:35.050 00:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:35.308 00:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:35.568 [ 00:20:35.568 { 00:20:35.568 "name": "BaseBdev2", 00:20:35.568 "aliases": [ 00:20:35.568 "00c4e272-265b-481b-a208-aca6de32cbde" 00:20:35.568 ], 00:20:35.568 "product_name": "Malloc disk", 00:20:35.568 "block_size": 512, 00:20:35.568 "num_blocks": 65536, 00:20:35.568 "uuid": "00c4e272-265b-481b-a208-aca6de32cbde", 00:20:35.568 "assigned_rate_limits": { 00:20:35.568 "rw_ios_per_sec": 0, 00:20:35.568 "rw_mbytes_per_sec": 0, 00:20:35.568 "r_mbytes_per_sec": 0, 00:20:35.568 "w_mbytes_per_sec": 
0 00:20:35.568 }, 00:20:35.568 "claimed": true, 00:20:35.568 "claim_type": "exclusive_write", 00:20:35.568 "zoned": false, 00:20:35.568 "supported_io_types": { 00:20:35.568 "read": true, 00:20:35.568 "write": true, 00:20:35.568 "unmap": true, 00:20:35.568 "flush": true, 00:20:35.568 "reset": true, 00:20:35.568 "nvme_admin": false, 00:20:35.568 "nvme_io": false, 00:20:35.568 "nvme_io_md": false, 00:20:35.568 "write_zeroes": true, 00:20:35.568 "zcopy": true, 00:20:35.568 "get_zone_info": false, 00:20:35.568 "zone_management": false, 00:20:35.568 "zone_append": false, 00:20:35.568 "compare": false, 00:20:35.568 "compare_and_write": false, 00:20:35.568 "abort": true, 00:20:35.568 "seek_hole": false, 00:20:35.568 "seek_data": false, 00:20:35.568 "copy": true, 00:20:35.568 "nvme_iov_md": false 00:20:35.568 }, 00:20:35.568 "memory_domains": [ 00:20:35.568 { 00:20:35.568 "dma_device_id": "system", 00:20:35.568 "dma_device_type": 1 00:20:35.568 }, 00:20:35.568 { 00:20:35.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.568 "dma_device_type": 2 00:20:35.568 } 00:20:35.568 ], 00:20:35.568 "driver_specific": {} 00:20:35.568 } 00:20:35.568 ] 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.568 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.828 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.828 "name": "Existed_Raid", 00:20:35.828 "uuid": "c6a06a16-158e-4b9e-8385-fbde5a5feb9a", 00:20:35.828 "strip_size_kb": 64, 00:20:35.828 "state": "configuring", 00:20:35.828 "raid_level": "raid0", 00:20:35.828 "superblock": true, 00:20:35.828 "num_base_bdevs": 4, 00:20:35.828 "num_base_bdevs_discovered": 2, 00:20:35.828 "num_base_bdevs_operational": 4, 00:20:35.828 "base_bdevs_list": [ 00:20:35.828 { 00:20:35.828 "name": "BaseBdev1", 00:20:35.828 
"uuid": "0ec1d144-ffc4-493b-b950-b29c7ae0b795", 00:20:35.828 "is_configured": true, 00:20:35.828 "data_offset": 2048, 00:20:35.828 "data_size": 63488 00:20:35.828 }, 00:20:35.828 { 00:20:35.828 "name": "BaseBdev2", 00:20:35.828 "uuid": "00c4e272-265b-481b-a208-aca6de32cbde", 00:20:35.828 "is_configured": true, 00:20:35.828 "data_offset": 2048, 00:20:35.828 "data_size": 63488 00:20:35.828 }, 00:20:35.828 { 00:20:35.828 "name": "BaseBdev3", 00:20:35.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.828 "is_configured": false, 00:20:35.828 "data_offset": 0, 00:20:35.828 "data_size": 0 00:20:35.828 }, 00:20:35.828 { 00:20:35.828 "name": "BaseBdev4", 00:20:35.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.828 "is_configured": false, 00:20:35.828 "data_offset": 0, 00:20:35.828 "data_size": 0 00:20:35.828 } 00:20:35.828 ] 00:20:35.828 }' 00:20:35.828 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.828 00:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.087 00:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:36.347 [2024-07-25 00:04:32.176752] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:36.347 BaseBdev3 00:20:36.347 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:36.347 00:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:36.347 00:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:36.347 00:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:36.347 00:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:36.347 00:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:36.347 00:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.606 00:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:36.866 [ 00:20:36.866 { 00:20:36.866 "name": "BaseBdev3", 00:20:36.866 "aliases": [ 00:20:36.866 "5acecf4e-e51e-481b-a26d-3fa94e280aea" 00:20:36.866 ], 00:20:36.866 "product_name": "Malloc disk", 00:20:36.866 "block_size": 512, 00:20:36.866 "num_blocks": 65536, 00:20:36.866 "uuid": "5acecf4e-e51e-481b-a26d-3fa94e280aea", 00:20:36.866 "assigned_rate_limits": { 00:20:36.866 "rw_ios_per_sec": 0, 00:20:36.866 "rw_mbytes_per_sec": 0, 00:20:36.866 "r_mbytes_per_sec": 0, 00:20:36.866 "w_mbytes_per_sec": 0 00:20:36.866 }, 00:20:36.866 "claimed": true, 00:20:36.866 "claim_type": "exclusive_write", 00:20:36.866 "zoned": false, 00:20:36.866 "supported_io_types": { 00:20:36.866 "read": true, 00:20:36.866 "write": true, 00:20:36.866 "unmap": true, 00:20:36.866 "flush": true, 00:20:36.866 "reset": true, 00:20:36.866 "nvme_admin": false, 00:20:36.866 "nvme_io": false, 00:20:36.866 "nvme_io_md": false, 00:20:36.866 "write_zeroes": true, 00:20:36.866 "zcopy": true, 00:20:36.866 "get_zone_info": false, 00:20:36.866 "zone_management": false, 
00:20:36.866 "zone_append": false, 00:20:36.866 "compare": false, 00:20:36.866 "compare_and_write": false, 00:20:36.866 "abort": true, 00:20:36.866 "seek_hole": false, 00:20:36.866 "seek_data": false, 00:20:36.866 "copy": true, 00:20:36.866 "nvme_iov_md": false 00:20:36.866 }, 00:20:36.866 "memory_domains": [ 00:20:36.866 { 00:20:36.866 "dma_device_id": "system", 00:20:36.866 "dma_device_type": 1 00:20:36.866 }, 00:20:36.866 { 00:20:36.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.866 "dma_device_type": 2 00:20:36.866 } 00:20:36.866 ], 00:20:36.866 "driver_specific": {} 00:20:36.866 } 00:20:36.866 ] 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.129 "name": "Existed_Raid", 00:20:37.129 "uuid": "c6a06a16-158e-4b9e-8385-fbde5a5feb9a", 00:20:37.129 "strip_size_kb": 64, 00:20:37.129 "state": "configuring", 00:20:37.129 "raid_level": "raid0", 00:20:37.129 "superblock": true, 00:20:37.129 "num_base_bdevs": 4, 00:20:37.129 "num_base_bdevs_discovered": 3, 00:20:37.129 "num_base_bdevs_operational": 4, 00:20:37.129 "base_bdevs_list": [ 00:20:37.129 { 00:20:37.129 "name": "BaseBdev1", 00:20:37.129 "uuid": "0ec1d144-ffc4-493b-b950-b29c7ae0b795", 00:20:37.129 "is_configured": true, 00:20:37.129 "data_offset": 2048, 00:20:37.129 "data_size": 63488 00:20:37.129 }, 00:20:37.129 { 00:20:37.129 "name": "BaseBdev2", 00:20:37.129 "uuid": "00c4e272-265b-481b-a208-aca6de32cbde", 00:20:37.129 "is_configured": true, 00:20:37.129 "data_offset": 2048, 00:20:37.129 "data_size": 63488 00:20:37.129 }, 00:20:37.129 { 00:20:37.129 "name": "BaseBdev3", 00:20:37.129 "uuid": "5acecf4e-e51e-481b-a26d-3fa94e280aea", 00:20:37.129 "is_configured": true, 
00:20:37.129 "data_offset": 2048, 00:20:37.129 "data_size": 63488 00:20:37.129 }, 00:20:37.129 { 00:20:37.129 "name": "BaseBdev4", 00:20:37.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.129 "is_configured": false, 00:20:37.129 "data_offset": 0, 00:20:37.129 "data_size": 0 00:20:37.129 } 00:20:37.129 ] 00:20:37.129 }' 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.129 00:04:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.699 00:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:37.699 [2024-07-25 00:04:33.562972] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:37.699 [2024-07-25 00:04:33.563558] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:20:37.699 [2024-07-25 00:04:33.563583] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:37.699 [2024-07-25 00:04:33.563702] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:20:37.699 [2024-07-25 00:04:33.564096] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:20:37.699 [2024-07-25 00:04:33.564118] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:20:37.699 [2024-07-25 00:04:33.564282] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.699 BaseBdev4 00:20:37.959 00:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:37.959 00:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:37.959 00:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:37.959 00:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:37.959 00:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:37.959 00:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:37.959 00:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:38.217 00:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:38.217 [ 00:20:38.217 { 00:20:38.217 "name": "BaseBdev4", 00:20:38.217 "aliases": [ 00:20:38.217 "082d9303-4e82-470b-b974-8758ef3039fc" 00:20:38.217 ], 00:20:38.217 "product_name": "Malloc disk", 00:20:38.217 "block_size": 512, 00:20:38.217 "num_blocks": 65536, 00:20:38.217 "uuid": "082d9303-4e82-470b-b974-8758ef3039fc", 00:20:38.217 "assigned_rate_limits": { 00:20:38.217 "rw_ios_per_sec": 0, 00:20:38.217 "rw_mbytes_per_sec": 0, 00:20:38.217 "r_mbytes_per_sec": 0, 00:20:38.217 "w_mbytes_per_sec": 0 00:20:38.217 }, 00:20:38.217 "claimed": true, 00:20:38.217 "claim_type": "exclusive_write", 00:20:38.217 "zoned": false, 00:20:38.217 "supported_io_types": { 00:20:38.217 "read": true, 00:20:38.217 "write": true, 00:20:38.217 "unmap": true, 00:20:38.217 "flush": true, 00:20:38.217 "reset": 
true, 00:20:38.217 "nvme_admin": false, 00:20:38.217 "nvme_io": false, 00:20:38.217 "nvme_io_md": false, 00:20:38.217 "write_zeroes": true, 00:20:38.217 "zcopy": true, 00:20:38.217 "get_zone_info": false, 00:20:38.217 "zone_management": false, 00:20:38.217 "zone_append": false, 00:20:38.217 "compare": false, 00:20:38.217 "compare_and_write": false, 00:20:38.217 "abort": true, 00:20:38.217 "seek_hole": false, 00:20:38.217 "seek_data": false, 00:20:38.217 "copy": true, 00:20:38.217 "nvme_iov_md": false 00:20:38.217 }, 00:20:38.217 "memory_domains": [ 00:20:38.217 { 00:20:38.217 "dma_device_id": "system", 00:20:38.217 "dma_device_type": 1 00:20:38.217 }, 00:20:38.217 { 00:20:38.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.217 "dma_device_type": 2 00:20:38.217 } 00:20:38.217 ], 00:20:38.217 "driver_specific": {} 00:20:38.217 } 00:20:38.217 ] 00:20:38.217 00:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:38.217 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:38.217 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:38.217 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:38.217 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:38.217 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:38.218 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:38.218 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:38.218 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:38.218 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.218 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.218 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.218 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.218 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.218 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.476 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.476 "name": "Existed_Raid", 00:20:38.476 "uuid": "c6a06a16-158e-4b9e-8385-fbde5a5feb9a", 00:20:38.476 "strip_size_kb": 64, 00:20:38.476 "state": "online", 00:20:38.476 "raid_level": "raid0", 00:20:38.476 "superblock": true, 00:20:38.476 "num_base_bdevs": 4, 00:20:38.476 "num_base_bdevs_discovered": 4, 00:20:38.476 "num_base_bdevs_operational": 4, 00:20:38.476 "base_bdevs_list": [ 00:20:38.476 { 00:20:38.476 "name": "BaseBdev1", 00:20:38.476 "uuid": "0ec1d144-ffc4-493b-b950-b29c7ae0b795", 00:20:38.476 "is_configured": true, 00:20:38.476 "data_offset": 2048, 00:20:38.476 "data_size": 63488 00:20:38.476 }, 00:20:38.476 { 00:20:38.476 "name": "BaseBdev2", 00:20:38.476 "uuid": "00c4e272-265b-481b-a208-aca6de32cbde", 00:20:38.476 "is_configured": true, 
00:20:38.476 "data_offset": 2048, 00:20:38.476 "data_size": 63488 00:20:38.476 }, 00:20:38.476 { 00:20:38.476 "name": "BaseBdev3", 00:20:38.476 "uuid": "5acecf4e-e51e-481b-a26d-3fa94e280aea", 00:20:38.476 "is_configured": true, 00:20:38.476 "data_offset": 2048, 00:20:38.476 "data_size": 63488 00:20:38.476 }, 00:20:38.476 { 00:20:38.476 "name": "BaseBdev4", 00:20:38.476 "uuid": "082d9303-4e82-470b-b974-8758ef3039fc", 00:20:38.476 "is_configured": true, 00:20:38.476 "data_offset": 2048, 00:20:38.476 "data_size": 63488 00:20:38.476 } 00:20:38.476 ] 00:20:38.476 }' 00:20:38.476 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.476 00:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.043 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:39.043 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:39.043 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:39.043 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:39.043 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:39.043 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:39.043 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:39.044 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:39.044 [2024-07-25 00:04:34.859923] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:39.044 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:39.044 "name": "Existed_Raid", 00:20:39.044 "aliases": [ 00:20:39.044 "c6a06a16-158e-4b9e-8385-fbde5a5feb9a" 00:20:39.044 ], 00:20:39.044 "product_name": "Raid Volume", 00:20:39.044 "block_size": 512, 00:20:39.044 "num_blocks": 253952, 00:20:39.044 "uuid": "c6a06a16-158e-4b9e-8385-fbde5a5feb9a", 00:20:39.044 "assigned_rate_limits": { 00:20:39.044 "rw_ios_per_sec": 0, 00:20:39.044 "rw_mbytes_per_sec": 0, 00:20:39.044 "r_mbytes_per_sec": 0, 00:20:39.044 "w_mbytes_per_sec": 0 00:20:39.044 }, 00:20:39.044 "claimed": false, 00:20:39.044 "zoned": false, 00:20:39.044 "supported_io_types": { 00:20:39.044 "read": true, 00:20:39.044 "write": true, 00:20:39.044 "unmap": true, 00:20:39.044 "flush": true, 00:20:39.044 "reset": true, 00:20:39.044 "nvme_admin": false, 00:20:39.044 "nvme_io": false, 00:20:39.044 "nvme_io_md": false, 00:20:39.044 "write_zeroes": true, 00:20:39.044 "zcopy": false, 00:20:39.044 "get_zone_info": false, 00:20:39.044 "zone_management": false, 00:20:39.044 "zone_append": false, 00:20:39.044 "compare": false, 00:20:39.044 "compare_and_write": false, 00:20:39.044 "abort": false, 00:20:39.044 "seek_hole": false, 00:20:39.044 "seek_data": false, 00:20:39.044 "copy": false, 00:20:39.044 "nvme_iov_md": false 00:20:39.044 }, 00:20:39.044 "memory_domains": [ 00:20:39.044 { 00:20:39.044 "dma_device_id": "system", 00:20:39.044 "dma_device_type": 1 00:20:39.044 }, 00:20:39.044 { 00:20:39.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.044 "dma_device_type": 2 00:20:39.044 }, 00:20:39.044 { 00:20:39.044 "dma_device_id": "system", 
00:20:39.044 "dma_device_type": 1 00:20:39.044 }, 00:20:39.044 { 00:20:39.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.044 "dma_device_type": 2 00:20:39.044 }, 00:20:39.044 { 00:20:39.044 "dma_device_id": "system", 00:20:39.044 "dma_device_type": 1 00:20:39.044 }, 00:20:39.044 { 00:20:39.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.044 "dma_device_type": 2 00:20:39.044 }, 00:20:39.044 { 00:20:39.044 "dma_device_id": "system", 00:20:39.044 "dma_device_type": 1 00:20:39.044 }, 00:20:39.044 { 00:20:39.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.044 "dma_device_type": 2 00:20:39.044 } 00:20:39.044 ], 00:20:39.044 "driver_specific": { 00:20:39.044 "raid": { 00:20:39.044 "uuid": "c6a06a16-158e-4b9e-8385-fbde5a5feb9a", 00:20:39.044 "strip_size_kb": 64, 00:20:39.044 "state": "online", 00:20:39.044 "raid_level": "raid0", 00:20:39.044 "superblock": true, 00:20:39.044 "num_base_bdevs": 4, 00:20:39.044 "num_base_bdevs_discovered": 4, 00:20:39.044 "num_base_bdevs_operational": 4, 00:20:39.044 "base_bdevs_list": [ 00:20:39.044 { 00:20:39.044 "name": "BaseBdev1", 00:20:39.044 "uuid": "0ec1d144-ffc4-493b-b950-b29c7ae0b795", 00:20:39.044 "is_configured": true, 00:20:39.044 "data_offset": 2048, 00:20:39.044 "data_size": 63488 00:20:39.044 }, 00:20:39.044 { 00:20:39.044 "name": "BaseBdev2", 00:20:39.044 "uuid": "00c4e272-265b-481b-a208-aca6de32cbde", 00:20:39.044 "is_configured": true, 00:20:39.044 "data_offset": 2048, 00:20:39.044 "data_size": 63488 00:20:39.044 }, 00:20:39.044 { 00:20:39.044 "name": "BaseBdev3", 00:20:39.044 "uuid": "5acecf4e-e51e-481b-a26d-3fa94e280aea", 00:20:39.044 "is_configured": true, 00:20:39.044 "data_offset": 2048, 00:20:39.044 "data_size": 63488 00:20:39.044 }, 00:20:39.044 { 00:20:39.044 "name": "BaseBdev4", 00:20:39.044 "uuid": "082d9303-4e82-470b-b974-8758ef3039fc", 00:20:39.044 "is_configured": true, 00:20:39.044 "data_offset": 2048, 00:20:39.044 "data_size": 63488 00:20:39.044 } 00:20:39.044 ] 00:20:39.044 } 00:20:39.044 } 00:20:39.044 }' 00:20:39.044 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:39.044 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:39.044 BaseBdev2 00:20:39.044 BaseBdev3 00:20:39.044 BaseBdev4' 00:20:39.044 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.044 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:39.044 00:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:39.303 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:39.303 "name": "BaseBdev1", 00:20:39.303 "aliases": [ 00:20:39.303 "0ec1d144-ffc4-493b-b950-b29c7ae0b795" 00:20:39.303 ], 00:20:39.303 "product_name": "Malloc disk", 00:20:39.303 "block_size": 512, 00:20:39.303 "num_blocks": 65536, 00:20:39.303 "uuid": "0ec1d144-ffc4-493b-b950-b29c7ae0b795", 00:20:39.303 "assigned_rate_limits": { 00:20:39.303 "rw_ios_per_sec": 0, 00:20:39.303 "rw_mbytes_per_sec": 0, 00:20:39.303 "r_mbytes_per_sec": 0, 00:20:39.303 "w_mbytes_per_sec": 0 00:20:39.303 }, 00:20:39.303 "claimed": true, 00:20:39.303 "claim_type": "exclusive_write", 00:20:39.303 "zoned": false, 00:20:39.303 "supported_io_types": { 00:20:39.303 
"read": true, 00:20:39.303 "write": true, 00:20:39.303 "unmap": true, 00:20:39.303 "flush": true, 00:20:39.303 "reset": true, 00:20:39.303 "nvme_admin": false, 00:20:39.303 "nvme_io": false, 00:20:39.303 "nvme_io_md": false, 00:20:39.303 "write_zeroes": true, 00:20:39.303 "zcopy": true, 00:20:39.303 "get_zone_info": false, 00:20:39.303 "zone_management": false, 00:20:39.303 "zone_append": false, 00:20:39.303 "compare": false, 00:20:39.303 "compare_and_write": false, 00:20:39.303 "abort": true, 00:20:39.303 "seek_hole": false, 00:20:39.303 "seek_data": false, 00:20:39.303 "copy": true, 00:20:39.303 "nvme_iov_md": false 00:20:39.303 }, 00:20:39.303 "memory_domains": [ 00:20:39.303 { 00:20:39.303 "dma_device_id": "system", 00:20:39.303 "dma_device_type": 1 00:20:39.303 }, 00:20:39.303 { 00:20:39.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.303 "dma_device_type": 2 00:20:39.303 } 00:20:39.303 ], 00:20:39.303 "driver_specific": {} 00:20:39.303 }' 00:20:39.303 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:39.562 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:39.821 "name": "BaseBdev2", 00:20:39.821 "aliases": [ 00:20:39.821 "00c4e272-265b-481b-a208-aca6de32cbde" 00:20:39.821 ], 00:20:39.821 "product_name": "Malloc disk", 00:20:39.821 "block_size": 512, 00:20:39.821 "num_blocks": 65536, 00:20:39.821 "uuid": "00c4e272-265b-481b-a208-aca6de32cbde", 00:20:39.821 "assigned_rate_limits": { 00:20:39.821 "rw_ios_per_sec": 0, 00:20:39.821 "rw_mbytes_per_sec": 0, 00:20:39.821 "r_mbytes_per_sec": 0, 00:20:39.821 "w_mbytes_per_sec": 0 00:20:39.821 }, 00:20:39.821 "claimed": true, 00:20:39.821 "claim_type": "exclusive_write", 00:20:39.821 "zoned": false, 00:20:39.821 "supported_io_types": { 00:20:39.821 "read": true, 00:20:39.821 "write": true, 00:20:39.821 "unmap": true, 00:20:39.821 "flush": true, 00:20:39.821 "reset": true, 00:20:39.821 "nvme_admin": 
false, 00:20:39.821 "nvme_io": false, 00:20:39.821 "nvme_io_md": false, 00:20:39.821 "write_zeroes": true, 00:20:39.821 "zcopy": true, 00:20:39.821 "get_zone_info": false, 00:20:39.821 "zone_management": false, 00:20:39.821 "zone_append": false, 00:20:39.821 "compare": false, 00:20:39.821 "compare_and_write": false, 00:20:39.821 "abort": true, 00:20:39.821 "seek_hole": false, 00:20:39.821 "seek_data": false, 00:20:39.821 "copy": true, 00:20:39.821 "nvme_iov_md": false 00:20:39.821 }, 00:20:39.821 "memory_domains": [ 00:20:39.821 { 00:20:39.821 "dma_device_id": "system", 00:20:39.821 "dma_device_type": 1 00:20:39.821 }, 00:20:39.821 { 00:20:39.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.821 "dma_device_type": 2 00:20:39.821 } 00:20:39.821 ], 00:20:39.821 "driver_specific": {} 00:20:39.821 }' 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:39.821 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.081 "name": "BaseBdev3", 00:20:40.081 "aliases": [ 00:20:40.081 "5acecf4e-e51e-481b-a26d-3fa94e280aea" 00:20:40.081 ], 00:20:40.081 "product_name": "Malloc disk", 00:20:40.081 "block_size": 512, 00:20:40.081 "num_blocks": 65536, 00:20:40.081 "uuid": "5acecf4e-e51e-481b-a26d-3fa94e280aea", 00:20:40.081 "assigned_rate_limits": { 00:20:40.081 "rw_ios_per_sec": 0, 00:20:40.081 "rw_mbytes_per_sec": 0, 00:20:40.081 "r_mbytes_per_sec": 0, 00:20:40.081 "w_mbytes_per_sec": 0 00:20:40.081 }, 00:20:40.081 "claimed": true, 00:20:40.081 "claim_type": "exclusive_write", 00:20:40.081 "zoned": false, 00:20:40.081 "supported_io_types": { 00:20:40.081 "read": true, 00:20:40.081 "write": true, 00:20:40.081 "unmap": true, 00:20:40.081 "flush": true, 00:20:40.081 "reset": true, 00:20:40.081 "nvme_admin": false, 00:20:40.081 "nvme_io": false, 00:20:40.081 "nvme_io_md": false, 00:20:40.081 "write_zeroes": true, 00:20:40.081 "zcopy": true, 00:20:40.081 
"get_zone_info": false, 00:20:40.081 "zone_management": false, 00:20:40.081 "zone_append": false, 00:20:40.081 "compare": false, 00:20:40.081 "compare_and_write": false, 00:20:40.081 "abort": true, 00:20:40.081 "seek_hole": false, 00:20:40.081 "seek_data": false, 00:20:40.081 "copy": true, 00:20:40.081 "nvme_iov_md": false 00:20:40.081 }, 00:20:40.081 "memory_domains": [ 00:20:40.081 { 00:20:40.081 "dma_device_id": "system", 00:20:40.081 "dma_device_type": 1 00:20:40.081 }, 00:20:40.081 { 00:20:40.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.081 "dma_device_type": 2 00:20:40.081 } 00:20:40.081 ], 00:20:40.081 "driver_specific": {} 00:20:40.081 }' 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:40.081 00:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.340 "name": "BaseBdev4", 00:20:40.340 "aliases": [ 00:20:40.340 "082d9303-4e82-470b-b974-8758ef3039fc" 00:20:40.340 ], 00:20:40.340 "product_name": "Malloc disk", 00:20:40.340 "block_size": 512, 00:20:40.340 "num_blocks": 65536, 00:20:40.340 "uuid": "082d9303-4e82-470b-b974-8758ef3039fc", 00:20:40.340 "assigned_rate_limits": { 00:20:40.340 "rw_ios_per_sec": 0, 00:20:40.340 "rw_mbytes_per_sec": 0, 00:20:40.340 "r_mbytes_per_sec": 0, 00:20:40.340 "w_mbytes_per_sec": 0 00:20:40.340 }, 00:20:40.340 "claimed": true, 00:20:40.340 "claim_type": "exclusive_write", 00:20:40.340 "zoned": false, 00:20:40.340 "supported_io_types": { 00:20:40.340 "read": true, 00:20:40.340 "write": true, 00:20:40.340 "unmap": true, 00:20:40.340 "flush": true, 00:20:40.340 "reset": true, 00:20:40.340 "nvme_admin": false, 00:20:40.340 "nvme_io": false, 00:20:40.340 "nvme_io_md": false, 00:20:40.340 "write_zeroes": true, 00:20:40.340 "zcopy": true, 00:20:40.340 "get_zone_info": false, 00:20:40.340 "zone_management": false, 00:20:40.340 "zone_append": false, 00:20:40.340 "compare": false, 00:20:40.340 
"compare_and_write": false, 00:20:40.340 "abort": true, 00:20:40.340 "seek_hole": false, 00:20:40.340 "seek_data": false, 00:20:40.340 "copy": true, 00:20:40.340 "nvme_iov_md": false 00:20:40.340 }, 00:20:40.340 "memory_domains": [ 00:20:40.340 { 00:20:40.340 "dma_device_id": "system", 00:20:40.340 "dma_device_type": 1 00:20:40.340 }, 00:20:40.340 { 00:20:40.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.340 "dma_device_type": 2 00:20:40.340 } 00:20:40.340 ], 00:20:40.340 "driver_specific": {} 00:20:40.340 }' 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:40.340 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:40.599 [2024-07-25 00:04:36.456046] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:40.599 [2024-07-25 00:04:36.456321] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:40.599 [2024-07-25 00:04:36.456577] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.857 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.115 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.115 "name": "Existed_Raid", 00:20:41.115 "uuid": "c6a06a16-158e-4b9e-8385-fbde5a5feb9a", 00:20:41.115 "strip_size_kb": 64, 00:20:41.115 "state": "offline", 00:20:41.115 "raid_level": "raid0", 00:20:41.115 "superblock": true, 00:20:41.115 "num_base_bdevs": 4, 00:20:41.115 "num_base_bdevs_discovered": 3, 00:20:41.115 "num_base_bdevs_operational": 3, 00:20:41.115 "base_bdevs_list": [ 00:20:41.115 { 00:20:41.115 "name": null, 00:20:41.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.115 "is_configured": false, 00:20:41.115 "data_offset": 2048, 00:20:41.115 "data_size": 63488 00:20:41.115 }, 00:20:41.115 { 00:20:41.115 "name": "BaseBdev2", 00:20:41.115 "uuid": "00c4e272-265b-481b-a208-aca6de32cbde", 00:20:41.115 "is_configured": true, 00:20:41.115 "data_offset": 2048, 00:20:41.115 "data_size": 63488 00:20:41.115 }, 00:20:41.115 { 00:20:41.115 "name": "BaseBdev3", 00:20:41.115 "uuid": "5acecf4e-e51e-481b-a26d-3fa94e280aea", 00:20:41.115 "is_configured": true, 00:20:41.115 "data_offset": 2048, 00:20:41.115 "data_size": 63488 00:20:41.115 }, 00:20:41.115 { 00:20:41.115 "name": "BaseBdev4", 00:20:41.115 "uuid": "082d9303-4e82-470b-b974-8758ef3039fc", 00:20:41.115 "is_configured": true, 00:20:41.115 "data_offset": 2048, 00:20:41.115 "data_size": 63488 00:20:41.115 } 00:20:41.115 ] 00:20:41.115 }' 00:20:41.115 00:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.115 00:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.373 00:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:41.373 00:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:41.373 00:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.373 00:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:41.632 00:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:41.632 00:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:41.632 00:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:41.890 [2024-07-25 00:04:37.696581] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:42.149 00:04:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:42.149 00:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.149 00:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:42.149 00:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.407 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:42.407 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:42.407 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:42.407 [2024-07-25 00:04:38.220672] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:42.666 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:42.666 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.666 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.666 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:42.925 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:42.925 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:42.925 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:43.183 [2024-07-25 00:04:38.795745] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:43.183 [2024-07-25 00:04:38.796061] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:20:43.183 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:43.183 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:43.183 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:43.183 00:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.441 00:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:43.441 00:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:43.441 00:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:43.441 00:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:43.441 00:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:43.441 00:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:43.699 BaseBdev2 00:20:43.699 00:04:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:43.699 00:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:43.699 00:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:43.699 00:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:43.699 00:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:43.699 00:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:43.699 00:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:43.957 00:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:44.216 [ 00:20:44.216 { 00:20:44.216 "name": "BaseBdev2", 00:20:44.216 "aliases": [ 00:20:44.216 "14707f0a-3214-4f98-bac8-eb499c2f53db" 00:20:44.216 ], 00:20:44.216 "product_name": "Malloc disk", 00:20:44.216 "block_size": 512, 00:20:44.216 "num_blocks": 65536, 00:20:44.216 "uuid": "14707f0a-3214-4f98-bac8-eb499c2f53db", 00:20:44.216 "assigned_rate_limits": { 00:20:44.216 "rw_ios_per_sec": 0, 00:20:44.216 "rw_mbytes_per_sec": 0, 00:20:44.216 "r_mbytes_per_sec": 0, 00:20:44.216 "w_mbytes_per_sec": 0 00:20:44.216 }, 00:20:44.216 "claimed": false, 00:20:44.216 "zoned": false, 00:20:44.216 "supported_io_types": { 00:20:44.216 "read": true, 00:20:44.216 "write": true, 00:20:44.216 "unmap": true, 00:20:44.216 "flush": true, 00:20:44.216 "reset": true, 00:20:44.216 "nvme_admin": false, 00:20:44.216 "nvme_io": false, 00:20:44.216 "nvme_io_md": false, 00:20:44.216 "write_zeroes": true, 00:20:44.216 "zcopy": true, 00:20:44.216 "get_zone_info": false, 00:20:44.216 "zone_management": false, 00:20:44.217 "zone_append": false, 00:20:44.217 "compare": false, 00:20:44.217 "compare_and_write": false, 00:20:44.217 "abort": true, 00:20:44.217 "seek_hole": false, 00:20:44.217 "seek_data": false, 00:20:44.217 "copy": true, 00:20:44.217 "nvme_iov_md": false 00:20:44.217 }, 00:20:44.217 "memory_domains": [ 00:20:44.217 { 00:20:44.217 "dma_device_id": "system", 00:20:44.217 "dma_device_type": 1 00:20:44.217 }, 00:20:44.217 { 00:20:44.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.217 "dma_device_type": 2 00:20:44.217 } 00:20:44.217 ], 00:20:44.217 "driver_specific": {} 00:20:44.217 } 00:20:44.217 ] 00:20:44.217 00:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:44.217 00:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:44.217 00:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:44.217 00:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:44.478 BaseBdev3 00:20:44.478 00:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:44.478 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:44.478 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:20:44.478 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:44.478 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:44.478 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:44.478 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.744 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:44.744 [ 00:20:44.744 { 00:20:44.744 "name": "BaseBdev3", 00:20:44.744 "aliases": [ 00:20:44.744 "797f76cd-da14-42f3-8117-c33be5bb271a" 00:20:44.744 ], 00:20:44.744 "product_name": "Malloc disk", 00:20:44.744 "block_size": 512, 00:20:44.744 "num_blocks": 65536, 00:20:44.744 "uuid": "797f76cd-da14-42f3-8117-c33be5bb271a", 00:20:44.744 "assigned_rate_limits": { 00:20:44.744 "rw_ios_per_sec": 0, 00:20:44.744 "rw_mbytes_per_sec": 0, 00:20:44.744 "r_mbytes_per_sec": 0, 00:20:44.744 "w_mbytes_per_sec": 0 00:20:44.744 }, 00:20:44.744 "claimed": false, 00:20:44.744 "zoned": false, 00:20:44.744 "supported_io_types": { 00:20:44.744 "read": true, 00:20:44.744 "write": true, 00:20:44.744 "unmap": true, 00:20:44.744 "flush": true, 00:20:44.744 "reset": true, 00:20:44.744 "nvme_admin": false, 00:20:44.744 "nvme_io": false, 00:20:44.744 "nvme_io_md": false, 00:20:44.744 "write_zeroes": true, 00:20:44.744 "zcopy": true, 00:20:44.744 "get_zone_info": false, 00:20:44.744 "zone_management": false, 00:20:44.744 "zone_append": false, 00:20:44.744 "compare": false, 00:20:44.744 "compare_and_write": false, 00:20:44.744 "abort": true, 00:20:44.744 "seek_hole": false, 00:20:44.744 "seek_data": false, 00:20:44.744 "copy": true, 00:20:44.744 "nvme_iov_md": false 00:20:44.744 }, 00:20:44.744 "memory_domains": [ 00:20:44.744 { 00:20:44.744 "dma_device_id": "system", 00:20:44.744 "dma_device_type": 1 00:20:44.744 }, 00:20:44.744 { 00:20:44.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.744 "dma_device_type": 2 00:20:44.744 } 00:20:44.744 ], 00:20:44.744 "driver_specific": {} 00:20:44.744 } 00:20:44.744 ] 00:20:44.744 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:44.744 00:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:44.744 00:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:44.744 00:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:45.002 BaseBdev4 00:20:45.002 00:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:20:45.002 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:45.002 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:45.002 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:45.002 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:45.002 00:04:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:45.003 00:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:45.262 00:04:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:45.520 [ 00:20:45.520 { 00:20:45.520 "name": "BaseBdev4", 00:20:45.520 "aliases": [ 00:20:45.520 "4c115d7c-3a55-485e-a66a-3615ab7375c9" 00:20:45.520 ], 00:20:45.520 "product_name": "Malloc disk", 00:20:45.520 "block_size": 512, 00:20:45.520 "num_blocks": 65536, 00:20:45.520 "uuid": "4c115d7c-3a55-485e-a66a-3615ab7375c9", 00:20:45.520 "assigned_rate_limits": { 00:20:45.520 "rw_ios_per_sec": 0, 00:20:45.520 "rw_mbytes_per_sec": 0, 00:20:45.520 "r_mbytes_per_sec": 0, 00:20:45.520 "w_mbytes_per_sec": 0 00:20:45.520 }, 00:20:45.520 "claimed": false, 00:20:45.520 "zoned": false, 00:20:45.520 "supported_io_types": { 00:20:45.520 "read": true, 00:20:45.520 "write": true, 00:20:45.520 "unmap": true, 00:20:45.520 "flush": true, 00:20:45.520 "reset": true, 00:20:45.520 "nvme_admin": false, 00:20:45.520 "nvme_io": false, 00:20:45.520 "nvme_io_md": false, 00:20:45.520 "write_zeroes": true, 00:20:45.520 "zcopy": true, 00:20:45.520 "get_zone_info": false, 00:20:45.520 "zone_management": false, 00:20:45.520 "zone_append": false, 00:20:45.520 "compare": false, 00:20:45.520 "compare_and_write": false, 00:20:45.520 "abort": true, 00:20:45.520 "seek_hole": false, 00:20:45.520 "seek_data": false, 00:20:45.520 "copy": true, 00:20:45.520 "nvme_iov_md": false 00:20:45.520 }, 00:20:45.520 "memory_domains": [ 00:20:45.520 { 00:20:45.520 "dma_device_id": "system", 00:20:45.520 "dma_device_type": 1 00:20:45.520 }, 00:20:45.520 { 00:20:45.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.520 "dma_device_type": 2 00:20:45.520 } 00:20:45.520 ], 00:20:45.520 "driver_specific": {} 00:20:45.520 } 00:20:45.520 ] 00:20:45.520 00:04:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:45.520 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:45.520 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:45.520 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:45.778 [2024-07-25 00:04:41.577782] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:45.778 [2024-07-25 00:04:41.577890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:45.778 [2024-07-25 00:04:41.577926] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:45.778 [2024-07-25 00:04:41.580121] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:45.778 [2024-07-25 00:04:41.580203] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
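Rebuild phase: each surviving member is re-created with the bdev_malloc_create / bdev_wait_for_examine pattern above, and the array is then assembled with an on-disk superblock while BaseBdev1 is still missing, which is why it comes up "configuring" rather than "online". Equivalent standalone invocations (a sketch; sizes and names are the ones in this log):

    # 32 MB malloc bdev with 512 B blocks -> 65536 blocks, matching the
    # num_blocks field in the dumps above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b BaseBdev2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_wait_for_examine
    # RAID0, 64 KiB strip (-z 64), superblock enabled (-s); the absent
    # BaseBdev1 is tolerated ("base bdev BaseBdev1 doesn't exist now") and
    # leaves the array waiting in the "configuring" state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
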
00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.778 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.036 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.036 "name": "Existed_Raid", 00:20:46.036 "uuid": "f82da58f-f178-40c8-a069-d1164af08a24", 00:20:46.036 "strip_size_kb": 64, 00:20:46.037 "state": "configuring", 00:20:46.037 "raid_level": "raid0", 00:20:46.037 "superblock": true, 00:20:46.037 "num_base_bdevs": 4, 00:20:46.037 "num_base_bdevs_discovered": 3, 00:20:46.037 "num_base_bdevs_operational": 4, 00:20:46.037 "base_bdevs_list": [ 00:20:46.037 { 00:20:46.037 "name": "BaseBdev1", 00:20:46.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.037 "is_configured": false, 00:20:46.037 "data_offset": 0, 00:20:46.037 "data_size": 0 00:20:46.037 }, 00:20:46.037 { 00:20:46.037 "name": "BaseBdev2", 00:20:46.037 "uuid": "14707f0a-3214-4f98-bac8-eb499c2f53db", 00:20:46.037 "is_configured": true, 00:20:46.037 "data_offset": 2048, 00:20:46.037 "data_size": 63488 00:20:46.037 }, 00:20:46.037 { 00:20:46.037 "name": "BaseBdev3", 00:20:46.037 "uuid": "797f76cd-da14-42f3-8117-c33be5bb271a", 00:20:46.037 "is_configured": true, 00:20:46.037 "data_offset": 2048, 00:20:46.037 "data_size": 63488 00:20:46.037 }, 00:20:46.037 { 00:20:46.037 "name": "BaseBdev4", 00:20:46.037 "uuid": "4c115d7c-3a55-485e-a66a-3615ab7375c9", 00:20:46.037 "is_configured": true, 00:20:46.037 "data_offset": 2048, 00:20:46.037 "data_size": 63488 00:20:46.037 } 00:20:46.037 ] 00:20:46.037 }' 00:20:46.037 00:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.037 00:04:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:46.602 [2024-07-25 00:04:42.422053] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.602 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.861 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.861 "name": "Existed_Raid", 00:20:46.861 "uuid": "f82da58f-f178-40c8-a069-d1164af08a24", 00:20:46.861 "strip_size_kb": 64, 00:20:46.861 "state": "configuring", 00:20:46.861 "raid_level": "raid0", 00:20:46.861 "superblock": true, 00:20:46.861 "num_base_bdevs": 4, 00:20:46.861 "num_base_bdevs_discovered": 2, 00:20:46.861 "num_base_bdevs_operational": 4, 00:20:46.861 "base_bdevs_list": [ 00:20:46.861 { 00:20:46.861 "name": "BaseBdev1", 00:20:46.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.861 "is_configured": false, 00:20:46.861 "data_offset": 0, 00:20:46.861 "data_size": 0 00:20:46.861 }, 00:20:46.861 { 00:20:46.861 "name": null, 00:20:46.861 "uuid": "14707f0a-3214-4f98-bac8-eb499c2f53db", 00:20:46.861 "is_configured": false, 00:20:46.861 "data_offset": 2048, 00:20:46.861 "data_size": 63488 00:20:46.861 }, 00:20:46.861 { 00:20:46.861 "name": "BaseBdev3", 00:20:46.861 "uuid": "797f76cd-da14-42f3-8117-c33be5bb271a", 00:20:46.861 "is_configured": true, 00:20:46.861 "data_offset": 2048, 00:20:46.861 "data_size": 63488 00:20:46.861 }, 00:20:46.861 { 00:20:46.861 "name": "BaseBdev4", 00:20:46.861 "uuid": "4c115d7c-3a55-485e-a66a-3615ab7375c9", 00:20:46.861 "is_configured": true, 00:20:46.861 "data_offset": 2048, 00:20:46.861 "data_size": 63488 00:20:46.861 } 00:20:46.861 ] 00:20:46.861 }' 00:20:46.861 00:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.861 00:04:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.428 00:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:47.428 00:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.428 00:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:47.428 00:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:47.685 [2024-07-25 00:04:43.503211] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
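A member removed with bdev_raid_remove_base_bdev leaves a placeholder slot behind — "name": null with "is_configured": false — which the test asserts through jq. Creating a bdev under the missing name is then enough for the configuring array to claim it during examine, as the "bdev BaseBdev1 is claimed" message above shows. Sketch:

    # Placeholder slot left by the removal (index 1 == BaseBdev2's position)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'   # false
    # A new bdev with a matching name is claimed automatically
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b BaseBdev1
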
00:20:47.685 BaseBdev1 00:20:47.685 00:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:47.685 00:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:47.685 00:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:47.685 00:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:47.685 00:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:47.685 00:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:47.685 00:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:47.941 00:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:48.199 [ 00:20:48.199 { 00:20:48.199 "name": "BaseBdev1", 00:20:48.199 "aliases": [ 00:20:48.199 "9fd00588-b291-4ecd-95e8-b77823aa4bff" 00:20:48.199 ], 00:20:48.199 "product_name": "Malloc disk", 00:20:48.199 "block_size": 512, 00:20:48.199 "num_blocks": 65536, 00:20:48.200 "uuid": "9fd00588-b291-4ecd-95e8-b77823aa4bff", 00:20:48.200 "assigned_rate_limits": { 00:20:48.200 "rw_ios_per_sec": 0, 00:20:48.200 "rw_mbytes_per_sec": 0, 00:20:48.200 "r_mbytes_per_sec": 0, 00:20:48.200 "w_mbytes_per_sec": 0 00:20:48.200 }, 00:20:48.200 "claimed": true, 00:20:48.200 "claim_type": "exclusive_write", 00:20:48.200 "zoned": false, 00:20:48.200 "supported_io_types": { 00:20:48.200 "read": true, 00:20:48.200 "write": true, 00:20:48.200 "unmap": true, 00:20:48.200 "flush": true, 00:20:48.200 "reset": true, 00:20:48.200 "nvme_admin": false, 00:20:48.200 "nvme_io": false, 00:20:48.200 "nvme_io_md": false, 00:20:48.200 "write_zeroes": true, 00:20:48.200 "zcopy": true, 00:20:48.200 "get_zone_info": false, 00:20:48.200 "zone_management": false, 00:20:48.200 "zone_append": false, 00:20:48.200 "compare": false, 00:20:48.200 "compare_and_write": false, 00:20:48.200 "abort": true, 00:20:48.200 "seek_hole": false, 00:20:48.200 "seek_data": false, 00:20:48.200 "copy": true, 00:20:48.200 "nvme_iov_md": false 00:20:48.200 }, 00:20:48.200 "memory_domains": [ 00:20:48.200 { 00:20:48.200 "dma_device_id": "system", 00:20:48.200 "dma_device_type": 1 00:20:48.200 }, 00:20:48.200 { 00:20:48.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.200 "dma_device_type": 2 00:20:48.200 } 00:20:48.200 ], 00:20:48.200 "driver_specific": {} 00:20:48.200 } 00:20:48.200 ] 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.200 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.458 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.458 "name": "Existed_Raid", 00:20:48.458 "uuid": "f82da58f-f178-40c8-a069-d1164af08a24", 00:20:48.458 "strip_size_kb": 64, 00:20:48.458 "state": "configuring", 00:20:48.458 "raid_level": "raid0", 00:20:48.458 "superblock": true, 00:20:48.458 "num_base_bdevs": 4, 00:20:48.458 "num_base_bdevs_discovered": 3, 00:20:48.458 "num_base_bdevs_operational": 4, 00:20:48.458 "base_bdevs_list": [ 00:20:48.458 { 00:20:48.458 "name": "BaseBdev1", 00:20:48.458 "uuid": "9fd00588-b291-4ecd-95e8-b77823aa4bff", 00:20:48.458 "is_configured": true, 00:20:48.458 "data_offset": 2048, 00:20:48.458 "data_size": 63488 00:20:48.458 }, 00:20:48.458 { 00:20:48.458 "name": null, 00:20:48.458 "uuid": "14707f0a-3214-4f98-bac8-eb499c2f53db", 00:20:48.458 "is_configured": false, 00:20:48.458 "data_offset": 2048, 00:20:48.458 "data_size": 63488 00:20:48.458 }, 00:20:48.458 { 00:20:48.458 "name": "BaseBdev3", 00:20:48.458 "uuid": "797f76cd-da14-42f3-8117-c33be5bb271a", 00:20:48.458 "is_configured": true, 00:20:48.458 "data_offset": 2048, 00:20:48.458 "data_size": 63488 00:20:48.458 }, 00:20:48.458 { 00:20:48.458 "name": "BaseBdev4", 00:20:48.458 "uuid": "4c115d7c-3a55-485e-a66a-3615ab7375c9", 00:20:48.458 "is_configured": true, 00:20:48.458 "data_offset": 2048, 00:20:48.458 "data_size": 63488 00:20:48.458 } 00:20:48.458 ] 00:20:48.458 }' 00:20:48.458 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.458 00:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.025 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.025 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:49.025 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:49.025 00:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:49.284 [2024-07-25 00:04:45.063767] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.284 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.542 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.542 "name": "Existed_Raid", 00:20:49.542 "uuid": "f82da58f-f178-40c8-a069-d1164af08a24", 00:20:49.542 "strip_size_kb": 64, 00:20:49.542 "state": "configuring", 00:20:49.542 "raid_level": "raid0", 00:20:49.542 "superblock": true, 00:20:49.542 "num_base_bdevs": 4, 00:20:49.542 "num_base_bdevs_discovered": 2, 00:20:49.542 "num_base_bdevs_operational": 4, 00:20:49.542 "base_bdevs_list": [ 00:20:49.542 { 00:20:49.542 "name": "BaseBdev1", 00:20:49.542 "uuid": "9fd00588-b291-4ecd-95e8-b77823aa4bff", 00:20:49.542 "is_configured": true, 00:20:49.542 "data_offset": 2048, 00:20:49.542 "data_size": 63488 00:20:49.542 }, 00:20:49.542 { 00:20:49.542 "name": null, 00:20:49.542 "uuid": "14707f0a-3214-4f98-bac8-eb499c2f53db", 00:20:49.542 "is_configured": false, 00:20:49.542 "data_offset": 2048, 00:20:49.542 "data_size": 63488 00:20:49.542 }, 00:20:49.542 { 00:20:49.542 "name": null, 00:20:49.542 "uuid": "797f76cd-da14-42f3-8117-c33be5bb271a", 00:20:49.542 "is_configured": false, 00:20:49.542 "data_offset": 2048, 00:20:49.542 "data_size": 63488 00:20:49.542 }, 00:20:49.542 { 00:20:49.542 "name": "BaseBdev4", 00:20:49.542 "uuid": "4c115d7c-3a55-485e-a66a-3615ab7375c9", 00:20:49.542 "is_configured": true, 00:20:49.543 "data_offset": 2048, 00:20:49.543 "data_size": 63488 00:20:49.543 } 00:20:49.543 ] 00:20:49.543 }' 00:20:49.543 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.543 00:04:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.801 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:49.801 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.059 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:50.059 00:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:50.318 [2024-07-25 00:04:46.128093] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.318 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.577 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.577 "name": "Existed_Raid", 00:20:50.577 "uuid": "f82da58f-f178-40c8-a069-d1164af08a24", 00:20:50.577 "strip_size_kb": 64, 00:20:50.577 "state": "configuring", 00:20:50.577 "raid_level": "raid0", 00:20:50.577 "superblock": true, 00:20:50.577 "num_base_bdevs": 4, 00:20:50.577 "num_base_bdevs_discovered": 3, 00:20:50.577 "num_base_bdevs_operational": 4, 00:20:50.577 "base_bdevs_list": [ 00:20:50.577 { 00:20:50.577 "name": "BaseBdev1", 00:20:50.577 "uuid": "9fd00588-b291-4ecd-95e8-b77823aa4bff", 00:20:50.577 "is_configured": true, 00:20:50.577 "data_offset": 2048, 00:20:50.577 "data_size": 63488 00:20:50.577 }, 00:20:50.577 { 00:20:50.577 "name": null, 00:20:50.577 "uuid": "14707f0a-3214-4f98-bac8-eb499c2f53db", 00:20:50.577 "is_configured": false, 00:20:50.577 "data_offset": 2048, 00:20:50.577 "data_size": 63488 00:20:50.577 }, 00:20:50.577 { 00:20:50.577 "name": "BaseBdev3", 00:20:50.577 "uuid": "797f76cd-da14-42f3-8117-c33be5bb271a", 00:20:50.577 "is_configured": true, 00:20:50.577 "data_offset": 2048, 00:20:50.577 "data_size": 63488 00:20:50.577 }, 00:20:50.577 { 00:20:50.577 "name": "BaseBdev4", 00:20:50.577 "uuid": "4c115d7c-3a55-485e-a66a-3615ab7375c9", 00:20:50.577 "is_configured": true, 00:20:50.577 "data_offset": 2048, 00:20:50.577 "data_size": 63488 00:20:50.577 } 00:20:50.577 ] 00:20:50.577 }' 00:20:50.577 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.577 00:04:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.144 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.144 00:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:51.403 00:04:47 
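The explicit counterpart to the name-match path: bdev_raid_add_base_bdev re-attaches an existing bdev to a vacant slot, after which num_base_bdevs_discovered climbs back to 3 in the state dump above. Sketch with this run's names:

    # Re-attach BaseBdev3 to the still-configuring array
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_add_base_bdev Existed_Raid BaseBdev3
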
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:51.403 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:51.662 [2024-07-25 00:04:47.312498] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.662 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.921 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.921 "name": "Existed_Raid", 00:20:51.921 "uuid": "f82da58f-f178-40c8-a069-d1164af08a24", 00:20:51.921 "strip_size_kb": 64, 00:20:51.921 "state": "configuring", 00:20:51.921 "raid_level": "raid0", 00:20:51.921 "superblock": true, 00:20:51.921 "num_base_bdevs": 4, 00:20:51.921 "num_base_bdevs_discovered": 2, 00:20:51.921 "num_base_bdevs_operational": 4, 00:20:51.921 "base_bdevs_list": [ 00:20:51.921 { 00:20:51.921 "name": null, 00:20:51.921 "uuid": "9fd00588-b291-4ecd-95e8-b77823aa4bff", 00:20:51.921 "is_configured": false, 00:20:51.921 "data_offset": 2048, 00:20:51.921 "data_size": 63488 00:20:51.921 }, 00:20:51.921 { 00:20:51.921 "name": null, 00:20:51.921 "uuid": "14707f0a-3214-4f98-bac8-eb499c2f53db", 00:20:51.921 "is_configured": false, 00:20:51.921 "data_offset": 2048, 00:20:51.921 "data_size": 63488 00:20:51.921 }, 00:20:51.921 { 00:20:51.921 "name": "BaseBdev3", 00:20:51.921 "uuid": "797f76cd-da14-42f3-8117-c33be5bb271a", 00:20:51.921 "is_configured": true, 00:20:51.921 "data_offset": 2048, 00:20:51.921 "data_size": 63488 00:20:51.921 }, 00:20:51.921 { 00:20:51.921 "name": "BaseBdev4", 00:20:51.921 "uuid": "4c115d7c-3a55-485e-a66a-3615ab7375c9", 00:20:51.922 "is_configured": true, 00:20:51.922 "data_offset": 2048, 00:20:51.922 "data_size": 63488 00:20:51.922 } 00:20:51.922 ] 00:20:51.922 }' 00:20:51.922 00:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.922 00:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.180 
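Deleting BaseBdev1 again leaves a slot that carries no name at all, only the UUID recorded for it (9fd00588-...), so the test reads that UUID back in preparation for the NewBaseBdev step:

    # The vacated slot is identified purely by UUID from here on
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid'
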
00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.180 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:52.448 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:52.448 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:52.746 [2024-07-25 00:04:48.492727] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.746 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.021 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.021 "name": "Existed_Raid", 00:20:53.021 "uuid": "f82da58f-f178-40c8-a069-d1164af08a24", 00:20:53.021 "strip_size_kb": 64, 00:20:53.021 "state": "configuring", 00:20:53.021 "raid_level": "raid0", 00:20:53.021 "superblock": true, 00:20:53.021 "num_base_bdevs": 4, 00:20:53.021 "num_base_bdevs_discovered": 3, 00:20:53.021 "num_base_bdevs_operational": 4, 00:20:53.021 "base_bdevs_list": [ 00:20:53.021 { 00:20:53.021 "name": null, 00:20:53.021 "uuid": "9fd00588-b291-4ecd-95e8-b77823aa4bff", 00:20:53.021 "is_configured": false, 00:20:53.022 "data_offset": 2048, 00:20:53.022 "data_size": 63488 00:20:53.022 }, 00:20:53.022 { 00:20:53.022 "name": "BaseBdev2", 00:20:53.022 "uuid": "14707f0a-3214-4f98-bac8-eb499c2f53db", 00:20:53.022 "is_configured": true, 00:20:53.022 "data_offset": 2048, 00:20:53.022 "data_size": 63488 00:20:53.022 }, 00:20:53.022 { 00:20:53.022 "name": "BaseBdev3", 00:20:53.022 "uuid": "797f76cd-da14-42f3-8117-c33be5bb271a", 00:20:53.022 "is_configured": true, 00:20:53.022 "data_offset": 2048, 00:20:53.022 "data_size": 63488 00:20:53.022 }, 00:20:53.022 { 00:20:53.022 "name": "BaseBdev4", 00:20:53.022 "uuid": 
"4c115d7c-3a55-485e-a66a-3615ab7375c9", 00:20:53.022 "is_configured": true, 00:20:53.022 "data_offset": 2048, 00:20:53.022 "data_size": 63488 00:20:53.022 } 00:20:53.022 ] 00:20:53.022 }' 00:20:53.022 00:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.022 00:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.280 00:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.280 00:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:53.539 00:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:53.539 00:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.539 00:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:53.797 00:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9fd00588-b291-4ecd-95e8-b77823aa4bff 00:20:54.056 [2024-07-25 00:04:49.817849] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:54.056 [2024-07-25 00:04:49.818104] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:20:54.056 [2024-07-25 00:04:49.818124] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:54.056 [2024-07-25 00:04:49.818264] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ee0 00:20:54.056 [2024-07-25 00:04:49.818637] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:20:54.056 [2024-07-25 00:04:49.818661] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000009380 00:20:54.056 [2024-07-25 00:04:49.818872] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.056 NewBaseBdev 00:20:54.056 00:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:54.056 00:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:54.056 00:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:54.056 00:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:54.056 00:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:54.056 00:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:54.056 00:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.315 00:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:54.573 [ 00:20:54.573 { 00:20:54.573 "name": "NewBaseBdev", 00:20:54.573 "aliases": [ 00:20:54.573 "9fd00588-b291-4ecd-95e8-b77823aa4bff" 
00:20:54.573 ], 00:20:54.573 "product_name": "Malloc disk", 00:20:54.573 "block_size": 512, 00:20:54.573 "num_blocks": 65536, 00:20:54.573 "uuid": "9fd00588-b291-4ecd-95e8-b77823aa4bff", 00:20:54.573 "assigned_rate_limits": { 00:20:54.573 "rw_ios_per_sec": 0, 00:20:54.573 "rw_mbytes_per_sec": 0, 00:20:54.573 "r_mbytes_per_sec": 0, 00:20:54.573 "w_mbytes_per_sec": 0 00:20:54.573 }, 00:20:54.573 "claimed": true, 00:20:54.573 "claim_type": "exclusive_write", 00:20:54.573 "zoned": false, 00:20:54.573 "supported_io_types": { 00:20:54.573 "read": true, 00:20:54.573 "write": true, 00:20:54.573 "unmap": true, 00:20:54.573 "flush": true, 00:20:54.573 "reset": true, 00:20:54.573 "nvme_admin": false, 00:20:54.573 "nvme_io": false, 00:20:54.573 "nvme_io_md": false, 00:20:54.573 "write_zeroes": true, 00:20:54.573 "zcopy": true, 00:20:54.573 "get_zone_info": false, 00:20:54.573 "zone_management": false, 00:20:54.573 "zone_append": false, 00:20:54.573 "compare": false, 00:20:54.573 "compare_and_write": false, 00:20:54.573 "abort": true, 00:20:54.573 "seek_hole": false, 00:20:54.573 "seek_data": false, 00:20:54.573 "copy": true, 00:20:54.573 "nvme_iov_md": false 00:20:54.573 }, 00:20:54.573 "memory_domains": [ 00:20:54.573 { 00:20:54.573 "dma_device_id": "system", 00:20:54.573 "dma_device_type": 1 00:20:54.573 }, 00:20:54.573 { 00:20:54.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.573 "dma_device_type": 2 00:20:54.573 } 00:20:54.573 ], 00:20:54.573 "driver_specific": {} 00:20:54.573 } 00:20:54.573 ] 00:20:54.573 00:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:54.573 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:54.573 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.573 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:54.573 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:54.573 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:54.573 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:54.573 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.573 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.573 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.573 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.574 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.574 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.832 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.832 "name": "Existed_Raid", 00:20:54.832 "uuid": "f82da58f-f178-40c8-a069-d1164af08a24", 00:20:54.832 "strip_size_kb": 64, 00:20:54.832 "state": "online", 00:20:54.832 "raid_level": "raid0", 00:20:54.832 "superblock": true, 00:20:54.832 "num_base_bdevs": 4, 00:20:54.832 "num_base_bdevs_discovered": 4, 
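Creating the replacement with -u set to the UUID read from the vacant slot lets the superblock-enabled array match it to the missing position; with all four members present the raid configures and transitions to "online" ("raid bdev is created with name Existed_Raid" above). The advertised size also checks out: 4 members x 63488 data blocks each (65536 blocks minus the 2048-block data_offset) = 253952 blocks at 512 B. Sketch:

    # Recreate the missing member under a new name but the old slot's UUID
    uuid=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
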
00:20:54.832 "num_base_bdevs_operational": 4, 00:20:54.832 "base_bdevs_list": [ 00:20:54.832 { 00:20:54.832 "name": "NewBaseBdev", 00:20:54.832 "uuid": "9fd00588-b291-4ecd-95e8-b77823aa4bff", 00:20:54.832 "is_configured": true, 00:20:54.832 "data_offset": 2048, 00:20:54.832 "data_size": 63488 00:20:54.832 }, 00:20:54.832 { 00:20:54.832 "name": "BaseBdev2", 00:20:54.832 "uuid": "14707f0a-3214-4f98-bac8-eb499c2f53db", 00:20:54.832 "is_configured": true, 00:20:54.832 "data_offset": 2048, 00:20:54.832 "data_size": 63488 00:20:54.832 }, 00:20:54.832 { 00:20:54.832 "name": "BaseBdev3", 00:20:54.832 "uuid": "797f76cd-da14-42f3-8117-c33be5bb271a", 00:20:54.832 "is_configured": true, 00:20:54.832 "data_offset": 2048, 00:20:54.832 "data_size": 63488 00:20:54.832 }, 00:20:54.832 { 00:20:54.832 "name": "BaseBdev4", 00:20:54.832 "uuid": "4c115d7c-3a55-485e-a66a-3615ab7375c9", 00:20:54.832 "is_configured": true, 00:20:54.832 "data_offset": 2048, 00:20:54.832 "data_size": 63488 00:20:54.832 } 00:20:54.832 ] 00:20:54.832 }' 00:20:54.832 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.832 00:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.091 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:55.091 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:55.091 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:55.091 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:55.091 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:55.091 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:55.091 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:55.091 00:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:55.350 [2024-07-25 00:04:51.130654] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.350 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:55.350 "name": "Existed_Raid", 00:20:55.350 "aliases": [ 00:20:55.350 "f82da58f-f178-40c8-a069-d1164af08a24" 00:20:55.350 ], 00:20:55.350 "product_name": "Raid Volume", 00:20:55.350 "block_size": 512, 00:20:55.350 "num_blocks": 253952, 00:20:55.350 "uuid": "f82da58f-f178-40c8-a069-d1164af08a24", 00:20:55.350 "assigned_rate_limits": { 00:20:55.350 "rw_ios_per_sec": 0, 00:20:55.350 "rw_mbytes_per_sec": 0, 00:20:55.350 "r_mbytes_per_sec": 0, 00:20:55.350 "w_mbytes_per_sec": 0 00:20:55.350 }, 00:20:55.350 "claimed": false, 00:20:55.350 "zoned": false, 00:20:55.350 "supported_io_types": { 00:20:55.350 "read": true, 00:20:55.350 "write": true, 00:20:55.350 "unmap": true, 00:20:55.350 "flush": true, 00:20:55.350 "reset": true, 00:20:55.350 "nvme_admin": false, 00:20:55.350 "nvme_io": false, 00:20:55.350 "nvme_io_md": false, 00:20:55.350 "write_zeroes": true, 00:20:55.350 "zcopy": false, 00:20:55.350 "get_zone_info": false, 00:20:55.350 "zone_management": false, 00:20:55.350 "zone_append": false, 00:20:55.350 "compare": false, 00:20:55.350 "compare_and_write": false, 00:20:55.350 "abort": false, 
00:20:55.350 "seek_hole": false, 00:20:55.350 "seek_data": false, 00:20:55.350 "copy": false, 00:20:55.350 "nvme_iov_md": false 00:20:55.350 }, 00:20:55.350 "memory_domains": [ 00:20:55.350 { 00:20:55.350 "dma_device_id": "system", 00:20:55.350 "dma_device_type": 1 00:20:55.350 }, 00:20:55.350 { 00:20:55.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.350 "dma_device_type": 2 00:20:55.350 }, 00:20:55.350 { 00:20:55.350 "dma_device_id": "system", 00:20:55.350 "dma_device_type": 1 00:20:55.350 }, 00:20:55.350 { 00:20:55.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.350 "dma_device_type": 2 00:20:55.350 }, 00:20:55.350 { 00:20:55.350 "dma_device_id": "system", 00:20:55.350 "dma_device_type": 1 00:20:55.350 }, 00:20:55.350 { 00:20:55.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.350 "dma_device_type": 2 00:20:55.350 }, 00:20:55.350 { 00:20:55.350 "dma_device_id": "system", 00:20:55.350 "dma_device_type": 1 00:20:55.350 }, 00:20:55.350 { 00:20:55.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.350 "dma_device_type": 2 00:20:55.350 } 00:20:55.350 ], 00:20:55.350 "driver_specific": { 00:20:55.350 "raid": { 00:20:55.350 "uuid": "f82da58f-f178-40c8-a069-d1164af08a24", 00:20:55.350 "strip_size_kb": 64, 00:20:55.350 "state": "online", 00:20:55.350 "raid_level": "raid0", 00:20:55.350 "superblock": true, 00:20:55.350 "num_base_bdevs": 4, 00:20:55.350 "num_base_bdevs_discovered": 4, 00:20:55.350 "num_base_bdevs_operational": 4, 00:20:55.350 "base_bdevs_list": [ 00:20:55.350 { 00:20:55.350 "name": "NewBaseBdev", 00:20:55.350 "uuid": "9fd00588-b291-4ecd-95e8-b77823aa4bff", 00:20:55.350 "is_configured": true, 00:20:55.350 "data_offset": 2048, 00:20:55.350 "data_size": 63488 00:20:55.350 }, 00:20:55.350 { 00:20:55.350 "name": "BaseBdev2", 00:20:55.350 "uuid": "14707f0a-3214-4f98-bac8-eb499c2f53db", 00:20:55.350 "is_configured": true, 00:20:55.350 "data_offset": 2048, 00:20:55.350 "data_size": 63488 00:20:55.350 }, 00:20:55.350 { 00:20:55.350 "name": "BaseBdev3", 00:20:55.350 "uuid": "797f76cd-da14-42f3-8117-c33be5bb271a", 00:20:55.350 "is_configured": true, 00:20:55.350 "data_offset": 2048, 00:20:55.350 "data_size": 63488 00:20:55.350 }, 00:20:55.350 { 00:20:55.350 "name": "BaseBdev4", 00:20:55.350 "uuid": "4c115d7c-3a55-485e-a66a-3615ab7375c9", 00:20:55.350 "is_configured": true, 00:20:55.350 "data_offset": 2048, 00:20:55.350 "data_size": 63488 00:20:55.350 } 00:20:55.350 ] 00:20:55.350 } 00:20:55.350 } 00:20:55.350 }' 00:20:55.350 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:55.350 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:55.350 BaseBdev2 00:20:55.350 BaseBdev3 00:20:55.350 BaseBdev4' 00:20:55.350 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:55.350 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:55.350 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:55.609 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:55.609 "name": "NewBaseBdev", 00:20:55.609 "aliases": [ 00:20:55.609 "9fd00588-b291-4ecd-95e8-b77823aa4bff" 00:20:55.609 ], 00:20:55.609 "product_name": "Malloc disk", 00:20:55.609 
"block_size": 512, 00:20:55.609 "num_blocks": 65536, 00:20:55.609 "uuid": "9fd00588-b291-4ecd-95e8-b77823aa4bff", 00:20:55.609 "assigned_rate_limits": { 00:20:55.609 "rw_ios_per_sec": 0, 00:20:55.609 "rw_mbytes_per_sec": 0, 00:20:55.609 "r_mbytes_per_sec": 0, 00:20:55.609 "w_mbytes_per_sec": 0 00:20:55.609 }, 00:20:55.609 "claimed": true, 00:20:55.609 "claim_type": "exclusive_write", 00:20:55.609 "zoned": false, 00:20:55.609 "supported_io_types": { 00:20:55.609 "read": true, 00:20:55.609 "write": true, 00:20:55.609 "unmap": true, 00:20:55.609 "flush": true, 00:20:55.609 "reset": true, 00:20:55.609 "nvme_admin": false, 00:20:55.609 "nvme_io": false, 00:20:55.609 "nvme_io_md": false, 00:20:55.609 "write_zeroes": true, 00:20:55.609 "zcopy": true, 00:20:55.609 "get_zone_info": false, 00:20:55.609 "zone_management": false, 00:20:55.609 "zone_append": false, 00:20:55.609 "compare": false, 00:20:55.609 "compare_and_write": false, 00:20:55.609 "abort": true, 00:20:55.609 "seek_hole": false, 00:20:55.609 "seek_data": false, 00:20:55.609 "copy": true, 00:20:55.609 "nvme_iov_md": false 00:20:55.609 }, 00:20:55.609 "memory_domains": [ 00:20:55.609 { 00:20:55.609 "dma_device_id": "system", 00:20:55.609 "dma_device_type": 1 00:20:55.609 }, 00:20:55.609 { 00:20:55.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.609 "dma_device_type": 2 00:20:55.609 } 00:20:55.609 ], 00:20:55.609 "driver_specific": {} 00:20:55.609 }' 00:20:55.609 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.609 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.609 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:55.609 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.609 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.609 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:55.609 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.609 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.609 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:55.609 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.868 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.868 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:55.868 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:55.868 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:55.868 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.127 "name": "BaseBdev2", 00:20:56.127 "aliases": [ 00:20:56.127 "14707f0a-3214-4f98-bac8-eb499c2f53db" 00:20:56.127 ], 00:20:56.127 "product_name": "Malloc disk", 00:20:56.127 "block_size": 512, 00:20:56.127 "num_blocks": 65536, 00:20:56.127 "uuid": "14707f0a-3214-4f98-bac8-eb499c2f53db", 00:20:56.127 "assigned_rate_limits": { 
00:20:56.127 "rw_ios_per_sec": 0, 00:20:56.127 "rw_mbytes_per_sec": 0, 00:20:56.127 "r_mbytes_per_sec": 0, 00:20:56.127 "w_mbytes_per_sec": 0 00:20:56.127 }, 00:20:56.127 "claimed": true, 00:20:56.127 "claim_type": "exclusive_write", 00:20:56.127 "zoned": false, 00:20:56.127 "supported_io_types": { 00:20:56.127 "read": true, 00:20:56.127 "write": true, 00:20:56.127 "unmap": true, 00:20:56.127 "flush": true, 00:20:56.127 "reset": true, 00:20:56.127 "nvme_admin": false, 00:20:56.127 "nvme_io": false, 00:20:56.127 "nvme_io_md": false, 00:20:56.127 "write_zeroes": true, 00:20:56.127 "zcopy": true, 00:20:56.127 "get_zone_info": false, 00:20:56.127 "zone_management": false, 00:20:56.127 "zone_append": false, 00:20:56.127 "compare": false, 00:20:56.127 "compare_and_write": false, 00:20:56.127 "abort": true, 00:20:56.127 "seek_hole": false, 00:20:56.127 "seek_data": false, 00:20:56.127 "copy": true, 00:20:56.127 "nvme_iov_md": false 00:20:56.127 }, 00:20:56.127 "memory_domains": [ 00:20:56.127 { 00:20:56.127 "dma_device_id": "system", 00:20:56.127 "dma_device_type": 1 00:20:56.127 }, 00:20:56.127 { 00:20:56.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.127 "dma_device_type": 2 00:20:56.127 } 00:20:56.127 ], 00:20:56.127 "driver_specific": {} 00:20:56.127 }' 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:56.127 00:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.386 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.386 "name": "BaseBdev3", 00:20:56.386 "aliases": [ 00:20:56.386 "797f76cd-da14-42f3-8117-c33be5bb271a" 00:20:56.386 ], 00:20:56.386 "product_name": "Malloc disk", 00:20:56.386 "block_size": 512, 00:20:56.386 "num_blocks": 65536, 00:20:56.386 "uuid": "797f76cd-da14-42f3-8117-c33be5bb271a", 00:20:56.386 "assigned_rate_limits": { 00:20:56.386 "rw_ios_per_sec": 0, 00:20:56.386 "rw_mbytes_per_sec": 0, 00:20:56.386 "r_mbytes_per_sec": 0, 00:20:56.386 "w_mbytes_per_sec": 0 
00:20:56.386 }, 00:20:56.386 "claimed": true, 00:20:56.386 "claim_type": "exclusive_write", 00:20:56.386 "zoned": false, 00:20:56.386 "supported_io_types": { 00:20:56.386 "read": true, 00:20:56.386 "write": true, 00:20:56.386 "unmap": true, 00:20:56.386 "flush": true, 00:20:56.386 "reset": true, 00:20:56.386 "nvme_admin": false, 00:20:56.386 "nvme_io": false, 00:20:56.386 "nvme_io_md": false, 00:20:56.386 "write_zeroes": true, 00:20:56.386 "zcopy": true, 00:20:56.386 "get_zone_info": false, 00:20:56.386 "zone_management": false, 00:20:56.386 "zone_append": false, 00:20:56.386 "compare": false, 00:20:56.386 "compare_and_write": false, 00:20:56.386 "abort": true, 00:20:56.386 "seek_hole": false, 00:20:56.386 "seek_data": false, 00:20:56.386 "copy": true, 00:20:56.386 "nvme_iov_md": false 00:20:56.386 }, 00:20:56.386 "memory_domains": [ 00:20:56.386 { 00:20:56.386 "dma_device_id": "system", 00:20:56.386 "dma_device_type": 1 00:20:56.386 }, 00:20:56.386 { 00:20:56.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.386 "dma_device_type": 2 00:20:56.386 } 00:20:56.386 ], 00:20:56.386 "driver_specific": {} 00:20:56.386 }' 00:20:56.386 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.386 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.386 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.386 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.387 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.387 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:56.387 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.387 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.387 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.387 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.387 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.387 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.387 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.387 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:56.387 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.645 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.645 "name": "BaseBdev4", 00:20:56.645 "aliases": [ 00:20:56.645 "4c115d7c-3a55-485e-a66a-3615ab7375c9" 00:20:56.645 ], 00:20:56.645 "product_name": "Malloc disk", 00:20:56.645 "block_size": 512, 00:20:56.645 "num_blocks": 65536, 00:20:56.645 "uuid": "4c115d7c-3a55-485e-a66a-3615ab7375c9", 00:20:56.645 "assigned_rate_limits": { 00:20:56.645 "rw_ios_per_sec": 0, 00:20:56.645 "rw_mbytes_per_sec": 0, 00:20:56.645 "r_mbytes_per_sec": 0, 00:20:56.645 "w_mbytes_per_sec": 0 00:20:56.645 }, 00:20:56.645 "claimed": true, 00:20:56.645 "claim_type": "exclusive_write", 00:20:56.645 "zoned": false, 00:20:56.645 
"supported_io_types": { 00:20:56.645 "read": true, 00:20:56.645 "write": true, 00:20:56.645 "unmap": true, 00:20:56.645 "flush": true, 00:20:56.645 "reset": true, 00:20:56.645 "nvme_admin": false, 00:20:56.645 "nvme_io": false, 00:20:56.645 "nvme_io_md": false, 00:20:56.645 "write_zeroes": true, 00:20:56.645 "zcopy": true, 00:20:56.645 "get_zone_info": false, 00:20:56.645 "zone_management": false, 00:20:56.645 "zone_append": false, 00:20:56.645 "compare": false, 00:20:56.645 "compare_and_write": false, 00:20:56.645 "abort": true, 00:20:56.645 "seek_hole": false, 00:20:56.645 "seek_data": false, 00:20:56.645 "copy": true, 00:20:56.645 "nvme_iov_md": false 00:20:56.645 }, 00:20:56.645 "memory_domains": [ 00:20:56.645 { 00:20:56.645 "dma_device_id": "system", 00:20:56.645 "dma_device_type": 1 00:20:56.645 }, 00:20:56.645 { 00:20:56.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.645 "dma_device_type": 2 00:20:56.645 } 00:20:56.645 ], 00:20:56.645 "driver_specific": {} 00:20:56.645 }' 00:20:56.645 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.645 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.645 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.645 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.645 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.904 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:56.904 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.904 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.904 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.904 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.904 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.904 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.904 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:57.163 [2024-07-25 00:04:52.810897] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:57.163 [2024-07-25 00:04:52.810946] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:57.163 [2024-07-25 00:04:52.811049] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:57.163 [2024-07-25 00:04:52.811132] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:57.163 [2024-07-25 00:04:52.811148] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name Existed_Raid, state offline 00:20:57.163 00:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 88997 00:20:57.163 00:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88997 ']' 00:20:57.163 00:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 88997 00:20:57.163 00:04:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@955 -- # uname 00:20:57.163 00:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:57.163 00:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88997 00:20:57.163 killing process with pid 88997 00:20:57.163 00:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:57.163 00:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:57.163 00:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88997' 00:20:57.163 00:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 88997 00:20:57.163 00:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 88997 00:20:57.163 [2024-07-25 00:04:52.871504] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:57.422 [2024-07-25 00:04:53.204157] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:58.796 00:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:58.796 00:20:58.796 real 0m28.830s 00:20:58.796 user 0m50.533s 00:20:58.796 sys 0m4.442s 00:20:58.796 00:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:58.796 00:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.796 ************************************ 00:20:58.796 END TEST raid_state_function_test_sb 00:20:58.796 ************************************ 00:20:58.796 00:04:54 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:20:58.796 00:04:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:58.796 00:04:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:58.796 00:04:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.796 ************************************ 00:20:58.796 START TEST raid_superblock_test 00:20:58.796 ************************************ 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:20:58.796 00:04:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=90004 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 90004 /var/tmp/spdk-raid.sock 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 90004 ']' 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:58.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.796 00:04:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.796 [2024-07-25 00:04:54.434521] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
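# A minimal reconstruction (assuming the autotest_common.sh helpers) of the
# daemon startup traced at bdev_raid.sh@426-428 above: bdev_svc is launched
# with a private JSON-RPC socket plus -L bdev_raid debug logging, and
# waitforlisten blocks until that socket accepts connections -- which is why
# every later rpc.py call in this test carries -s /var/tmp/spdk-raid.sock.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock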
00:20:58.796 [2024-07-25 00:04:54.434711] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90004 ] 00:20:58.796 [2024-07-25 00:04:54.600288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.055 [2024-07-25 00:04:54.829097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.313 [2024-07-25 00:04:54.998318] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.571 00:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.571 00:04:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:20:59.572 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:20:59.572 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:59.572 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:20:59.572 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:20:59.572 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:59.572 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:59.572 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:59.572 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:59.572 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:59.830 malloc1 00:20:59.830 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:00.088 [2024-07-25 00:04:55.851907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:00.088 [2024-07-25 00:04:55.852025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.088 [2024-07-25 00:04:55.852062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:21:00.088 [2024-07-25 00:04:55.852079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.088 [2024-07-25 00:04:55.854623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.088 [2024-07-25 00:04:55.854702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:00.088 pt1 00:21:00.088 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:00.088 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:00.088 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:21:00.088 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:21:00.088 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:00.088 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:21:00.088 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:00.088 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:00.088 00:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:00.371 malloc2 00:21:00.371 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:00.629 [2024-07-25 00:04:56.314758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:00.629 [2024-07-25 00:04:56.314856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.629 [2024-07-25 00:04:56.314902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:21:00.629 [2024-07-25 00:04:56.314917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.629 [2024-07-25 00:04:56.317528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.629 [2024-07-25 00:04:56.317573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:00.629 pt2 00:21:00.629 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:00.629 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:00.629 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:21:00.629 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:21:00.629 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:00.629 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:00.629 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:00.629 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:00.629 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:00.891 malloc3 00:21:00.891 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:00.891 [2024-07-25 00:04:56.752746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:00.891 [2024-07-25 00:04:56.752854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.891 [2024-07-25 00:04:56.752887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:21:00.891 [2024-07-25 00:04:56.752900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.891 [2024-07-25 00:04:56.755518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.891 [2024-07-25 00:04:56.755555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:00.891 pt3 00:21:01.148 
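# A condensed sketch of the construction loop traced at bdev_raid.sh@431-441
# (the real script accumulates the names into arrays): each pass creates a
# 32 MiB malloc disk with 512-byte blocks -- 32 MiB / 512 B = 65536 blocks,
# matching "num_blocks": 65536 in the dumps -- and wraps it in a passthru
# bdev with a fixed, zero-padded UUID.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
         -u "00000000-0000-0000-0000-00000000000$i"
done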
00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:01.148 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:01.148 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:21:01.148 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:21:01.148 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:01.148 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.148 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.148 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.148 00:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:01.148 malloc4 00:21:01.406 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:01.664 [2024-07-25 00:04:57.323102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:01.664 [2024-07-25 00:04:57.323198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.664 [2024-07-25 00:04:57.323237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:21:01.664 [2024-07-25 00:04:57.323253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.664 [2024-07-25 00:04:57.325733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.664 [2024-07-25 00:04:57.325777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:01.664 pt4 00:21:01.664 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:01.664 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:01.664 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:01.922 [2024-07-25 00:04:57.567259] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:01.922 [2024-07-25 00:04:57.569446] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:01.922 [2024-07-25 00:04:57.569548] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:01.922 [2024-07-25 00:04:57.569635] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:01.922 [2024-07-25 00:04:57.569946] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:21:01.922 [2024-07-25 00:04:57.569963] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:01.922 [2024-07-25 00:04:57.570093] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:21:01.922 [2024-07-25 00:04:57.570486] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:21:01.922 [2024-07-25 00:04:57.570511] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:21:01.922 [2024-07-25 00:04:57.570702] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.922 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.179 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.179 "name": "raid_bdev1", 00:21:02.179 "uuid": "54e42ad3-673b-40c8-8729-ad78580a3ccc", 00:21:02.179 "strip_size_kb": 64, 00:21:02.179 "state": "online", 00:21:02.179 "raid_level": "raid0", 00:21:02.179 "superblock": true, 00:21:02.179 "num_base_bdevs": 4, 00:21:02.179 "num_base_bdevs_discovered": 4, 00:21:02.179 "num_base_bdevs_operational": 4, 00:21:02.179 "base_bdevs_list": [ 00:21:02.179 { 00:21:02.179 "name": "pt1", 00:21:02.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:02.179 "is_configured": true, 00:21:02.179 "data_offset": 2048, 00:21:02.179 "data_size": 63488 00:21:02.179 }, 00:21:02.179 { 00:21:02.179 "name": "pt2", 00:21:02.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.179 "is_configured": true, 00:21:02.179 "data_offset": 2048, 00:21:02.179 "data_size": 63488 00:21:02.179 }, 00:21:02.179 { 00:21:02.179 "name": "pt3", 00:21:02.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:02.179 "is_configured": true, 00:21:02.179 "data_offset": 2048, 00:21:02.179 "data_size": 63488 00:21:02.179 }, 00:21:02.179 { 00:21:02.179 "name": "pt4", 00:21:02.179 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:02.179 "is_configured": true, 00:21:02.179 "data_offset": 2048, 00:21:02.179 "data_size": 63488 00:21:02.179 } 00:21:02.179 ] 00:21:02.179 }' 00:21:02.179 00:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.179 00:04:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.436 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:21:02.436 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:02.436 00:04:58 bdev_raid.raid_superblock_test -- 
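# The geometry in the raid_bdev1 dump below follows directly from the create
# traced above: with -s (superblock), each 65536-block member reserves 2048
# blocks on disk ("data_offset": 2048), leaving "data_size": 63488, and
# raid0 over four members yields 4 * 63488 = 253952 total blocks -- the
# "blockcnt 253952" logged at configure time. A sketch of the create plus a
# state assertion, assuming the same socket:
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
state=$($rpc bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .state')
[[ $state == online ]]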
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:02.436 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:02.436 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:02.436 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:02.436 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:02.436 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:02.694 [2024-07-25 00:04:58.339801] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:02.694 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:02.694 "name": "raid_bdev1", 00:21:02.694 "aliases": [ 00:21:02.694 "54e42ad3-673b-40c8-8729-ad78580a3ccc" 00:21:02.694 ], 00:21:02.694 "product_name": "Raid Volume", 00:21:02.694 "block_size": 512, 00:21:02.694 "num_blocks": 253952, 00:21:02.694 "uuid": "54e42ad3-673b-40c8-8729-ad78580a3ccc", 00:21:02.694 "assigned_rate_limits": { 00:21:02.694 "rw_ios_per_sec": 0, 00:21:02.694 "rw_mbytes_per_sec": 0, 00:21:02.694 "r_mbytes_per_sec": 0, 00:21:02.694 "w_mbytes_per_sec": 0 00:21:02.694 }, 00:21:02.694 "claimed": false, 00:21:02.694 "zoned": false, 00:21:02.694 "supported_io_types": { 00:21:02.694 "read": true, 00:21:02.694 "write": true, 00:21:02.694 "unmap": true, 00:21:02.694 "flush": true, 00:21:02.694 "reset": true, 00:21:02.694 "nvme_admin": false, 00:21:02.694 "nvme_io": false, 00:21:02.694 "nvme_io_md": false, 00:21:02.694 "write_zeroes": true, 00:21:02.694 "zcopy": false, 00:21:02.694 "get_zone_info": false, 00:21:02.694 "zone_management": false, 00:21:02.694 "zone_append": false, 00:21:02.694 "compare": false, 00:21:02.694 "compare_and_write": false, 00:21:02.694 "abort": false, 00:21:02.694 "seek_hole": false, 00:21:02.694 "seek_data": false, 00:21:02.694 "copy": false, 00:21:02.694 "nvme_iov_md": false 00:21:02.694 }, 00:21:02.694 "memory_domains": [ 00:21:02.694 { 00:21:02.694 "dma_device_id": "system", 00:21:02.694 "dma_device_type": 1 00:21:02.694 }, 00:21:02.694 { 00:21:02.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.694 "dma_device_type": 2 00:21:02.694 }, 00:21:02.694 { 00:21:02.694 "dma_device_id": "system", 00:21:02.694 "dma_device_type": 1 00:21:02.694 }, 00:21:02.694 { 00:21:02.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.694 "dma_device_type": 2 00:21:02.694 }, 00:21:02.694 { 00:21:02.694 "dma_device_id": "system", 00:21:02.694 "dma_device_type": 1 00:21:02.694 }, 00:21:02.694 { 00:21:02.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.694 "dma_device_type": 2 00:21:02.694 }, 00:21:02.694 { 00:21:02.694 "dma_device_id": "system", 00:21:02.694 "dma_device_type": 1 00:21:02.694 }, 00:21:02.694 { 00:21:02.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.694 "dma_device_type": 2 00:21:02.694 } 00:21:02.694 ], 00:21:02.694 "driver_specific": { 00:21:02.694 "raid": { 00:21:02.694 "uuid": "54e42ad3-673b-40c8-8729-ad78580a3ccc", 00:21:02.694 "strip_size_kb": 64, 00:21:02.694 "state": "online", 00:21:02.694 "raid_level": "raid0", 00:21:02.694 "superblock": true, 00:21:02.694 "num_base_bdevs": 4, 00:21:02.694 "num_base_bdevs_discovered": 4, 00:21:02.694 "num_base_bdevs_operational": 4, 00:21:02.694 "base_bdevs_list": [ 00:21:02.694 { 00:21:02.694 "name": "pt1", 00:21:02.694 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:02.694 "is_configured": true, 00:21:02.694 "data_offset": 2048, 00:21:02.694 "data_size": 63488 00:21:02.694 }, 00:21:02.694 { 00:21:02.694 "name": "pt2", 00:21:02.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.694 "is_configured": true, 00:21:02.694 "data_offset": 2048, 00:21:02.694 "data_size": 63488 00:21:02.694 }, 00:21:02.694 { 00:21:02.694 "name": "pt3", 00:21:02.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:02.694 "is_configured": true, 00:21:02.694 "data_offset": 2048, 00:21:02.694 "data_size": 63488 00:21:02.694 }, 00:21:02.694 { 00:21:02.694 "name": "pt4", 00:21:02.694 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:02.694 "is_configured": true, 00:21:02.694 "data_offset": 2048, 00:21:02.694 "data_size": 63488 00:21:02.694 } 00:21:02.694 ] 00:21:02.694 } 00:21:02.694 } 00:21:02.694 }' 00:21:02.694 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:02.694 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:02.694 pt2 00:21:02.694 pt3 00:21:02.694 pt4' 00:21:02.694 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:02.694 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:02.694 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:02.952 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:02.952 "name": "pt1", 00:21:02.952 "aliases": [ 00:21:02.952 "00000000-0000-0000-0000-000000000001" 00:21:02.952 ], 00:21:02.952 "product_name": "passthru", 00:21:02.952 "block_size": 512, 00:21:02.952 "num_blocks": 65536, 00:21:02.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:02.952 "assigned_rate_limits": { 00:21:02.952 "rw_ios_per_sec": 0, 00:21:02.952 "rw_mbytes_per_sec": 0, 00:21:02.952 "r_mbytes_per_sec": 0, 00:21:02.952 "w_mbytes_per_sec": 0 00:21:02.952 }, 00:21:02.952 "claimed": true, 00:21:02.952 "claim_type": "exclusive_write", 00:21:02.952 "zoned": false, 00:21:02.952 "supported_io_types": { 00:21:02.952 "read": true, 00:21:02.952 "write": true, 00:21:02.952 "unmap": true, 00:21:02.953 "flush": true, 00:21:02.953 "reset": true, 00:21:02.953 "nvme_admin": false, 00:21:02.953 "nvme_io": false, 00:21:02.953 "nvme_io_md": false, 00:21:02.953 "write_zeroes": true, 00:21:02.953 "zcopy": true, 00:21:02.953 "get_zone_info": false, 00:21:02.953 "zone_management": false, 00:21:02.953 "zone_append": false, 00:21:02.953 "compare": false, 00:21:02.953 "compare_and_write": false, 00:21:02.953 "abort": true, 00:21:02.953 "seek_hole": false, 00:21:02.953 "seek_data": false, 00:21:02.953 "copy": true, 00:21:02.953 "nvme_iov_md": false 00:21:02.953 }, 00:21:02.953 "memory_domains": [ 00:21:02.953 { 00:21:02.953 "dma_device_id": "system", 00:21:02.953 "dma_device_type": 1 00:21:02.953 }, 00:21:02.953 { 00:21:02.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.953 "dma_device_type": 2 00:21:02.953 } 00:21:02.953 ], 00:21:02.953 "driver_specific": { 00:21:02.953 "passthru": { 00:21:02.953 "name": "pt1", 00:21:02.953 "base_bdev_name": "malloc1" 00:21:02.953 } 00:21:02.953 } 00:21:02.953 }' 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:02.953 00:04:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:02.953 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:03.211 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:03.211 "name": "pt2", 00:21:03.211 "aliases": [ 00:21:03.211 "00000000-0000-0000-0000-000000000002" 00:21:03.211 ], 00:21:03.211 "product_name": "passthru", 00:21:03.211 "block_size": 512, 00:21:03.211 "num_blocks": 65536, 00:21:03.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:03.211 "assigned_rate_limits": { 00:21:03.211 "rw_ios_per_sec": 0, 00:21:03.211 "rw_mbytes_per_sec": 0, 00:21:03.211 "r_mbytes_per_sec": 0, 00:21:03.211 "w_mbytes_per_sec": 0 00:21:03.211 }, 00:21:03.211 "claimed": true, 00:21:03.211 "claim_type": "exclusive_write", 00:21:03.211 "zoned": false, 00:21:03.211 "supported_io_types": { 00:21:03.211 "read": true, 00:21:03.211 "write": true, 00:21:03.211 "unmap": true, 00:21:03.211 "flush": true, 00:21:03.211 "reset": true, 00:21:03.211 "nvme_admin": false, 00:21:03.211 "nvme_io": false, 00:21:03.211 "nvme_io_md": false, 00:21:03.211 "write_zeroes": true, 00:21:03.211 "zcopy": true, 00:21:03.211 "get_zone_info": false, 00:21:03.211 "zone_management": false, 00:21:03.211 "zone_append": false, 00:21:03.211 "compare": false, 00:21:03.211 "compare_and_write": false, 00:21:03.211 "abort": true, 00:21:03.211 "seek_hole": false, 00:21:03.211 "seek_data": false, 00:21:03.211 "copy": true, 00:21:03.211 "nvme_iov_md": false 00:21:03.211 }, 00:21:03.211 "memory_domains": [ 00:21:03.211 { 00:21:03.211 "dma_device_id": "system", 00:21:03.211 "dma_device_type": 1 00:21:03.211 }, 00:21:03.211 { 00:21:03.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.211 "dma_device_type": 2 00:21:03.211 } 00:21:03.211 ], 00:21:03.211 "driver_specific": { 00:21:03.211 "passthru": { 00:21:03.211 "name": "pt2", 00:21:03.211 "base_bdev_name": "malloc2" 00:21:03.211 } 00:21:03.211 } 00:21:03.211 }' 00:21:03.211 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.211 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.211 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:21:03.211 00:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.211 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.211 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:03.211 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.211 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.211 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:03.211 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.211 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.211 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:03.211 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:03.211 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:03.211 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:03.469 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:03.469 "name": "pt3", 00:21:03.469 "aliases": [ 00:21:03.469 "00000000-0000-0000-0000-000000000003" 00:21:03.469 ], 00:21:03.469 "product_name": "passthru", 00:21:03.469 "block_size": 512, 00:21:03.469 "num_blocks": 65536, 00:21:03.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:03.469 "assigned_rate_limits": { 00:21:03.469 "rw_ios_per_sec": 0, 00:21:03.469 "rw_mbytes_per_sec": 0, 00:21:03.469 "r_mbytes_per_sec": 0, 00:21:03.469 "w_mbytes_per_sec": 0 00:21:03.469 }, 00:21:03.469 "claimed": true, 00:21:03.469 "claim_type": "exclusive_write", 00:21:03.469 "zoned": false, 00:21:03.469 "supported_io_types": { 00:21:03.469 "read": true, 00:21:03.469 "write": true, 00:21:03.469 "unmap": true, 00:21:03.469 "flush": true, 00:21:03.469 "reset": true, 00:21:03.470 "nvme_admin": false, 00:21:03.470 "nvme_io": false, 00:21:03.470 "nvme_io_md": false, 00:21:03.470 "write_zeroes": true, 00:21:03.470 "zcopy": true, 00:21:03.470 "get_zone_info": false, 00:21:03.470 "zone_management": false, 00:21:03.470 "zone_append": false, 00:21:03.470 "compare": false, 00:21:03.470 "compare_and_write": false, 00:21:03.470 "abort": true, 00:21:03.470 "seek_hole": false, 00:21:03.470 "seek_data": false, 00:21:03.470 "copy": true, 00:21:03.470 "nvme_iov_md": false 00:21:03.470 }, 00:21:03.470 "memory_domains": [ 00:21:03.470 { 00:21:03.470 "dma_device_id": "system", 00:21:03.470 "dma_device_type": 1 00:21:03.470 }, 00:21:03.470 { 00:21:03.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.470 "dma_device_type": 2 00:21:03.470 } 00:21:03.470 ], 00:21:03.470 "driver_specific": { 00:21:03.470 "passthru": { 00:21:03.470 "name": "pt3", 00:21:03.470 "base_bdev_name": "malloc3" 00:21:03.470 } 00:21:03.470 } 00:21:03.470 }' 00:21:03.470 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.728 00:04:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:03.728 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:03.989 "name": "pt4", 00:21:03.989 "aliases": [ 00:21:03.989 "00000000-0000-0000-0000-000000000004" 00:21:03.989 ], 00:21:03.989 "product_name": "passthru", 00:21:03.989 "block_size": 512, 00:21:03.989 "num_blocks": 65536, 00:21:03.989 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:03.989 "assigned_rate_limits": { 00:21:03.989 "rw_ios_per_sec": 0, 00:21:03.989 "rw_mbytes_per_sec": 0, 00:21:03.989 "r_mbytes_per_sec": 0, 00:21:03.989 "w_mbytes_per_sec": 0 00:21:03.989 }, 00:21:03.989 "claimed": true, 00:21:03.989 "claim_type": "exclusive_write", 00:21:03.989 "zoned": false, 00:21:03.989 "supported_io_types": { 00:21:03.989 "read": true, 00:21:03.989 "write": true, 00:21:03.989 "unmap": true, 00:21:03.989 "flush": true, 00:21:03.989 "reset": true, 00:21:03.989 "nvme_admin": false, 00:21:03.989 "nvme_io": false, 00:21:03.989 "nvme_io_md": false, 00:21:03.989 "write_zeroes": true, 00:21:03.989 "zcopy": true, 00:21:03.989 "get_zone_info": false, 00:21:03.989 "zone_management": false, 00:21:03.989 "zone_append": false, 00:21:03.989 "compare": false, 00:21:03.989 "compare_and_write": false, 00:21:03.989 "abort": true, 00:21:03.989 "seek_hole": false, 00:21:03.989 "seek_data": false, 00:21:03.989 "copy": true, 00:21:03.989 "nvme_iov_md": false 00:21:03.989 }, 00:21:03.989 "memory_domains": [ 00:21:03.989 { 00:21:03.989 "dma_device_id": "system", 00:21:03.989 "dma_device_type": 1 00:21:03.989 }, 00:21:03.989 { 00:21:03.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.989 "dma_device_type": 2 00:21:03.989 } 00:21:03.989 ], 00:21:03.989 "driver_specific": { 00:21:03.989 "passthru": { 00:21:03.989 "name": "pt4", 00:21:03.989 "base_bdev_name": "malloc4" 00:21:03.989 } 00:21:03.989 } 00:21:03.989 }' 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:03.989 00:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:21:04.247 [2024-07-25 00:05:00.028294] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:04.247 00:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=54e42ad3-673b-40c8-8729-ad78580a3ccc 00:21:04.247 00:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 54e42ad3-673b-40c8-8729-ad78580a3ccc ']' 00:21:04.247 00:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:04.506 [2024-07-25 00:05:00.280037] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:04.506 [2024-07-25 00:05:00.280077] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:04.506 [2024-07-25 00:05:00.280161] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:04.506 [2024-07-25 00:05:00.280234] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:04.506 [2024-07-25 00:05:00.280251] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:21:04.506 00:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.506 00:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:21:04.764 00:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:21:04.764 00:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:21:04.764 00:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:04.764 00:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:05.022 00:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.022 00:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:05.280 00:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.280 00:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:05.539 00:05:01 bdev_raid.raid_superblock_test -- 
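# The teardown sequence here, condensed (pt1-pt2 traced above, pt3-pt4 just
# below): deleting the raid takes it online -> offline and unregisters it,
# while the member passthru bdevs survive and are removed individually; the
# final jq probe asserts that no passthru bdev is left behind.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_delete raid_bdev1
for i in 1 2 3 4; do $rpc bdev_passthru_delete "pt$i"; done
$rpc bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'   # prints false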
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.539 00:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:05.797 00:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:05.797 00:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:06.055 00:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:06.349 [2024-07-25 00:05:02.000460] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:06.349 [2024-07-25 00:05:02.002585] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:06.349 [2024-07-25 00:05:02.002684] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:06.349 [2024-07-25 00:05:02.002743] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:06.349 [2024-07-25 00:05:02.002810] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:06.349 [2024-07-25 00:05:02.002895] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:06.349 [2024-07-25 00:05:02.002928] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found 
on bdev malloc3 00:21:06.349 [2024-07-25 00:05:02.002959] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:06.349 [2024-07-25 00:05:02.002981] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:06.349 [2024-07-25 00:05:02.003003] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state configuring 00:21:06.349 request: 00:21:06.349 { 00:21:06.349 "name": "raid_bdev1", 00:21:06.349 "raid_level": "raid0", 00:21:06.349 "base_bdevs": [ 00:21:06.349 "malloc1", 00:21:06.349 "malloc2", 00:21:06.349 "malloc3", 00:21:06.349 "malloc4" 00:21:06.349 ], 00:21:06.349 "strip_size_kb": 64, 00:21:06.349 "superblock": false, 00:21:06.349 "method": "bdev_raid_create", 00:21:06.349 "req_id": 1 00:21:06.349 } 00:21:06.349 Got JSON-RPC error response 00:21:06.349 response: 00:21:06.349 { 00:21:06.349 "code": -17, 00:21:06.349 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:06.349 } 00:21:06.349 00:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:21:06.349 00:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:06.349 00:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:06.349 00:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:06.349 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.349 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:21:06.608 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:21:06.608 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:21:06.608 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:06.866 [2024-07-25 00:05:02.524474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:06.866 [2024-07-25 00:05:02.524564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.866 [2024-07-25 00:05:02.524590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:21:06.866 [2024-07-25 00:05:02.524605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.866 [2024-07-25 00:05:02.527216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.866 pt1 00:21:06.866 [2024-07-25 00:05:02.527424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:06.866 [2024-07-25 00:05:02.527540] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:06.866 [2024-07-25 00:05:02.527626] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:06.866 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:06.866 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:06.866 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:06.866 00:05:02 
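# The failed create traced above is the point of this step: with the
# passthru layer gone, each mallocN still carries the raid superblock
# written through its old ptN wrapper, which the module reports as belonging
# to a different raid bdev, so bdev_raid_create fails with -17 "File
# exists". NOT (an autotest_common.sh helper) inverts the exit status so the
# expected failure counts as a pass. A sketch, same socket assumed:
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
NOT $rpc bdev_raid_create -z 64 -r raid0 \
    -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1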
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:06.866 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:06.866 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:06.866 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.866 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.866 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.866 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.866 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.866 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.124 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.124 "name": "raid_bdev1", 00:21:07.124 "uuid": "54e42ad3-673b-40c8-8729-ad78580a3ccc", 00:21:07.124 "strip_size_kb": 64, 00:21:07.124 "state": "configuring", 00:21:07.124 "raid_level": "raid0", 00:21:07.124 "superblock": true, 00:21:07.124 "num_base_bdevs": 4, 00:21:07.124 "num_base_bdevs_discovered": 1, 00:21:07.124 "num_base_bdevs_operational": 4, 00:21:07.124 "base_bdevs_list": [ 00:21:07.124 { 00:21:07.124 "name": "pt1", 00:21:07.124 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:07.124 "is_configured": true, 00:21:07.124 "data_offset": 2048, 00:21:07.124 "data_size": 63488 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "name": null, 00:21:07.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.124 "is_configured": false, 00:21:07.124 "data_offset": 2048, 00:21:07.124 "data_size": 63488 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "name": null, 00:21:07.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:07.124 "is_configured": false, 00:21:07.124 "data_offset": 2048, 00:21:07.124 "data_size": 63488 00:21:07.124 }, 00:21:07.124 { 00:21:07.124 "name": null, 00:21:07.124 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:07.124 "is_configured": false, 00:21:07.124 "data_offset": 2048, 00:21:07.124 "data_size": 63488 00:21:07.124 } 00:21:07.124 ] 00:21:07.124 }' 00:21:07.124 00:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.124 00:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.382 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:21:07.382 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:07.650 [2024-07-25 00:05:03.388684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:07.650 [2024-07-25 00:05:03.388780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.650 [2024-07-25 00:05:03.388809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:21:07.650 [2024-07-25 00:05:03.388857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.650 [2024-07-25 00:05:03.389395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
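# Re-creating pt1 above triggers superblock examine ("raid superblock found
# on bdev pt1"), so raid_bdev1 reappears on its own in "configuring" state
# with num_base_bdevs_discovered 1 of an operational 4; it cannot go back
# online until every member is re-created. The assertion pattern, same
# socket assumed:
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r .state <<<"$info") == configuring ]]
[[ $(jq .num_base_bdevs_discovered <<<"$info") == 1 ]]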
pt_bdev registered 00:21:07.650 [2024-07-25 00:05:03.389424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:07.650 [2024-07-25 00:05:03.389522] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:07.650 [2024-07-25 00:05:03.389555] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:07.650 pt2 00:21:07.650 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:07.935 [2024-07-25 00:05:03.627214] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.935 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.193 00:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:08.193 "name": "raid_bdev1", 00:21:08.193 "uuid": "54e42ad3-673b-40c8-8729-ad78580a3ccc", 00:21:08.193 "strip_size_kb": 64, 00:21:08.193 "state": "configuring", 00:21:08.193 "raid_level": "raid0", 00:21:08.193 "superblock": true, 00:21:08.193 "num_base_bdevs": 4, 00:21:08.193 "num_base_bdevs_discovered": 1, 00:21:08.193 "num_base_bdevs_operational": 4, 00:21:08.193 "base_bdevs_list": [ 00:21:08.194 { 00:21:08.194 "name": "pt1", 00:21:08.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:08.194 "is_configured": true, 00:21:08.194 "data_offset": 2048, 00:21:08.194 "data_size": 63488 00:21:08.194 }, 00:21:08.194 { 00:21:08.194 "name": null, 00:21:08.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:08.194 "is_configured": false, 00:21:08.194 "data_offset": 2048, 00:21:08.194 "data_size": 63488 00:21:08.194 }, 00:21:08.194 { 00:21:08.194 "name": null, 00:21:08.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:08.194 "is_configured": false, 00:21:08.194 "data_offset": 2048, 00:21:08.194 "data_size": 63488 00:21:08.194 }, 00:21:08.194 { 00:21:08.194 "name": null, 00:21:08.194 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:08.194 "is_configured": false, 00:21:08.194 "data_offset": 2048, 00:21:08.194 "data_size": 63488 00:21:08.194 } 00:21:08.194 ] 00:21:08.194 }' 00:21:08.194 00:05:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:08.194 00:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.452 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:21:08.452 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:08.452 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:08.710 [2024-07-25 00:05:04.423444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:08.710 [2024-07-25 00:05:04.423697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.710 [2024-07-25 00:05:04.423743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:21:08.710 [2024-07-25 00:05:04.423759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.710 [2024-07-25 00:05:04.424313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.710 [2024-07-25 00:05:04.424340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:08.710 [2024-07-25 00:05:04.424457] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:08.710 [2024-07-25 00:05:04.424489] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:08.710 pt2 00:21:08.710 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:08.710 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:08.710 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:08.969 [2024-07-25 00:05:04.687519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:08.969 [2024-07-25 00:05:04.687785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.969 [2024-07-25 00:05:04.687874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:21:08.969 [2024-07-25 00:05:04.687896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.969 [2024-07-25 00:05:04.688436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.969 [2024-07-25 00:05:04.688468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:08.969 [2024-07-25 00:05:04.688588] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:08.969 [2024-07-25 00:05:04.688618] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:08.969 pt3 00:21:08.969 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:08.969 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:08.969 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:09.228 [2024-07-25 00:05:04.951560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:21:09.228 [2024-07-25 00:05:04.951966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.228 [2024-07-25 00:05:04.952179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:21:09.229 [2024-07-25 00:05:04.952334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.229 [2024-07-25 00:05:04.953003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.229 [2024-07-25 00:05:04.953185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:09.229 [2024-07-25 00:05:04.953427] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:09.229 [2024-07-25 00:05:04.953467] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:09.229 [2024-07-25 00:05:04.953669] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:21:09.229 [2024-07-25 00:05:04.953686] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:09.229 [2024-07-25 00:05:04.953802] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:21:09.229 [2024-07-25 00:05:04.954198] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:21:09.229 [2024-07-25 00:05:04.954229] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:21:09.229 [2024-07-25 00:05:04.954387] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.229 pt4 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.229 00:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.488 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.488 "name": "raid_bdev1", 00:21:09.488 "uuid": "54e42ad3-673b-40c8-8729-ad78580a3ccc", 00:21:09.488 "strip_size_kb": 64, 00:21:09.488 "state": "online", 00:21:09.488 
"raid_level": "raid0", 00:21:09.488 "superblock": true, 00:21:09.488 "num_base_bdevs": 4, 00:21:09.488 "num_base_bdevs_discovered": 4, 00:21:09.488 "num_base_bdevs_operational": 4, 00:21:09.488 "base_bdevs_list": [ 00:21:09.488 { 00:21:09.488 "name": "pt1", 00:21:09.488 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:09.488 "is_configured": true, 00:21:09.488 "data_offset": 2048, 00:21:09.488 "data_size": 63488 00:21:09.488 }, 00:21:09.488 { 00:21:09.488 "name": "pt2", 00:21:09.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:09.488 "is_configured": true, 00:21:09.488 "data_offset": 2048, 00:21:09.488 "data_size": 63488 00:21:09.488 }, 00:21:09.488 { 00:21:09.488 "name": "pt3", 00:21:09.488 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:09.488 "is_configured": true, 00:21:09.488 "data_offset": 2048, 00:21:09.488 "data_size": 63488 00:21:09.488 }, 00:21:09.488 { 00:21:09.488 "name": "pt4", 00:21:09.488 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:09.488 "is_configured": true, 00:21:09.488 "data_offset": 2048, 00:21:09.488 "data_size": 63488 00:21:09.488 } 00:21:09.488 ] 00:21:09.488 }' 00:21:09.488 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.488 00:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.747 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:21:09.747 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:09.747 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:09.747 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:09.747 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:09.747 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:09.747 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:09.747 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:10.006 [2024-07-25 00:05:05.736117] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:10.006 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:10.006 "name": "raid_bdev1", 00:21:10.006 "aliases": [ 00:21:10.006 "54e42ad3-673b-40c8-8729-ad78580a3ccc" 00:21:10.006 ], 00:21:10.006 "product_name": "Raid Volume", 00:21:10.006 "block_size": 512, 00:21:10.006 "num_blocks": 253952, 00:21:10.006 "uuid": "54e42ad3-673b-40c8-8729-ad78580a3ccc", 00:21:10.006 "assigned_rate_limits": { 00:21:10.006 "rw_ios_per_sec": 0, 00:21:10.006 "rw_mbytes_per_sec": 0, 00:21:10.006 "r_mbytes_per_sec": 0, 00:21:10.006 "w_mbytes_per_sec": 0 00:21:10.006 }, 00:21:10.006 "claimed": false, 00:21:10.006 "zoned": false, 00:21:10.006 "supported_io_types": { 00:21:10.006 "read": true, 00:21:10.006 "write": true, 00:21:10.006 "unmap": true, 00:21:10.006 "flush": true, 00:21:10.006 "reset": true, 00:21:10.006 "nvme_admin": false, 00:21:10.006 "nvme_io": false, 00:21:10.006 "nvme_io_md": false, 00:21:10.006 "write_zeroes": true, 00:21:10.006 "zcopy": false, 00:21:10.006 "get_zone_info": false, 00:21:10.006 "zone_management": false, 00:21:10.006 "zone_append": false, 00:21:10.006 "compare": false, 00:21:10.006 "compare_and_write": false, 
00:21:10.006 "abort": false, 00:21:10.006 "seek_hole": false, 00:21:10.006 "seek_data": false, 00:21:10.006 "copy": false, 00:21:10.006 "nvme_iov_md": false 00:21:10.006 }, 00:21:10.006 "memory_domains": [ 00:21:10.006 { 00:21:10.006 "dma_device_id": "system", 00:21:10.006 "dma_device_type": 1 00:21:10.006 }, 00:21:10.006 { 00:21:10.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.006 "dma_device_type": 2 00:21:10.006 }, 00:21:10.006 { 00:21:10.006 "dma_device_id": "system", 00:21:10.006 "dma_device_type": 1 00:21:10.006 }, 00:21:10.006 { 00:21:10.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.006 "dma_device_type": 2 00:21:10.006 }, 00:21:10.006 { 00:21:10.006 "dma_device_id": "system", 00:21:10.006 "dma_device_type": 1 00:21:10.006 }, 00:21:10.006 { 00:21:10.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.006 "dma_device_type": 2 00:21:10.006 }, 00:21:10.006 { 00:21:10.006 "dma_device_id": "system", 00:21:10.006 "dma_device_type": 1 00:21:10.006 }, 00:21:10.006 { 00:21:10.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.006 "dma_device_type": 2 00:21:10.006 } 00:21:10.006 ], 00:21:10.006 "driver_specific": { 00:21:10.006 "raid": { 00:21:10.006 "uuid": "54e42ad3-673b-40c8-8729-ad78580a3ccc", 00:21:10.006 "strip_size_kb": 64, 00:21:10.006 "state": "online", 00:21:10.006 "raid_level": "raid0", 00:21:10.006 "superblock": true, 00:21:10.006 "num_base_bdevs": 4, 00:21:10.006 "num_base_bdevs_discovered": 4, 00:21:10.006 "num_base_bdevs_operational": 4, 00:21:10.006 "base_bdevs_list": [ 00:21:10.006 { 00:21:10.006 "name": "pt1", 00:21:10.006 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:10.006 "is_configured": true, 00:21:10.006 "data_offset": 2048, 00:21:10.006 "data_size": 63488 00:21:10.006 }, 00:21:10.006 { 00:21:10.006 "name": "pt2", 00:21:10.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:10.006 "is_configured": true, 00:21:10.006 "data_offset": 2048, 00:21:10.006 "data_size": 63488 00:21:10.006 }, 00:21:10.006 { 00:21:10.006 "name": "pt3", 00:21:10.006 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:10.006 "is_configured": true, 00:21:10.006 "data_offset": 2048, 00:21:10.006 "data_size": 63488 00:21:10.006 }, 00:21:10.006 { 00:21:10.006 "name": "pt4", 00:21:10.006 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:10.006 "is_configured": true, 00:21:10.006 "data_offset": 2048, 00:21:10.006 "data_size": 63488 00:21:10.006 } 00:21:10.006 ] 00:21:10.006 } 00:21:10.006 } 00:21:10.006 }' 00:21:10.006 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:10.006 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:10.006 pt2 00:21:10.006 pt3 00:21:10.006 pt4' 00:21:10.006 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:10.006 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:10.006 00:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:10.265 "name": "pt1", 00:21:10.265 "aliases": [ 00:21:10.265 "00000000-0000-0000-0000-000000000001" 00:21:10.265 ], 00:21:10.265 "product_name": "passthru", 00:21:10.265 "block_size": 512, 00:21:10.265 "num_blocks": 65536, 00:21:10.265 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:10.265 "assigned_rate_limits": { 00:21:10.265 "rw_ios_per_sec": 0, 00:21:10.265 "rw_mbytes_per_sec": 0, 00:21:10.265 "r_mbytes_per_sec": 0, 00:21:10.265 "w_mbytes_per_sec": 0 00:21:10.265 }, 00:21:10.265 "claimed": true, 00:21:10.265 "claim_type": "exclusive_write", 00:21:10.265 "zoned": false, 00:21:10.265 "supported_io_types": { 00:21:10.265 "read": true, 00:21:10.265 "write": true, 00:21:10.265 "unmap": true, 00:21:10.265 "flush": true, 00:21:10.265 "reset": true, 00:21:10.265 "nvme_admin": false, 00:21:10.265 "nvme_io": false, 00:21:10.265 "nvme_io_md": false, 00:21:10.265 "write_zeroes": true, 00:21:10.265 "zcopy": true, 00:21:10.265 "get_zone_info": false, 00:21:10.265 "zone_management": false, 00:21:10.265 "zone_append": false, 00:21:10.265 "compare": false, 00:21:10.265 "compare_and_write": false, 00:21:10.265 "abort": true, 00:21:10.265 "seek_hole": false, 00:21:10.265 "seek_data": false, 00:21:10.265 "copy": true, 00:21:10.265 "nvme_iov_md": false 00:21:10.265 }, 00:21:10.265 "memory_domains": [ 00:21:10.265 { 00:21:10.265 "dma_device_id": "system", 00:21:10.265 "dma_device_type": 1 00:21:10.265 }, 00:21:10.265 { 00:21:10.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.265 "dma_device_type": 2 00:21:10.265 } 00:21:10.265 ], 00:21:10.265 "driver_specific": { 00:21:10.265 "passthru": { 00:21:10.265 "name": "pt1", 00:21:10.265 "base_bdev_name": "malloc1" 00:21:10.265 } 00:21:10.265 } 00:21:10.265 }' 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.265 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.524 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:10.524 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:10.524 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:10.524 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:10.783 "name": "pt2", 00:21:10.783 "aliases": [ 00:21:10.783 "00000000-0000-0000-0000-000000000002" 00:21:10.783 ], 00:21:10.783 "product_name": "passthru", 00:21:10.783 "block_size": 512, 00:21:10.783 "num_blocks": 65536, 00:21:10.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:10.783 "assigned_rate_limits": { 00:21:10.783 "rw_ios_per_sec": 0, 00:21:10.783 "rw_mbytes_per_sec": 0, 
00:21:10.783 "r_mbytes_per_sec": 0, 00:21:10.783 "w_mbytes_per_sec": 0 00:21:10.783 }, 00:21:10.783 "claimed": true, 00:21:10.783 "claim_type": "exclusive_write", 00:21:10.783 "zoned": false, 00:21:10.783 "supported_io_types": { 00:21:10.783 "read": true, 00:21:10.783 "write": true, 00:21:10.783 "unmap": true, 00:21:10.783 "flush": true, 00:21:10.783 "reset": true, 00:21:10.783 "nvme_admin": false, 00:21:10.783 "nvme_io": false, 00:21:10.783 "nvme_io_md": false, 00:21:10.783 "write_zeroes": true, 00:21:10.783 "zcopy": true, 00:21:10.783 "get_zone_info": false, 00:21:10.783 "zone_management": false, 00:21:10.783 "zone_append": false, 00:21:10.783 "compare": false, 00:21:10.783 "compare_and_write": false, 00:21:10.783 "abort": true, 00:21:10.783 "seek_hole": false, 00:21:10.783 "seek_data": false, 00:21:10.783 "copy": true, 00:21:10.783 "nvme_iov_md": false 00:21:10.783 }, 00:21:10.783 "memory_domains": [ 00:21:10.783 { 00:21:10.783 "dma_device_id": "system", 00:21:10.783 "dma_device_type": 1 00:21:10.783 }, 00:21:10.783 { 00:21:10.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.783 "dma_device_type": 2 00:21:10.783 } 00:21:10.783 ], 00:21:10.783 "driver_specific": { 00:21:10.783 "passthru": { 00:21:10.783 "name": "pt2", 00:21:10.783 "base_bdev_name": "malloc2" 00:21:10.783 } 00:21:10.783 } 00:21:10.783 }' 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:10.783 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:11.042 "name": "pt3", 00:21:11.042 "aliases": [ 00:21:11.042 "00000000-0000-0000-0000-000000000003" 00:21:11.042 ], 00:21:11.042 "product_name": "passthru", 00:21:11.042 "block_size": 512, 00:21:11.042 "num_blocks": 65536, 00:21:11.042 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:11.042 "assigned_rate_limits": { 00:21:11.042 "rw_ios_per_sec": 0, 00:21:11.042 "rw_mbytes_per_sec": 0, 00:21:11.042 "r_mbytes_per_sec": 0, 00:21:11.042 "w_mbytes_per_sec": 0 00:21:11.042 }, 00:21:11.042 "claimed": true, 00:21:11.042 "claim_type": 
"exclusive_write", 00:21:11.042 "zoned": false, 00:21:11.042 "supported_io_types": { 00:21:11.042 "read": true, 00:21:11.042 "write": true, 00:21:11.042 "unmap": true, 00:21:11.042 "flush": true, 00:21:11.042 "reset": true, 00:21:11.042 "nvme_admin": false, 00:21:11.042 "nvme_io": false, 00:21:11.042 "nvme_io_md": false, 00:21:11.042 "write_zeroes": true, 00:21:11.042 "zcopy": true, 00:21:11.042 "get_zone_info": false, 00:21:11.042 "zone_management": false, 00:21:11.042 "zone_append": false, 00:21:11.042 "compare": false, 00:21:11.042 "compare_and_write": false, 00:21:11.042 "abort": true, 00:21:11.042 "seek_hole": false, 00:21:11.042 "seek_data": false, 00:21:11.042 "copy": true, 00:21:11.042 "nvme_iov_md": false 00:21:11.042 }, 00:21:11.042 "memory_domains": [ 00:21:11.042 { 00:21:11.042 "dma_device_id": "system", 00:21:11.042 "dma_device_type": 1 00:21:11.042 }, 00:21:11.042 { 00:21:11.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.042 "dma_device_type": 2 00:21:11.042 } 00:21:11.042 ], 00:21:11.042 "driver_specific": { 00:21:11.042 "passthru": { 00:21:11.042 "name": "pt3", 00:21:11.042 "base_bdev_name": "malloc3" 00:21:11.042 } 00:21:11.042 } 00:21:11.042 }' 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:11.042 00:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:11.300 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:11.300 "name": "pt4", 00:21:11.300 "aliases": [ 00:21:11.300 "00000000-0000-0000-0000-000000000004" 00:21:11.300 ], 00:21:11.300 "product_name": "passthru", 00:21:11.300 "block_size": 512, 00:21:11.300 "num_blocks": 65536, 00:21:11.300 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:11.300 "assigned_rate_limits": { 00:21:11.300 "rw_ios_per_sec": 0, 00:21:11.300 "rw_mbytes_per_sec": 0, 00:21:11.300 "r_mbytes_per_sec": 0, 00:21:11.300 "w_mbytes_per_sec": 0 00:21:11.300 }, 00:21:11.300 "claimed": true, 00:21:11.300 "claim_type": "exclusive_write", 00:21:11.300 "zoned": false, 00:21:11.301 "supported_io_types": { 00:21:11.301 "read": true, 00:21:11.301 "write": true, 00:21:11.301 
"unmap": true, 00:21:11.301 "flush": true, 00:21:11.301 "reset": true, 00:21:11.301 "nvme_admin": false, 00:21:11.301 "nvme_io": false, 00:21:11.301 "nvme_io_md": false, 00:21:11.301 "write_zeroes": true, 00:21:11.301 "zcopy": true, 00:21:11.301 "get_zone_info": false, 00:21:11.301 "zone_management": false, 00:21:11.301 "zone_append": false, 00:21:11.301 "compare": false, 00:21:11.301 "compare_and_write": false, 00:21:11.301 "abort": true, 00:21:11.301 "seek_hole": false, 00:21:11.301 "seek_data": false, 00:21:11.301 "copy": true, 00:21:11.301 "nvme_iov_md": false 00:21:11.301 }, 00:21:11.301 "memory_domains": [ 00:21:11.301 { 00:21:11.301 "dma_device_id": "system", 00:21:11.301 "dma_device_type": 1 00:21:11.301 }, 00:21:11.301 { 00:21:11.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.301 "dma_device_type": 2 00:21:11.301 } 00:21:11.301 ], 00:21:11.301 "driver_specific": { 00:21:11.301 "passthru": { 00:21:11.301 "name": "pt4", 00:21:11.301 "base_bdev_name": "malloc4" 00:21:11.301 } 00:21:11.301 } 00:21:11.301 }' 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:11.301 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:21:11.560 [2024-07-25 00:05:07.336531] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 54e42ad3-673b-40c8-8729-ad78580a3ccc '!=' 54e42ad3-673b-40c8-8729-ad78580a3ccc ']' 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 90004 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 90004 ']' 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 90004 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90004 00:21:11.560 killing process with pid 90004 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90004' 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 90004 00:21:11.560 [2024-07-25 00:05:07.393827] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:11.560 [2024-07-25 00:05:07.393941] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.560 00:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 90004 00:21:11.560 [2024-07-25 00:05:07.394042] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.560 [2024-07-25 00:05:07.394057] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:21:12.128 [2024-07-25 00:05:07.703215] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:13.064 00:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:21:13.064 00:21:13.064 real 0m14.406s 00:21:13.064 user 0m24.465s 00:21:13.064 sys 0m2.174s 00:21:13.064 00:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:13.064 ************************************ 00:21:13.064 END TEST raid_superblock_test 00:21:13.064 ************************************ 00:21:13.064 00:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.064 00:05:08 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:21:13.064 00:05:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:13.064 00:05:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:13.064 00:05:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:13.064 ************************************ 00:21:13.064 START TEST raid_read_error_test 00:21:13.064 ************************************ 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:21:13.064 00:05:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.zcfJqaTPF6 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=90501 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 90501 /var/tmp/spdk-raid.sock 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 90501 ']' 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:13.064 00:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:13.065 00:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:13.065 00:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.065 00:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.065 [2024-07-25 00:05:08.924171] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
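The trace that follows drives everything over the JSON-RPC socket at /var/tmp/spdk-raid.sock. As a hedged sketch (reconstructed from the commands visible in this log, not an authoritative excerpt of the harness), each of the four legs is an error-injectable stack assembled roughly like this, after which the raid0 volume is created with an on-disk superblock:

    # one leg: malloc backing bdev -> error bdev (EE_*) -> passthru wrapper
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # ... repeated for BaseBdev2..BaseBdev4, then the volume itself:
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

Errors are later injected at the EE_* layer (bdev_error_inject_error), underneath the passthru bdev that the raid actually claims.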
00:21:13.065 [2024-07-25 00:05:08.924386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90501 ] 00:21:13.324 [2024-07-25 00:05:09.100162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.583 [2024-07-25 00:05:09.341404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.842 [2024-07-25 00:05:09.517813] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:14.101 00:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.101 00:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:14.101 00:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:14.101 00:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:14.360 BaseBdev1_malloc 00:21:14.360 00:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:14.618 true 00:21:14.618 00:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:14.877 [2024-07-25 00:05:10.531427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:14.877 [2024-07-25 00:05:10.531766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.877 [2024-07-25 00:05:10.531852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:21:14.877 [2024-07-25 00:05:10.531877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.877 [2024-07-25 00:05:10.534591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.877 [2024-07-25 00:05:10.534680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:14.877 BaseBdev1 00:21:14.877 00:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:14.877 00:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:15.136 BaseBdev2_malloc 00:21:15.136 00:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:15.395 true 00:21:15.395 00:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:15.654 [2024-07-25 00:05:11.273910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:15.654 [2024-07-25 00:05:11.274009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.654 [2024-07-25 00:05:11.274041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:21:15.654 [2024-07-25 00:05:11.274060] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.654 [2024-07-25 00:05:11.276546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.654 [2024-07-25 00:05:11.276595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:15.654 BaseBdev2 00:21:15.654 00:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:15.654 00:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:15.913 BaseBdev3_malloc 00:21:15.913 00:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:15.913 true 00:21:15.913 00:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:16.172 [2024-07-25 00:05:11.958632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:16.172 [2024-07-25 00:05:11.958917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.172 [2024-07-25 00:05:11.958960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:21:16.172 [2024-07-25 00:05:11.958981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.172 [2024-07-25 00:05:11.961476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.172 [2024-07-25 00:05:11.961526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:16.172 BaseBdev3 00:21:16.172 00:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:16.172 00:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:16.438 BaseBdev4_malloc 00:21:16.438 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:21:16.713 true 00:21:16.713 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:16.971 [2024-07-25 00:05:12.631066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:16.972 [2024-07-25 00:05:12.631389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.972 [2024-07-25 00:05:12.631432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:21:16.972 [2024-07-25 00:05:12.631452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.972 [2024-07-25 00:05:12.633861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.972 [2024-07-25 00:05:12.633908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:16.972 BaseBdev4 00:21:16.972 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:16.972 [2024-07-25 00:05:12.839254] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.231 [2024-07-25 00:05:12.841410] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.231 [2024-07-25 00:05:12.841557] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:17.231 [2024-07-25 00:05:12.841657] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:17.231 [2024-07-25 00:05:12.842058] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a280 00:21:17.231 [2024-07-25 00:05:12.842081] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:17.231 [2024-07-25 00:05:12.842230] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:21:17.231 [2024-07-25 00:05:12.842743] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a280 00:21:17.231 [2024-07-25 00:05:12.842762] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a280 00:21:17.231 [2024-07-25 00:05:12.842960] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.231 00:05:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.490 00:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.490 "name": "raid_bdev1", 00:21:17.490 "uuid": "4a39811f-e135-48c7-bbd1-41d16972a98f", 00:21:17.490 "strip_size_kb": 64, 00:21:17.490 "state": "online", 00:21:17.490 "raid_level": "raid0", 00:21:17.490 "superblock": true, 00:21:17.490 "num_base_bdevs": 4, 00:21:17.490 "num_base_bdevs_discovered": 4, 00:21:17.490 "num_base_bdevs_operational": 4, 00:21:17.490 "base_bdevs_list": [ 00:21:17.490 { 00:21:17.490 "name": "BaseBdev1", 00:21:17.490 "uuid": "1c9ef544-04ad-53c3-ae28-1e5db831f1f1", 00:21:17.490 "is_configured": true, 00:21:17.490 "data_offset": 2048, 00:21:17.490 "data_size": 63488 00:21:17.490 }, 00:21:17.490 { 00:21:17.490 "name": "BaseBdev2", 
00:21:17.490 "uuid": "b5dc9824-643f-53a7-8914-50dfe56c95c3", 00:21:17.490 "is_configured": true, 00:21:17.490 "data_offset": 2048, 00:21:17.490 "data_size": 63488 00:21:17.490 }, 00:21:17.490 { 00:21:17.490 "name": "BaseBdev3", 00:21:17.490 "uuid": "a0f9f7e9-553c-5197-803e-0b1391a3f61d", 00:21:17.490 "is_configured": true, 00:21:17.490 "data_offset": 2048, 00:21:17.490 "data_size": 63488 00:21:17.490 }, 00:21:17.490 { 00:21:17.490 "name": "BaseBdev4", 00:21:17.490 "uuid": "5ab844a8-aba1-5a12-b832-505d0d97660e", 00:21:17.490 "is_configured": true, 00:21:17.490 "data_offset": 2048, 00:21:17.490 "data_size": 63488 00:21:17.490 } 00:21:17.490 ] 00:21:17.490 }' 00:21:17.490 00:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.490 00:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.749 00:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:21:17.749 00:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:17.749 [2024-07-25 00:05:13.516594] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:21:18.683 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.941 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.199 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.199 "name": "raid_bdev1", 00:21:19.199 "uuid": "4a39811f-e135-48c7-bbd1-41d16972a98f", 00:21:19.199 "strip_size_kb": 64, 00:21:19.199 "state": "online", 00:21:19.199 "raid_level": "raid0", 00:21:19.199 "superblock": true, 
00:21:19.199 "num_base_bdevs": 4, 00:21:19.199 "num_base_bdevs_discovered": 4, 00:21:19.199 "num_base_bdevs_operational": 4, 00:21:19.199 "base_bdevs_list": [ 00:21:19.199 { 00:21:19.199 "name": "BaseBdev1", 00:21:19.199 "uuid": "1c9ef544-04ad-53c3-ae28-1e5db831f1f1", 00:21:19.199 "is_configured": true, 00:21:19.199 "data_offset": 2048, 00:21:19.199 "data_size": 63488 00:21:19.199 }, 00:21:19.199 { 00:21:19.199 "name": "BaseBdev2", 00:21:19.199 "uuid": "b5dc9824-643f-53a7-8914-50dfe56c95c3", 00:21:19.199 "is_configured": true, 00:21:19.199 "data_offset": 2048, 00:21:19.200 "data_size": 63488 00:21:19.200 }, 00:21:19.200 { 00:21:19.200 "name": "BaseBdev3", 00:21:19.200 "uuid": "a0f9f7e9-553c-5197-803e-0b1391a3f61d", 00:21:19.200 "is_configured": true, 00:21:19.200 "data_offset": 2048, 00:21:19.200 "data_size": 63488 00:21:19.200 }, 00:21:19.200 { 00:21:19.200 "name": "BaseBdev4", 00:21:19.200 "uuid": "5ab844a8-aba1-5a12-b832-505d0d97660e", 00:21:19.200 "is_configured": true, 00:21:19.200 "data_offset": 2048, 00:21:19.200 "data_size": 63488 00:21:19.200 } 00:21:19.200 ] 00:21:19.200 }' 00:21:19.200 00:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.200 00:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.458 00:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:19.716 [2024-07-25 00:05:15.398184] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.716 [2024-07-25 00:05:15.398227] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.716 [2024-07-25 00:05:15.401158] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.716 [2024-07-25 00:05:15.401220] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.716 [2024-07-25 00:05:15.401272] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.717 [2024-07-25 00:05:15.401291] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state offline 00:21:19.717 0 00:21:19.717 00:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 90501 00:21:19.717 00:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 90501 ']' 00:21:19.717 00:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 90501 00:21:19.717 00:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:21:19.717 00:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:19.717 00:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90501 00:21:19.717 00:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:19.717 00:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:19.717 00:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90501' 00:21:19.717 killing process with pid 90501 00:21:19.717 00:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 90501 00:21:19.717 [2024-07-25 00:05:15.456402] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:19.717 
00:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 90501 00:21:19.975 [2024-07-25 00:05:15.712183] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:21.353 00:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.zcfJqaTPF6 00:21:21.353 00:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:21:21.353 00:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:21:21.353 00:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.53 00:21:21.353 00:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:21:21.353 00:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:21.353 00:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:21.353 00:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.53 != \0\.\0\0 ]] 00:21:21.353 00:21:21.353 real 0m7.969s 00:21:21.353 user 0m11.903s 00:21:21.353 sys 0m0.982s 00:21:21.353 00:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:21.353 00:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.353 ************************************ 00:21:21.353 END TEST raid_read_error_test 00:21:21.353 ************************************ 00:21:21.353 00:05:16 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:21:21.353 00:05:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:21.353 00:05:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:21.353 00:05:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:21.353 ************************************ 00:21:21.353 START TEST raid_write_error_test 00:21:21.353 ************************************ 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:21.353 00:05:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.w3dGnlodQQ 00:21:21.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=90694 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 90694 /var/tmp/spdk-raid.sock 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 90694 ']' 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:21.353 00:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.353 [2024-07-25 00:05:16.948461] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
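(For reference, the bdevperf launch traced above can be reproduced stand-alone. This is a sketch, not harness code: it assumes rpc_get_methods as the readiness probe that the harness's waitforlisten helper effectively performs; the binary path, socket, and flags are the ones used in this run.)

    # Start bdevperf idle (-z) on a private RPC socket and poll until it
    # answers RPCs; the harness then drives it over that same socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw \
        -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    raid_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done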
00:21:21.353 [2024-07-25 00:05:16.948663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90694 ] 00:21:21.353 [2024-07-25 00:05:17.123205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.612 [2024-07-25 00:05:17.296621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.612 [2024-07-25 00:05:17.460262] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:22.179 00:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:22.179 00:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:22.179 00:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:22.179 00:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:22.438 BaseBdev1_malloc 00:21:22.438 00:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:22.696 true 00:21:22.696 00:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:22.955 [2024-07-25 00:05:18.571421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:22.955 [2024-07-25 00:05:18.571532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.955 [2024-07-25 00:05:18.571567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:21:22.955 [2024-07-25 00:05:18.571584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.955 [2024-07-25 00:05:18.574081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.955 [2024-07-25 00:05:18.574132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:22.955 BaseBdev1 00:21:22.955 00:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:22.955 00:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:23.214 BaseBdev2_malloc 00:21:23.214 00:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:23.473 true 00:21:23.473 00:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:23.473 [2024-07-25 00:05:19.299545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:23.473 [2024-07-25 00:05:19.299853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.473 [2024-07-25 00:05:19.300011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:21:23.473 [2024-07-25 
00:05:19.300178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.473 [2024-07-25 00:05:19.302837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.473 [2024-07-25 00:05:19.302888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:23.473 BaseBdev2 00:21:23.473 00:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:23.473 00:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:23.731 BaseBdev3_malloc 00:21:23.731 00:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:23.990 true 00:21:23.990 00:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:24.248 [2024-07-25 00:05:19.999305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:24.248 [2024-07-25 00:05:19.999403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.249 [2024-07-25 00:05:19.999434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:21:24.249 [2024-07-25 00:05:19.999449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.249 [2024-07-25 00:05:20.002198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.249 [2024-07-25 00:05:20.002251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:24.249 BaseBdev3 00:21:24.249 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:24.249 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:24.507 BaseBdev4_malloc 00:21:24.507 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:21:24.766 true 00:21:24.766 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:25.024 [2024-07-25 00:05:20.743939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:25.024 [2024-07-25 00:05:20.744032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.024 [2024-07-25 00:05:20.744065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:21:25.024 [2024-07-25 00:05:20.744081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.024 [2024-07-25 00:05:20.746592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.024 [2024-07-25 00:05:20.746684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:25.024 BaseBdev4 00:21:25.024 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:25.283 [2024-07-25 00:05:20.976119] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.283 [2024-07-25 00:05:20.978417] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:25.283 [2024-07-25 00:05:20.978511] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:25.283 [2024-07-25 00:05:20.978596] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:25.283 [2024-07-25 00:05:20.978940] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a280 00:21:25.283 [2024-07-25 00:05:20.978963] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:25.283 [2024-07-25 00:05:20.979168] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:21:25.283 [2024-07-25 00:05:20.979719] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a280 00:21:25.283 [2024-07-25 00:05:20.979742] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a280 00:21:25.283 [2024-07-25 00:05:20.980014] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.283 00:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.559 00:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.559 "name": "raid_bdev1", 00:21:25.559 "uuid": "065d0397-42e9-445f-8fcf-f4dade41e6c1", 00:21:25.559 "strip_size_kb": 64, 00:21:25.559 "state": "online", 00:21:25.559 "raid_level": "raid0", 00:21:25.559 "superblock": true, 00:21:25.559 "num_base_bdevs": 4, 00:21:25.559 "num_base_bdevs_discovered": 4, 00:21:25.559 "num_base_bdevs_operational": 4, 00:21:25.559 "base_bdevs_list": [ 00:21:25.559 { 00:21:25.559 "name": "BaseBdev1", 00:21:25.559 "uuid": "c3b7d098-94c7-5fb6-aecd-dff621cf8288", 00:21:25.559 "is_configured": true, 00:21:25.559 "data_offset": 2048, 00:21:25.559 "data_size": 63488 00:21:25.559 }, 00:21:25.559 { 
00:21:25.559 "name": "BaseBdev2", 00:21:25.559 "uuid": "d58c2d07-c174-5516-8473-07b429e80e2a", 00:21:25.559 "is_configured": true, 00:21:25.559 "data_offset": 2048, 00:21:25.559 "data_size": 63488 00:21:25.559 }, 00:21:25.559 { 00:21:25.559 "name": "BaseBdev3", 00:21:25.559 "uuid": "7252edde-8c07-58d4-bc41-3f8f55148b0d", 00:21:25.559 "is_configured": true, 00:21:25.559 "data_offset": 2048, 00:21:25.559 "data_size": 63488 00:21:25.559 }, 00:21:25.559 { 00:21:25.559 "name": "BaseBdev4", 00:21:25.559 "uuid": "2e1b7231-b0ba-59e6-8885-4cecc201df09", 00:21:25.559 "is_configured": true, 00:21:25.559 "data_offset": 2048, 00:21:25.559 "data_size": 63488 00:21:25.559 } 00:21:25.559 ] 00:21:25.559 }' 00:21:25.559 00:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.559 00:05:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.817 00:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:21:25.817 00:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:26.075 [2024-07-25 00:05:21.705464] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.011 00:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.271 00:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.271 "name": "raid_bdev1", 00:21:27.271 "uuid": "065d0397-42e9-445f-8fcf-f4dade41e6c1", 00:21:27.271 "strip_size_kb": 64, 00:21:27.271 "state": "online", 00:21:27.271 
"raid_level": "raid0", 00:21:27.271 "superblock": true, 00:21:27.271 "num_base_bdevs": 4, 00:21:27.271 "num_base_bdevs_discovered": 4, 00:21:27.271 "num_base_bdevs_operational": 4, 00:21:27.271 "base_bdevs_list": [ 00:21:27.271 { 00:21:27.271 "name": "BaseBdev1", 00:21:27.271 "uuid": "c3b7d098-94c7-5fb6-aecd-dff621cf8288", 00:21:27.271 "is_configured": true, 00:21:27.271 "data_offset": 2048, 00:21:27.271 "data_size": 63488 00:21:27.271 }, 00:21:27.271 { 00:21:27.271 "name": "BaseBdev2", 00:21:27.271 "uuid": "d58c2d07-c174-5516-8473-07b429e80e2a", 00:21:27.271 "is_configured": true, 00:21:27.271 "data_offset": 2048, 00:21:27.271 "data_size": 63488 00:21:27.271 }, 00:21:27.271 { 00:21:27.271 "name": "BaseBdev3", 00:21:27.271 "uuid": "7252edde-8c07-58d4-bc41-3f8f55148b0d", 00:21:27.271 "is_configured": true, 00:21:27.271 "data_offset": 2048, 00:21:27.271 "data_size": 63488 00:21:27.271 }, 00:21:27.271 { 00:21:27.271 "name": "BaseBdev4", 00:21:27.271 "uuid": "2e1b7231-b0ba-59e6-8885-4cecc201df09", 00:21:27.271 "is_configured": true, 00:21:27.271 "data_offset": 2048, 00:21:27.271 "data_size": 63488 00:21:27.271 } 00:21:27.271 ] 00:21:27.271 }' 00:21:27.271 00:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.271 00:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.837 00:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:27.837 [2024-07-25 00:05:23.664118] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:27.837 [2024-07-25 00:05:23.664167] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:27.837 [2024-07-25 00:05:23.667088] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.837 [2024-07-25 00:05:23.667330] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.837 [2024-07-25 00:05:23.667399] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:27.837 [2024-07-25 00:05:23.667422] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state offline 00:21:27.837 0 00:21:27.837 00:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 90694 00:21:27.837 00:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 90694 ']' 00:21:27.837 00:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 90694 00:21:27.837 00:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:21:27.837 00:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:27.837 00:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90694 00:21:28.095 killing process with pid 90694 00:21:28.095 00:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:28.095 00:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:28.095 00:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90694' 00:21:28.095 00:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 90694 00:21:28.095 [2024-07-25 00:05:23.722721] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:28.095 00:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 90694 00:21:28.353 [2024-07-25 00:05:23.968032] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:29.288 00:05:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.w3dGnlodQQ 00:21:29.288 00:05:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:21:29.288 00:05:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:21:29.288 ************************************ 00:21:29.288 END TEST raid_write_error_test 00:21:29.288 ************************************ 00:21:29.288 00:05:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.51 00:21:29.288 00:05:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:21:29.288 00:05:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:29.288 00:05:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:29.288 00:05:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.51 != \0\.\0\0 ]] 00:21:29.288 00:21:29.288 real 0m8.231s 00:21:29.288 user 0m12.388s 00:21:29.288 sys 0m1.029s 00:21:29.288 00:05:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:29.288 00:05:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.288 00:05:25 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:21:29.288 00:05:25 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:21:29.288 00:05:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:29.288 00:05:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:29.288 00:05:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:29.288 ************************************ 00:21:29.288 START TEST raid_state_function_test 00:21:29.288 ************************************ 00:21:29.288 00:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:21:29.288 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:29.547 Process raid pid: 90882 00:21:29.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=90882 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 90882' 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 90882 /var/tmp/spdk-raid.sock 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90882 ']' 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:29.547 00:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.547 [2024-07-25 00:05:25.228154] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
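(The state-function test below creates Existed_Raid from four base bdevs that do not exist yet, so the raid must report the "configuring" state until they appear. A minimal sketch of that create-and-verify step follows; extracting .state with jq is an assumption here — the harness itself captures the whole JSON object with the jq select shown further down.)

    # Create a concat raid over four not-yet-created base bdevs, then
    # confirm it reports state "configuring" while the bases are missing.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    state=$($rpc bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [ "$state" = configuring ] && echo "Existed_Raid is configuring"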
00:21:29.547 [2024-07-25 00:05:25.228593] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.547 [2024-07-25 00:05:25.402543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.806 [2024-07-25 00:05:25.588980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.065 [2024-07-25 00:05:25.765391] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.323 00:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:30.323 00:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:21:30.323 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:30.582 [2024-07-25 00:05:26.372985] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:30.582 [2024-07-25 00:05:26.373071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:30.582 [2024-07-25 00:05:26.373088] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:30.582 [2024-07-25 00:05:26.373104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:30.582 [2024-07-25 00:05:26.373113] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:30.582 [2024-07-25 00:05:26.373126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:30.582 [2024-07-25 00:05:26.373135] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:30.582 [2024-07-25 00:05:26.373147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.582 00:05:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.841 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.841 "name": "Existed_Raid", 00:21:30.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.841 "strip_size_kb": 64, 00:21:30.841 "state": "configuring", 00:21:30.841 "raid_level": "concat", 00:21:30.841 "superblock": false, 00:21:30.841 "num_base_bdevs": 4, 00:21:30.841 "num_base_bdevs_discovered": 0, 00:21:30.841 "num_base_bdevs_operational": 4, 00:21:30.841 "base_bdevs_list": [ 00:21:30.841 { 00:21:30.841 "name": "BaseBdev1", 00:21:30.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.841 "is_configured": false, 00:21:30.841 "data_offset": 0, 00:21:30.841 "data_size": 0 00:21:30.841 }, 00:21:30.841 { 00:21:30.841 "name": "BaseBdev2", 00:21:30.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.841 "is_configured": false, 00:21:30.841 "data_offset": 0, 00:21:30.841 "data_size": 0 00:21:30.841 }, 00:21:30.841 { 00:21:30.841 "name": "BaseBdev3", 00:21:30.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.841 "is_configured": false, 00:21:30.841 "data_offset": 0, 00:21:30.841 "data_size": 0 00:21:30.841 }, 00:21:30.841 { 00:21:30.841 "name": "BaseBdev4", 00:21:30.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.841 "is_configured": false, 00:21:30.841 "data_offset": 0, 00:21:30.841 "data_size": 0 00:21:30.841 } 00:21:30.841 ] 00:21:30.841 }' 00:21:30.841 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.841 00:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.099 00:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:31.359 [2024-07-25 00:05:27.141076] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:31.359 [2024-07-25 00:05:27.141130] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:21:31.359 00:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:31.618 [2024-07-25 00:05:27.353161] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:31.618 [2024-07-25 00:05:27.353237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:31.618 [2024-07-25 00:05:27.353253] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:31.618 [2024-07-25 00:05:27.353268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:31.618 [2024-07-25 00:05:27.353277] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:31.618 [2024-07-25 00:05:27.353290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:31.618 [2024-07-25 00:05:27.353299] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:31.618 [2024-07-25 00:05:27.353312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:31.618 00:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:31.877 [2024-07-25 00:05:27.589292] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:31.877 BaseBdev1 00:21:31.877 00:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:31.877 00:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:31.877 00:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:31.877 00:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:31.877 00:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:31.877 00:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:31.877 00:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:32.136 00:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:32.395 [ 00:21:32.395 { 00:21:32.395 "name": "BaseBdev1", 00:21:32.395 "aliases": [ 00:21:32.395 "40dd28f8-273c-4b73-a7ed-80bdab24f6dd" 00:21:32.395 ], 00:21:32.395 "product_name": "Malloc disk", 00:21:32.395 "block_size": 512, 00:21:32.395 "num_blocks": 65536, 00:21:32.395 "uuid": "40dd28f8-273c-4b73-a7ed-80bdab24f6dd", 00:21:32.395 "assigned_rate_limits": { 00:21:32.395 "rw_ios_per_sec": 0, 00:21:32.395 "rw_mbytes_per_sec": 0, 00:21:32.395 "r_mbytes_per_sec": 0, 00:21:32.395 "w_mbytes_per_sec": 0 00:21:32.395 }, 00:21:32.395 "claimed": true, 00:21:32.395 "claim_type": "exclusive_write", 00:21:32.395 "zoned": false, 00:21:32.395 "supported_io_types": { 00:21:32.395 "read": true, 00:21:32.395 "write": true, 00:21:32.395 "unmap": true, 00:21:32.395 "flush": true, 00:21:32.395 "reset": true, 00:21:32.395 "nvme_admin": false, 00:21:32.395 "nvme_io": false, 00:21:32.395 "nvme_io_md": false, 00:21:32.395 "write_zeroes": true, 00:21:32.395 "zcopy": true, 00:21:32.395 "get_zone_info": false, 00:21:32.395 "zone_management": false, 00:21:32.395 "zone_append": false, 00:21:32.395 "compare": false, 00:21:32.395 "compare_and_write": false, 00:21:32.395 "abort": true, 00:21:32.395 "seek_hole": false, 00:21:32.395 "seek_data": false, 00:21:32.395 "copy": true, 00:21:32.395 "nvme_iov_md": false 00:21:32.395 }, 00:21:32.395 "memory_domains": [ 00:21:32.395 { 00:21:32.395 "dma_device_id": "system", 00:21:32.395 "dma_device_type": 1 00:21:32.396 }, 00:21:32.396 { 00:21:32.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.396 "dma_device_type": 2 00:21:32.396 } 00:21:32.396 ], 00:21:32.396 "driver_specific": {} 00:21:32.396 } 00:21:32.396 ] 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=concat 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.396 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.655 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:32.655 "name": "Existed_Raid", 00:21:32.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.655 "strip_size_kb": 64, 00:21:32.655 "state": "configuring", 00:21:32.655 "raid_level": "concat", 00:21:32.655 "superblock": false, 00:21:32.655 "num_base_bdevs": 4, 00:21:32.655 "num_base_bdevs_discovered": 1, 00:21:32.655 "num_base_bdevs_operational": 4, 00:21:32.655 "base_bdevs_list": [ 00:21:32.655 { 00:21:32.655 "name": "BaseBdev1", 00:21:32.655 "uuid": "40dd28f8-273c-4b73-a7ed-80bdab24f6dd", 00:21:32.655 "is_configured": true, 00:21:32.655 "data_offset": 0, 00:21:32.655 "data_size": 65536 00:21:32.655 }, 00:21:32.655 { 00:21:32.655 "name": "BaseBdev2", 00:21:32.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.655 "is_configured": false, 00:21:32.655 "data_offset": 0, 00:21:32.655 "data_size": 0 00:21:32.655 }, 00:21:32.655 { 00:21:32.655 "name": "BaseBdev3", 00:21:32.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.655 "is_configured": false, 00:21:32.655 "data_offset": 0, 00:21:32.655 "data_size": 0 00:21:32.655 }, 00:21:32.655 { 00:21:32.655 "name": "BaseBdev4", 00:21:32.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.655 "is_configured": false, 00:21:32.655 "data_offset": 0, 00:21:32.655 "data_size": 0 00:21:32.655 } 00:21:32.655 ] 00:21:32.655 }' 00:21:32.655 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:32.655 00:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.914 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:33.173 [2024-07-25 00:05:28.857653] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:33.173 [2024-07-25 00:05:28.857711] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:21:33.173 00:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:33.432 [2024-07-25 00:05:29.077768] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:33.432 [2024-07-25 00:05:29.080010] bdev.c:8190:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:21:33.432 [2024-07-25 00:05:29.080067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:33.432 [2024-07-25 00:05:29.080084] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:33.432 [2024-07-25 00:05:29.080099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:33.432 [2024-07-25 00:05:29.080110] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:33.432 [2024-07-25 00:05:29.080127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.432 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.691 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.691 "name": "Existed_Raid", 00:21:33.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.691 "strip_size_kb": 64, 00:21:33.691 "state": "configuring", 00:21:33.691 "raid_level": "concat", 00:21:33.691 "superblock": false, 00:21:33.691 "num_base_bdevs": 4, 00:21:33.691 "num_base_bdevs_discovered": 1, 00:21:33.691 "num_base_bdevs_operational": 4, 00:21:33.691 "base_bdevs_list": [ 00:21:33.691 { 00:21:33.691 "name": "BaseBdev1", 00:21:33.691 "uuid": "40dd28f8-273c-4b73-a7ed-80bdab24f6dd", 00:21:33.691 "is_configured": true, 00:21:33.691 "data_offset": 0, 00:21:33.691 "data_size": 65536 00:21:33.691 }, 00:21:33.691 { 00:21:33.691 "name": "BaseBdev2", 00:21:33.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.691 "is_configured": false, 00:21:33.691 "data_offset": 0, 00:21:33.691 "data_size": 0 00:21:33.691 }, 00:21:33.691 { 00:21:33.691 "name": "BaseBdev3", 00:21:33.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.691 "is_configured": false, 00:21:33.691 "data_offset": 0, 00:21:33.691 "data_size": 0 
00:21:33.691 }, 00:21:33.691 { 00:21:33.691 "name": "BaseBdev4", 00:21:33.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.691 "is_configured": false, 00:21:33.691 "data_offset": 0, 00:21:33.691 "data_size": 0 00:21:33.691 } 00:21:33.691 ] 00:21:33.691 }' 00:21:33.691 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.691 00:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.951 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:34.209 [2024-07-25 00:05:29.958284] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:34.209 BaseBdev2 00:21:34.209 00:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:34.209 00:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:34.209 00:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:34.209 00:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:34.209 00:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:34.209 00:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:34.209 00:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:34.470 00:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:34.729 [ 00:21:34.729 { 00:21:34.729 "name": "BaseBdev2", 00:21:34.729 "aliases": [ 00:21:34.729 "2c68f635-5655-4026-ad24-484d17415f88" 00:21:34.729 ], 00:21:34.729 "product_name": "Malloc disk", 00:21:34.729 "block_size": 512, 00:21:34.729 "num_blocks": 65536, 00:21:34.729 "uuid": "2c68f635-5655-4026-ad24-484d17415f88", 00:21:34.729 "assigned_rate_limits": { 00:21:34.729 "rw_ios_per_sec": 0, 00:21:34.729 "rw_mbytes_per_sec": 0, 00:21:34.729 "r_mbytes_per_sec": 0, 00:21:34.729 "w_mbytes_per_sec": 0 00:21:34.729 }, 00:21:34.729 "claimed": true, 00:21:34.729 "claim_type": "exclusive_write", 00:21:34.729 "zoned": false, 00:21:34.729 "supported_io_types": { 00:21:34.729 "read": true, 00:21:34.729 "write": true, 00:21:34.729 "unmap": true, 00:21:34.729 "flush": true, 00:21:34.729 "reset": true, 00:21:34.729 "nvme_admin": false, 00:21:34.729 "nvme_io": false, 00:21:34.729 "nvme_io_md": false, 00:21:34.729 "write_zeroes": true, 00:21:34.729 "zcopy": true, 00:21:34.729 "get_zone_info": false, 00:21:34.729 "zone_management": false, 00:21:34.729 "zone_append": false, 00:21:34.729 "compare": false, 00:21:34.729 "compare_and_write": false, 00:21:34.729 "abort": true, 00:21:34.729 "seek_hole": false, 00:21:34.729 "seek_data": false, 00:21:34.729 "copy": true, 00:21:34.729 "nvme_iov_md": false 00:21:34.729 }, 00:21:34.729 "memory_domains": [ 00:21:34.729 { 00:21:34.729 "dma_device_id": "system", 00:21:34.729 "dma_device_type": 1 00:21:34.729 }, 00:21:34.729 { 00:21:34.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.729 "dma_device_type": 2 00:21:34.729 } 00:21:34.729 ], 00:21:34.729 "driver_specific": {} 00:21:34.729 } 00:21:34.729 ] 00:21:34.729 
00:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.729 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.988 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:34.988 "name": "Existed_Raid", 00:21:34.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.988 "strip_size_kb": 64, 00:21:34.988 "state": "configuring", 00:21:34.988 "raid_level": "concat", 00:21:34.988 "superblock": false, 00:21:34.988 "num_base_bdevs": 4, 00:21:34.988 "num_base_bdevs_discovered": 2, 00:21:34.988 "num_base_bdevs_operational": 4, 00:21:34.988 "base_bdevs_list": [ 00:21:34.988 { 00:21:34.988 "name": "BaseBdev1", 00:21:34.988 "uuid": "40dd28f8-273c-4b73-a7ed-80bdab24f6dd", 00:21:34.988 "is_configured": true, 00:21:34.988 "data_offset": 0, 00:21:34.988 "data_size": 65536 00:21:34.988 }, 00:21:34.988 { 00:21:34.988 "name": "BaseBdev2", 00:21:34.988 "uuid": "2c68f635-5655-4026-ad24-484d17415f88", 00:21:34.988 "is_configured": true, 00:21:34.988 "data_offset": 0, 00:21:34.988 "data_size": 65536 00:21:34.988 }, 00:21:34.988 { 00:21:34.988 "name": "BaseBdev3", 00:21:34.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.988 "is_configured": false, 00:21:34.988 "data_offset": 0, 00:21:34.988 "data_size": 0 00:21:34.988 }, 00:21:34.988 { 00:21:34.988 "name": "BaseBdev4", 00:21:34.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.988 "is_configured": false, 00:21:34.988 "data_offset": 0, 00:21:34.988 "data_size": 0 00:21:34.988 } 00:21:34.988 ] 00:21:34.988 }' 00:21:34.988 00:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:34.988 00:05:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.247 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:35.505 [2024-07-25 00:05:31.275963] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:35.505 BaseBdev3 00:21:35.505 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:35.505 00:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:35.505 00:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:35.505 00:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:35.505 00:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:35.505 00:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:35.506 00:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:35.764 00:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:36.023 [ 00:21:36.023 { 00:21:36.023 "name": "BaseBdev3", 00:21:36.023 "aliases": [ 00:21:36.023 "3e928083-56cd-48c7-a25e-2c6761210afb" 00:21:36.023 ], 00:21:36.023 "product_name": "Malloc disk", 00:21:36.023 "block_size": 512, 00:21:36.023 "num_blocks": 65536, 00:21:36.023 "uuid": "3e928083-56cd-48c7-a25e-2c6761210afb", 00:21:36.023 "assigned_rate_limits": { 00:21:36.023 "rw_ios_per_sec": 0, 00:21:36.023 "rw_mbytes_per_sec": 0, 00:21:36.023 "r_mbytes_per_sec": 0, 00:21:36.023 "w_mbytes_per_sec": 0 00:21:36.023 }, 00:21:36.023 "claimed": true, 00:21:36.023 "claim_type": "exclusive_write", 00:21:36.023 "zoned": false, 00:21:36.023 "supported_io_types": { 00:21:36.023 "read": true, 00:21:36.023 "write": true, 00:21:36.023 "unmap": true, 00:21:36.023 "flush": true, 00:21:36.023 "reset": true, 00:21:36.023 "nvme_admin": false, 00:21:36.023 "nvme_io": false, 00:21:36.023 "nvme_io_md": false, 00:21:36.023 "write_zeroes": true, 00:21:36.023 "zcopy": true, 00:21:36.023 "get_zone_info": false, 00:21:36.023 "zone_management": false, 00:21:36.023 "zone_append": false, 00:21:36.023 "compare": false, 00:21:36.023 "compare_and_write": false, 00:21:36.023 "abort": true, 00:21:36.023 "seek_hole": false, 00:21:36.023 "seek_data": false, 00:21:36.023 "copy": true, 00:21:36.023 "nvme_iov_md": false 00:21:36.023 }, 00:21:36.023 "memory_domains": [ 00:21:36.023 { 00:21:36.023 "dma_device_id": "system", 00:21:36.023 "dma_device_type": 1 00:21:36.023 }, 00:21:36.023 { 00:21:36.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.023 "dma_device_type": 2 00:21:36.023 } 00:21:36.023 ], 00:21:36.024 "driver_specific": {} 00:21:36.024 } 00:21:36.024 ] 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:36.024 00:05:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.024 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.283 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:36.283 "name": "Existed_Raid", 00:21:36.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.283 "strip_size_kb": 64, 00:21:36.283 "state": "configuring", 00:21:36.283 "raid_level": "concat", 00:21:36.283 "superblock": false, 00:21:36.283 "num_base_bdevs": 4, 00:21:36.283 "num_base_bdevs_discovered": 3, 00:21:36.283 "num_base_bdevs_operational": 4, 00:21:36.283 "base_bdevs_list": [ 00:21:36.283 { 00:21:36.283 "name": "BaseBdev1", 00:21:36.283 "uuid": "40dd28f8-273c-4b73-a7ed-80bdab24f6dd", 00:21:36.283 "is_configured": true, 00:21:36.283 "data_offset": 0, 00:21:36.283 "data_size": 65536 00:21:36.283 }, 00:21:36.283 { 00:21:36.283 "name": "BaseBdev2", 00:21:36.283 "uuid": "2c68f635-5655-4026-ad24-484d17415f88", 00:21:36.283 "is_configured": true, 00:21:36.283 "data_offset": 0, 00:21:36.283 "data_size": 65536 00:21:36.283 }, 00:21:36.283 { 00:21:36.283 "name": "BaseBdev3", 00:21:36.283 "uuid": "3e928083-56cd-48c7-a25e-2c6761210afb", 00:21:36.283 "is_configured": true, 00:21:36.283 "data_offset": 0, 00:21:36.283 "data_size": 65536 00:21:36.283 }, 00:21:36.283 { 00:21:36.283 "name": "BaseBdev4", 00:21:36.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.283 "is_configured": false, 00:21:36.283 "data_offset": 0, 00:21:36.283 "data_size": 0 00:21:36.283 } 00:21:36.283 ] 00:21:36.283 }' 00:21:36.283 00:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:36.283 00:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.542 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:36.801 [2024-07-25 00:05:32.444780] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:36.801 [2024-07-25 00:05:32.445143] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:21:36.801 [2024-07-25 00:05:32.445192] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:36.801 [2024-07-25 00:05:32.445406] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:21:36.801 [2024-07-25 
00:05:32.445772] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:21:36.801 [2024-07-25 00:05:32.445917] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:21:36.801 [2024-07-25 00:05:32.446282] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.801 BaseBdev4 00:21:36.801 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:36.801 00:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:36.801 00:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:36.801 00:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:36.801 00:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:36.801 00:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:36.801 00:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:37.060 00:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:37.319 [ 00:21:37.320 { 00:21:37.320 "name": "BaseBdev4", 00:21:37.320 "aliases": [ 00:21:37.320 "9ff92fe8-f8de-4298-96b5-08b114654a96" 00:21:37.320 ], 00:21:37.320 "product_name": "Malloc disk", 00:21:37.320 "block_size": 512, 00:21:37.320 "num_blocks": 65536, 00:21:37.320 "uuid": "9ff92fe8-f8de-4298-96b5-08b114654a96", 00:21:37.320 "assigned_rate_limits": { 00:21:37.320 "rw_ios_per_sec": 0, 00:21:37.320 "rw_mbytes_per_sec": 0, 00:21:37.320 "r_mbytes_per_sec": 0, 00:21:37.320 "w_mbytes_per_sec": 0 00:21:37.320 }, 00:21:37.320 "claimed": true, 00:21:37.320 "claim_type": "exclusive_write", 00:21:37.320 "zoned": false, 00:21:37.320 "supported_io_types": { 00:21:37.320 "read": true, 00:21:37.320 "write": true, 00:21:37.320 "unmap": true, 00:21:37.320 "flush": true, 00:21:37.320 "reset": true, 00:21:37.320 "nvme_admin": false, 00:21:37.320 "nvme_io": false, 00:21:37.320 "nvme_io_md": false, 00:21:37.320 "write_zeroes": true, 00:21:37.320 "zcopy": true, 00:21:37.320 "get_zone_info": false, 00:21:37.320 "zone_management": false, 00:21:37.320 "zone_append": false, 00:21:37.320 "compare": false, 00:21:37.320 "compare_and_write": false, 00:21:37.320 "abort": true, 00:21:37.320 "seek_hole": false, 00:21:37.320 "seek_data": false, 00:21:37.320 "copy": true, 00:21:37.320 "nvme_iov_md": false 00:21:37.320 }, 00:21:37.320 "memory_domains": [ 00:21:37.320 { 00:21:37.320 "dma_device_id": "system", 00:21:37.320 "dma_device_type": 1 00:21:37.320 }, 00:21:37.320 { 00:21:37.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.320 "dma_device_type": 2 00:21:37.320 } 00:21:37.320 ], 00:21:37.320 "driver_specific": {} 00:21:37.320 } 00:21:37.320 ] 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.320 00:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.579 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.579 "name": "Existed_Raid", 00:21:37.579 "uuid": "9cfea1c2-0fce-4d61-a659-952362a1a925", 00:21:37.579 "strip_size_kb": 64, 00:21:37.579 "state": "online", 00:21:37.579 "raid_level": "concat", 00:21:37.579 "superblock": false, 00:21:37.579 "num_base_bdevs": 4, 00:21:37.579 "num_base_bdevs_discovered": 4, 00:21:37.579 "num_base_bdevs_operational": 4, 00:21:37.579 "base_bdevs_list": [ 00:21:37.579 { 00:21:37.579 "name": "BaseBdev1", 00:21:37.579 "uuid": "40dd28f8-273c-4b73-a7ed-80bdab24f6dd", 00:21:37.579 "is_configured": true, 00:21:37.579 "data_offset": 0, 00:21:37.579 "data_size": 65536 00:21:37.579 }, 00:21:37.579 { 00:21:37.579 "name": "BaseBdev2", 00:21:37.579 "uuid": "2c68f635-5655-4026-ad24-484d17415f88", 00:21:37.579 "is_configured": true, 00:21:37.579 "data_offset": 0, 00:21:37.579 "data_size": 65536 00:21:37.579 }, 00:21:37.579 { 00:21:37.579 "name": "BaseBdev3", 00:21:37.579 "uuid": "3e928083-56cd-48c7-a25e-2c6761210afb", 00:21:37.579 "is_configured": true, 00:21:37.579 "data_offset": 0, 00:21:37.579 "data_size": 65536 00:21:37.579 }, 00:21:37.579 { 00:21:37.579 "name": "BaseBdev4", 00:21:37.579 "uuid": "9ff92fe8-f8de-4298-96b5-08b114654a96", 00:21:37.579 "is_configured": true, 00:21:37.579 "data_offset": 0, 00:21:37.579 "data_size": 65536 00:21:37.579 } 00:21:37.579 ] 00:21:37.579 }' 00:21:37.579 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.579 00:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
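# Sketch of the property-check loop that verify_raid_bdev_properties runs in
# the trace below (not captured output; assumes the raid app is still serving
# /var/tmp/spdk-raid.sock, and the variable names here are illustrative):
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# collect the configured base bdevs of the raid volume
names=$($rpc bdev_get_bdevs -b Existed_Raid \
  | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' \
  <<< "$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')")
for name in $names; do
  info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
  [[ $(jq .block_size <<< "$info") == 512 ]]    # malloc base bdevs use 512-byte blocks
  [[ $(jq .md_size <<< "$info") == null ]]      # no separate metadata region expected
  [[ $(jq .dif_type <<< "$info") == null ]]     # no DIF protection configured
done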
00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:37.837 [2024-07-25 00:05:33.657501] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:37.837 "name": "Existed_Raid", 00:21:37.837 "aliases": [ 00:21:37.837 "9cfea1c2-0fce-4d61-a659-952362a1a925" 00:21:37.837 ], 00:21:37.837 "product_name": "Raid Volume", 00:21:37.837 "block_size": 512, 00:21:37.837 "num_blocks": 262144, 00:21:37.837 "uuid": "9cfea1c2-0fce-4d61-a659-952362a1a925", 00:21:37.837 "assigned_rate_limits": { 00:21:37.837 "rw_ios_per_sec": 0, 00:21:37.837 "rw_mbytes_per_sec": 0, 00:21:37.837 "r_mbytes_per_sec": 0, 00:21:37.837 "w_mbytes_per_sec": 0 00:21:37.837 }, 00:21:37.837 "claimed": false, 00:21:37.837 "zoned": false, 00:21:37.837 "supported_io_types": { 00:21:37.837 "read": true, 00:21:37.837 "write": true, 00:21:37.837 "unmap": true, 00:21:37.837 "flush": true, 00:21:37.837 "reset": true, 00:21:37.837 "nvme_admin": false, 00:21:37.837 "nvme_io": false, 00:21:37.837 "nvme_io_md": false, 00:21:37.837 "write_zeroes": true, 00:21:37.837 "zcopy": false, 00:21:37.837 "get_zone_info": false, 00:21:37.837 "zone_management": false, 00:21:37.837 "zone_append": false, 00:21:37.837 "compare": false, 00:21:37.837 "compare_and_write": false, 00:21:37.837 "abort": false, 00:21:37.837 "seek_hole": false, 00:21:37.837 "seek_data": false, 00:21:37.837 "copy": false, 00:21:37.837 "nvme_iov_md": false 00:21:37.837 }, 00:21:37.837 "memory_domains": [ 00:21:37.837 { 00:21:37.837 "dma_device_id": "system", 00:21:37.837 "dma_device_type": 1 00:21:37.837 }, 00:21:37.837 { 00:21:37.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.837 "dma_device_type": 2 00:21:37.837 }, 00:21:37.837 { 00:21:37.837 "dma_device_id": "system", 00:21:37.837 "dma_device_type": 1 00:21:37.837 }, 00:21:37.837 { 00:21:37.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.837 "dma_device_type": 2 00:21:37.837 }, 00:21:37.837 { 00:21:37.837 "dma_device_id": "system", 00:21:37.837 "dma_device_type": 1 00:21:37.837 }, 00:21:37.837 { 00:21:37.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.837 "dma_device_type": 2 00:21:37.837 }, 00:21:37.837 { 00:21:37.837 "dma_device_id": "system", 00:21:37.837 "dma_device_type": 1 00:21:37.837 }, 00:21:37.837 { 00:21:37.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.837 "dma_device_type": 2 00:21:37.837 } 00:21:37.837 ], 00:21:37.837 "driver_specific": { 00:21:37.837 "raid": { 00:21:37.837 "uuid": "9cfea1c2-0fce-4d61-a659-952362a1a925", 00:21:37.837 "strip_size_kb": 64, 00:21:37.837 "state": "online", 00:21:37.837 "raid_level": "concat", 00:21:37.837 "superblock": false, 00:21:37.837 "num_base_bdevs": 4, 00:21:37.837 "num_base_bdevs_discovered": 4, 00:21:37.837 "num_base_bdevs_operational": 4, 00:21:37.837 "base_bdevs_list": [ 00:21:37.837 { 00:21:37.837 "name": "BaseBdev1", 00:21:37.837 "uuid": "40dd28f8-273c-4b73-a7ed-80bdab24f6dd", 00:21:37.837 "is_configured": true, 00:21:37.837 "data_offset": 0, 00:21:37.837 "data_size": 65536 00:21:37.837 }, 00:21:37.837 { 00:21:37.837 "name": "BaseBdev2", 00:21:37.837 "uuid": 
"2c68f635-5655-4026-ad24-484d17415f88", 00:21:37.837 "is_configured": true, 00:21:37.837 "data_offset": 0, 00:21:37.837 "data_size": 65536 00:21:37.837 }, 00:21:37.837 { 00:21:37.837 "name": "BaseBdev3", 00:21:37.837 "uuid": "3e928083-56cd-48c7-a25e-2c6761210afb", 00:21:37.837 "is_configured": true, 00:21:37.837 "data_offset": 0, 00:21:37.837 "data_size": 65536 00:21:37.837 }, 00:21:37.837 { 00:21:37.837 "name": "BaseBdev4", 00:21:37.837 "uuid": "9ff92fe8-f8de-4298-96b5-08b114654a96", 00:21:37.837 "is_configured": true, 00:21:37.837 "data_offset": 0, 00:21:37.837 "data_size": 65536 00:21:37.837 } 00:21:37.837 ] 00:21:37.837 } 00:21:37.837 } 00:21:37.837 }' 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:37.837 BaseBdev2 00:21:37.837 BaseBdev3 00:21:37.837 BaseBdev4' 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:37.837 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:38.096 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:38.096 "name": "BaseBdev1", 00:21:38.096 "aliases": [ 00:21:38.096 "40dd28f8-273c-4b73-a7ed-80bdab24f6dd" 00:21:38.096 ], 00:21:38.096 "product_name": "Malloc disk", 00:21:38.096 "block_size": 512, 00:21:38.096 "num_blocks": 65536, 00:21:38.096 "uuid": "40dd28f8-273c-4b73-a7ed-80bdab24f6dd", 00:21:38.096 "assigned_rate_limits": { 00:21:38.096 "rw_ios_per_sec": 0, 00:21:38.096 "rw_mbytes_per_sec": 0, 00:21:38.096 "r_mbytes_per_sec": 0, 00:21:38.096 "w_mbytes_per_sec": 0 00:21:38.096 }, 00:21:38.096 "claimed": true, 00:21:38.096 "claim_type": "exclusive_write", 00:21:38.096 "zoned": false, 00:21:38.096 "supported_io_types": { 00:21:38.096 "read": true, 00:21:38.096 "write": true, 00:21:38.096 "unmap": true, 00:21:38.096 "flush": true, 00:21:38.096 "reset": true, 00:21:38.096 "nvme_admin": false, 00:21:38.096 "nvme_io": false, 00:21:38.096 "nvme_io_md": false, 00:21:38.096 "write_zeroes": true, 00:21:38.096 "zcopy": true, 00:21:38.096 "get_zone_info": false, 00:21:38.096 "zone_management": false, 00:21:38.096 "zone_append": false, 00:21:38.096 "compare": false, 00:21:38.096 "compare_and_write": false, 00:21:38.096 "abort": true, 00:21:38.096 "seek_hole": false, 00:21:38.096 "seek_data": false, 00:21:38.096 "copy": true, 00:21:38.096 "nvme_iov_md": false 00:21:38.096 }, 00:21:38.096 "memory_domains": [ 00:21:38.096 { 00:21:38.096 "dma_device_id": "system", 00:21:38.096 "dma_device_type": 1 00:21:38.096 }, 00:21:38.096 { 00:21:38.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.096 "dma_device_type": 2 00:21:38.096 } 00:21:38.096 ], 00:21:38.096 "driver_specific": {} 00:21:38.096 }' 00:21:38.096 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:38.355 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:38.355 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:38.355 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:21:38.355 00:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:38.355 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:38.355 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:38.355 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:38.355 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:38.355 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:38.355 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:38.355 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:38.355 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:38.355 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:38.355 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:38.614 "name": "BaseBdev2", 00:21:38.614 "aliases": [ 00:21:38.614 "2c68f635-5655-4026-ad24-484d17415f88" 00:21:38.614 ], 00:21:38.614 "product_name": "Malloc disk", 00:21:38.614 "block_size": 512, 00:21:38.614 "num_blocks": 65536, 00:21:38.614 "uuid": "2c68f635-5655-4026-ad24-484d17415f88", 00:21:38.614 "assigned_rate_limits": { 00:21:38.614 "rw_ios_per_sec": 0, 00:21:38.614 "rw_mbytes_per_sec": 0, 00:21:38.614 "r_mbytes_per_sec": 0, 00:21:38.614 "w_mbytes_per_sec": 0 00:21:38.614 }, 00:21:38.614 "claimed": true, 00:21:38.614 "claim_type": "exclusive_write", 00:21:38.614 "zoned": false, 00:21:38.614 "supported_io_types": { 00:21:38.614 "read": true, 00:21:38.614 "write": true, 00:21:38.614 "unmap": true, 00:21:38.614 "flush": true, 00:21:38.614 "reset": true, 00:21:38.614 "nvme_admin": false, 00:21:38.614 "nvme_io": false, 00:21:38.614 "nvme_io_md": false, 00:21:38.614 "write_zeroes": true, 00:21:38.614 "zcopy": true, 00:21:38.614 "get_zone_info": false, 00:21:38.614 "zone_management": false, 00:21:38.614 "zone_append": false, 00:21:38.614 "compare": false, 00:21:38.614 "compare_and_write": false, 00:21:38.614 "abort": true, 00:21:38.614 "seek_hole": false, 00:21:38.614 "seek_data": false, 00:21:38.614 "copy": true, 00:21:38.614 "nvme_iov_md": false 00:21:38.614 }, 00:21:38.614 "memory_domains": [ 00:21:38.614 { 00:21:38.614 "dma_device_id": "system", 00:21:38.614 "dma_device_type": 1 00:21:38.614 }, 00:21:38.614 { 00:21:38.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.614 "dma_device_type": 2 00:21:38.614 } 00:21:38.614 ], 00:21:38.614 "driver_specific": {} 00:21:38.614 }' 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:38.614 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:38.873 "name": "BaseBdev3", 00:21:38.873 "aliases": [ 00:21:38.873 "3e928083-56cd-48c7-a25e-2c6761210afb" 00:21:38.873 ], 00:21:38.873 "product_name": "Malloc disk", 00:21:38.873 "block_size": 512, 00:21:38.873 "num_blocks": 65536, 00:21:38.873 "uuid": "3e928083-56cd-48c7-a25e-2c6761210afb", 00:21:38.873 "assigned_rate_limits": { 00:21:38.873 "rw_ios_per_sec": 0, 00:21:38.873 "rw_mbytes_per_sec": 0, 00:21:38.873 "r_mbytes_per_sec": 0, 00:21:38.873 "w_mbytes_per_sec": 0 00:21:38.873 }, 00:21:38.873 "claimed": true, 00:21:38.873 "claim_type": "exclusive_write", 00:21:38.873 "zoned": false, 00:21:38.873 "supported_io_types": { 00:21:38.873 "read": true, 00:21:38.873 "write": true, 00:21:38.873 "unmap": true, 00:21:38.873 "flush": true, 00:21:38.873 "reset": true, 00:21:38.873 "nvme_admin": false, 00:21:38.873 "nvme_io": false, 00:21:38.873 "nvme_io_md": false, 00:21:38.873 "write_zeroes": true, 00:21:38.873 "zcopy": true, 00:21:38.873 "get_zone_info": false, 00:21:38.873 "zone_management": false, 00:21:38.873 "zone_append": false, 00:21:38.873 "compare": false, 00:21:38.873 "compare_and_write": false, 00:21:38.873 "abort": true, 00:21:38.873 "seek_hole": false, 00:21:38.873 "seek_data": false, 00:21:38.873 "copy": true, 00:21:38.873 "nvme_iov_md": false 00:21:38.873 }, 00:21:38.873 "memory_domains": [ 00:21:38.873 { 00:21:38.873 "dma_device_id": "system", 00:21:38.873 "dma_device_type": 1 00:21:38.873 }, 00:21:38.873 { 00:21:38.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.873 "dma_device_type": 2 00:21:38.873 } 00:21:38.873 ], 00:21:38.873 "driver_specific": {} 00:21:38.873 }' 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:38.873 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:39.132 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:39.132 "name": "BaseBdev4", 00:21:39.132 "aliases": [ 00:21:39.132 "9ff92fe8-f8de-4298-96b5-08b114654a96" 00:21:39.132 ], 00:21:39.132 "product_name": "Malloc disk", 00:21:39.132 "block_size": 512, 00:21:39.132 "num_blocks": 65536, 00:21:39.132 "uuid": "9ff92fe8-f8de-4298-96b5-08b114654a96", 00:21:39.132 "assigned_rate_limits": { 00:21:39.132 "rw_ios_per_sec": 0, 00:21:39.132 "rw_mbytes_per_sec": 0, 00:21:39.132 "r_mbytes_per_sec": 0, 00:21:39.132 "w_mbytes_per_sec": 0 00:21:39.132 }, 00:21:39.132 "claimed": true, 00:21:39.132 "claim_type": "exclusive_write", 00:21:39.132 "zoned": false, 00:21:39.132 "supported_io_types": { 00:21:39.132 "read": true, 00:21:39.132 "write": true, 00:21:39.132 "unmap": true, 00:21:39.132 "flush": true, 00:21:39.132 "reset": true, 00:21:39.132 "nvme_admin": false, 00:21:39.132 "nvme_io": false, 00:21:39.132 "nvme_io_md": false, 00:21:39.132 "write_zeroes": true, 00:21:39.132 "zcopy": true, 00:21:39.132 "get_zone_info": false, 00:21:39.132 "zone_management": false, 00:21:39.132 "zone_append": false, 00:21:39.132 "compare": false, 00:21:39.132 "compare_and_write": false, 00:21:39.132 "abort": true, 00:21:39.132 "seek_hole": false, 00:21:39.132 "seek_data": false, 00:21:39.132 "copy": true, 00:21:39.132 "nvme_iov_md": false 00:21:39.132 }, 00:21:39.132 "memory_domains": [ 00:21:39.132 { 00:21:39.132 "dma_device_id": "system", 00:21:39.132 "dma_device_type": 1 00:21:39.132 }, 00:21:39.132 { 00:21:39.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.132 "dma_device_type": 2 00:21:39.132 } 00:21:39.132 ], 00:21:39.132 "driver_specific": {} 00:21:39.132 }' 00:21:39.132 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.132 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.132 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:39.132 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:39.132 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:39.132 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:39.132 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.132 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.132 00:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:39.132 00:05:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.391 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.391 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:39.391 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:39.391 [2024-07-25 00:05:35.229814] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:39.391 [2024-07-25 00:05:35.229855] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:39.391 [2024-07-25 00:05:35.229947] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.650 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.909 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.909 "name": "Existed_Raid", 00:21:39.909 "uuid": "9cfea1c2-0fce-4d61-a659-952362a1a925", 00:21:39.909 "strip_size_kb": 64, 00:21:39.909 "state": "offline", 00:21:39.909 "raid_level": "concat", 00:21:39.909 "superblock": false, 00:21:39.909 "num_base_bdevs": 4, 00:21:39.909 "num_base_bdevs_discovered": 3, 00:21:39.909 "num_base_bdevs_operational": 3, 00:21:39.909 "base_bdevs_list": [ 00:21:39.909 { 00:21:39.909 "name": null, 00:21:39.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.909 "is_configured": false, 00:21:39.909 "data_offset": 0, 00:21:39.909 "data_size": 65536 00:21:39.909 }, 00:21:39.909 { 00:21:39.909 "name": "BaseBdev2", 
00:21:39.909 "uuid": "2c68f635-5655-4026-ad24-484d17415f88", 00:21:39.909 "is_configured": true, 00:21:39.909 "data_offset": 0, 00:21:39.909 "data_size": 65536 00:21:39.909 }, 00:21:39.909 { 00:21:39.909 "name": "BaseBdev3", 00:21:39.909 "uuid": "3e928083-56cd-48c7-a25e-2c6761210afb", 00:21:39.909 "is_configured": true, 00:21:39.909 "data_offset": 0, 00:21:39.909 "data_size": 65536 00:21:39.909 }, 00:21:39.909 { 00:21:39.909 "name": "BaseBdev4", 00:21:39.909 "uuid": "9ff92fe8-f8de-4298-96b5-08b114654a96", 00:21:39.909 "is_configured": true, 00:21:39.909 "data_offset": 0, 00:21:39.909 "data_size": 65536 00:21:39.909 } 00:21:39.909 ] 00:21:39.909 }' 00:21:39.909 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.909 00:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.168 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:40.168 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:40.168 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:40.168 00:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.426 00:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:40.426 00:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:40.426 00:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:40.685 [2024-07-25 00:05:36.419588] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:40.686 00:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:40.686 00:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:40.686 00:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.686 00:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:40.945 00:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:40.945 00:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:40.945 00:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:41.204 [2024-07-25 00:05:36.981098] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:41.463 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:41.463 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:41.463 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:41.463 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.463 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:41.463 00:05:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:41.463 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:41.721 [2024-07-25 00:05:37.515414] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:41.721 [2024-07-25 00:05:37.515693] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:21:41.980 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:41.980 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:41.980 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.980 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:41.980 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:41.980 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:41.980 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:41.980 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:41.980 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:41.980 00:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:42.239 BaseBdev2 00:21:42.239 00:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:42.239 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:42.239 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:42.239 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:42.239 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:42.239 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:42.239 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:42.497 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:42.757 [ 00:21:42.757 { 00:21:42.757 "name": "BaseBdev2", 00:21:42.757 "aliases": [ 00:21:42.757 "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e" 00:21:42.757 ], 00:21:42.757 "product_name": "Malloc disk", 00:21:42.757 "block_size": 512, 00:21:42.757 "num_blocks": 65536, 00:21:42.757 "uuid": "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e", 00:21:42.757 "assigned_rate_limits": { 00:21:42.757 "rw_ios_per_sec": 0, 00:21:42.757 "rw_mbytes_per_sec": 0, 00:21:42.757 "r_mbytes_per_sec": 0, 00:21:42.757 "w_mbytes_per_sec": 0 00:21:42.757 }, 00:21:42.757 "claimed": false, 00:21:42.757 "zoned": false, 00:21:42.757 "supported_io_types": { 00:21:42.757 "read": true, 00:21:42.757 "write": true, 00:21:42.757 "unmap": 
true, 00:21:42.757 "flush": true, 00:21:42.757 "reset": true, 00:21:42.757 "nvme_admin": false, 00:21:42.757 "nvme_io": false, 00:21:42.757 "nvme_io_md": false, 00:21:42.757 "write_zeroes": true, 00:21:42.757 "zcopy": true, 00:21:42.757 "get_zone_info": false, 00:21:42.757 "zone_management": false, 00:21:42.757 "zone_append": false, 00:21:42.757 "compare": false, 00:21:42.757 "compare_and_write": false, 00:21:42.757 "abort": true, 00:21:42.757 "seek_hole": false, 00:21:42.757 "seek_data": false, 00:21:42.757 "copy": true, 00:21:42.757 "nvme_iov_md": false 00:21:42.757 }, 00:21:42.757 "memory_domains": [ 00:21:42.757 { 00:21:42.757 "dma_device_id": "system", 00:21:42.757 "dma_device_type": 1 00:21:42.757 }, 00:21:42.757 { 00:21:42.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.757 "dma_device_type": 2 00:21:42.757 } 00:21:42.757 ], 00:21:42.757 "driver_specific": {} 00:21:42.757 } 00:21:42.757 ] 00:21:42.757 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:42.757 00:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:42.757 00:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:42.757 00:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:43.016 BaseBdev3 00:21:43.016 00:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:43.016 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:43.016 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:43.016 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:43.016 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:43.016 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:43.016 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:43.275 00:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:43.534 [ 00:21:43.534 { 00:21:43.534 "name": "BaseBdev3", 00:21:43.534 "aliases": [ 00:21:43.534 "9a81c92a-9960-42a9-839e-1bba80f08b07" 00:21:43.534 ], 00:21:43.534 "product_name": "Malloc disk", 00:21:43.534 "block_size": 512, 00:21:43.534 "num_blocks": 65536, 00:21:43.534 "uuid": "9a81c92a-9960-42a9-839e-1bba80f08b07", 00:21:43.534 "assigned_rate_limits": { 00:21:43.534 "rw_ios_per_sec": 0, 00:21:43.534 "rw_mbytes_per_sec": 0, 00:21:43.534 "r_mbytes_per_sec": 0, 00:21:43.534 "w_mbytes_per_sec": 0 00:21:43.534 }, 00:21:43.534 "claimed": false, 00:21:43.534 "zoned": false, 00:21:43.534 "supported_io_types": { 00:21:43.534 "read": true, 00:21:43.534 "write": true, 00:21:43.534 "unmap": true, 00:21:43.534 "flush": true, 00:21:43.534 "reset": true, 00:21:43.534 "nvme_admin": false, 00:21:43.534 "nvme_io": false, 00:21:43.534 "nvme_io_md": false, 00:21:43.534 "write_zeroes": true, 00:21:43.534 "zcopy": true, 00:21:43.534 "get_zone_info": false, 00:21:43.534 "zone_management": false, 00:21:43.534 "zone_append": false, 00:21:43.534 
"compare": false, 00:21:43.534 "compare_and_write": false, 00:21:43.534 "abort": true, 00:21:43.534 "seek_hole": false, 00:21:43.534 "seek_data": false, 00:21:43.534 "copy": true, 00:21:43.534 "nvme_iov_md": false 00:21:43.534 }, 00:21:43.534 "memory_domains": [ 00:21:43.534 { 00:21:43.534 "dma_device_id": "system", 00:21:43.534 "dma_device_type": 1 00:21:43.534 }, 00:21:43.534 { 00:21:43.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.534 "dma_device_type": 2 00:21:43.534 } 00:21:43.534 ], 00:21:43.534 "driver_specific": {} 00:21:43.534 } 00:21:43.534 ] 00:21:43.534 00:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:43.534 00:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:43.534 00:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:43.534 00:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:43.794 BaseBdev4 00:21:43.794 00:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:43.794 00:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:43.794 00:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:43.794 00:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:43.794 00:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:43.794 00:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:43.794 00:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:43.794 00:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:44.053 [ 00:21:44.053 { 00:21:44.053 "name": "BaseBdev4", 00:21:44.053 "aliases": [ 00:21:44.053 "0fe7c82c-085b-48f1-bb3f-88441747d6b0" 00:21:44.053 ], 00:21:44.053 "product_name": "Malloc disk", 00:21:44.053 "block_size": 512, 00:21:44.053 "num_blocks": 65536, 00:21:44.053 "uuid": "0fe7c82c-085b-48f1-bb3f-88441747d6b0", 00:21:44.053 "assigned_rate_limits": { 00:21:44.053 "rw_ios_per_sec": 0, 00:21:44.053 "rw_mbytes_per_sec": 0, 00:21:44.053 "r_mbytes_per_sec": 0, 00:21:44.053 "w_mbytes_per_sec": 0 00:21:44.053 }, 00:21:44.053 "claimed": false, 00:21:44.053 "zoned": false, 00:21:44.053 "supported_io_types": { 00:21:44.053 "read": true, 00:21:44.053 "write": true, 00:21:44.053 "unmap": true, 00:21:44.053 "flush": true, 00:21:44.053 "reset": true, 00:21:44.053 "nvme_admin": false, 00:21:44.053 "nvme_io": false, 00:21:44.053 "nvme_io_md": false, 00:21:44.053 "write_zeroes": true, 00:21:44.053 "zcopy": true, 00:21:44.053 "get_zone_info": false, 00:21:44.053 "zone_management": false, 00:21:44.053 "zone_append": false, 00:21:44.053 "compare": false, 00:21:44.053 "compare_and_write": false, 00:21:44.053 "abort": true, 00:21:44.053 "seek_hole": false, 00:21:44.053 "seek_data": false, 00:21:44.053 "copy": true, 00:21:44.053 "nvme_iov_md": false 00:21:44.053 }, 00:21:44.053 "memory_domains": [ 00:21:44.053 { 00:21:44.053 "dma_device_id": "system", 00:21:44.053 
"dma_device_type": 1 00:21:44.053 }, 00:21:44.053 { 00:21:44.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.053 "dma_device_type": 2 00:21:44.053 } 00:21:44.053 ], 00:21:44.053 "driver_specific": {} 00:21:44.053 } 00:21:44.053 ] 00:21:44.053 00:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:44.053 00:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:44.053 00:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:44.053 00:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:44.312 [2024-07-25 00:05:40.078841] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:44.312 [2024-07-25 00:05:40.079194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:44.312 [2024-07-25 00:05:40.079242] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:44.312 [2024-07-25 00:05:40.081447] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:44.312 [2024-07-25 00:05:40.081509] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.312 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.571 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.571 "name": "Existed_Raid", 00:21:44.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.571 "strip_size_kb": 64, 00:21:44.571 "state": "configuring", 00:21:44.571 "raid_level": "concat", 00:21:44.571 "superblock": false, 00:21:44.571 "num_base_bdevs": 4, 00:21:44.571 "num_base_bdevs_discovered": 3, 00:21:44.571 "num_base_bdevs_operational": 4, 00:21:44.571 "base_bdevs_list": [ 00:21:44.571 { 00:21:44.571 "name": "BaseBdev1", 00:21:44.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.571 
"is_configured": false, 00:21:44.571 "data_offset": 0, 00:21:44.571 "data_size": 0 00:21:44.571 }, 00:21:44.571 { 00:21:44.571 "name": "BaseBdev2", 00:21:44.571 "uuid": "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e", 00:21:44.571 "is_configured": true, 00:21:44.571 "data_offset": 0, 00:21:44.571 "data_size": 65536 00:21:44.571 }, 00:21:44.571 { 00:21:44.571 "name": "BaseBdev3", 00:21:44.571 "uuid": "9a81c92a-9960-42a9-839e-1bba80f08b07", 00:21:44.571 "is_configured": true, 00:21:44.571 "data_offset": 0, 00:21:44.571 "data_size": 65536 00:21:44.571 }, 00:21:44.571 { 00:21:44.571 "name": "BaseBdev4", 00:21:44.571 "uuid": "0fe7c82c-085b-48f1-bb3f-88441747d6b0", 00:21:44.571 "is_configured": true, 00:21:44.571 "data_offset": 0, 00:21:44.571 "data_size": 65536 00:21:44.571 } 00:21:44.571 ] 00:21:44.571 }' 00:21:44.571 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.571 00:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.829 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:45.088 [2024-07-25 00:05:40.791005] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.088 00:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.348 00:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.348 "name": "Existed_Raid", 00:21:45.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.348 "strip_size_kb": 64, 00:21:45.348 "state": "configuring", 00:21:45.348 "raid_level": "concat", 00:21:45.348 "superblock": false, 00:21:45.348 "num_base_bdevs": 4, 00:21:45.348 "num_base_bdevs_discovered": 2, 00:21:45.348 "num_base_bdevs_operational": 4, 00:21:45.348 "base_bdevs_list": [ 00:21:45.348 { 00:21:45.348 "name": "BaseBdev1", 00:21:45.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.348 "is_configured": false, 00:21:45.348 "data_offset": 0, 00:21:45.348 "data_size": 0 00:21:45.348 }, 00:21:45.348 { 00:21:45.348 "name": null, 
00:21:45.348 "uuid": "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e", 00:21:45.348 "is_configured": false, 00:21:45.348 "data_offset": 0, 00:21:45.348 "data_size": 65536 00:21:45.348 }, 00:21:45.348 { 00:21:45.348 "name": "BaseBdev3", 00:21:45.348 "uuid": "9a81c92a-9960-42a9-839e-1bba80f08b07", 00:21:45.348 "is_configured": true, 00:21:45.348 "data_offset": 0, 00:21:45.348 "data_size": 65536 00:21:45.348 }, 00:21:45.348 { 00:21:45.348 "name": "BaseBdev4", 00:21:45.348 "uuid": "0fe7c82c-085b-48f1-bb3f-88441747d6b0", 00:21:45.348 "is_configured": true, 00:21:45.348 "data_offset": 0, 00:21:45.348 "data_size": 65536 00:21:45.348 } 00:21:45.348 ] 00:21:45.348 }' 00:21:45.348 00:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.348 00:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.606 00:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:45.606 00:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.864 00:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:45.865 00:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:46.123 [2024-07-25 00:05:41.828366] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:46.123 BaseBdev1 00:21:46.123 00:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:46.123 00:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:46.123 00:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:46.123 00:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:46.123 00:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:46.123 00:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:46.123 00:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:46.382 00:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:46.641 [ 00:21:46.641 { 00:21:46.641 "name": "BaseBdev1", 00:21:46.641 "aliases": [ 00:21:46.641 "7980842e-3b91-4f7e-ad1b-6bd9a65be536" 00:21:46.641 ], 00:21:46.641 "product_name": "Malloc disk", 00:21:46.641 "block_size": 512, 00:21:46.641 "num_blocks": 65536, 00:21:46.641 "uuid": "7980842e-3b91-4f7e-ad1b-6bd9a65be536", 00:21:46.641 "assigned_rate_limits": { 00:21:46.641 "rw_ios_per_sec": 0, 00:21:46.641 "rw_mbytes_per_sec": 0, 00:21:46.641 "r_mbytes_per_sec": 0, 00:21:46.641 "w_mbytes_per_sec": 0 00:21:46.641 }, 00:21:46.641 "claimed": true, 00:21:46.641 "claim_type": "exclusive_write", 00:21:46.641 "zoned": false, 00:21:46.641 "supported_io_types": { 00:21:46.641 "read": true, 00:21:46.641 "write": true, 00:21:46.641 "unmap": true, 00:21:46.641 "flush": true, 00:21:46.641 "reset": true, 00:21:46.641 "nvme_admin": false, 00:21:46.641 "nvme_io": 
false, 00:21:46.641 "nvme_io_md": false, 00:21:46.641 "write_zeroes": true, 00:21:46.641 "zcopy": true, 00:21:46.641 "get_zone_info": false, 00:21:46.641 "zone_management": false, 00:21:46.641 "zone_append": false, 00:21:46.641 "compare": false, 00:21:46.641 "compare_and_write": false, 00:21:46.641 "abort": true, 00:21:46.641 "seek_hole": false, 00:21:46.641 "seek_data": false, 00:21:46.641 "copy": true, 00:21:46.641 "nvme_iov_md": false 00:21:46.641 }, 00:21:46.641 "memory_domains": [ 00:21:46.641 { 00:21:46.641 "dma_device_id": "system", 00:21:46.641 "dma_device_type": 1 00:21:46.641 }, 00:21:46.641 { 00:21:46.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.641 "dma_device_type": 2 00:21:46.641 } 00:21:46.641 ], 00:21:46.641 "driver_specific": {} 00:21:46.641 } 00:21:46.641 ] 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.641 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.900 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.900 "name": "Existed_Raid", 00:21:46.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.900 "strip_size_kb": 64, 00:21:46.900 "state": "configuring", 00:21:46.900 "raid_level": "concat", 00:21:46.900 "superblock": false, 00:21:46.900 "num_base_bdevs": 4, 00:21:46.900 "num_base_bdevs_discovered": 3, 00:21:46.900 "num_base_bdevs_operational": 4, 00:21:46.900 "base_bdevs_list": [ 00:21:46.900 { 00:21:46.900 "name": "BaseBdev1", 00:21:46.900 "uuid": "7980842e-3b91-4f7e-ad1b-6bd9a65be536", 00:21:46.900 "is_configured": true, 00:21:46.900 "data_offset": 0, 00:21:46.900 "data_size": 65536 00:21:46.900 }, 00:21:46.900 { 00:21:46.900 "name": null, 00:21:46.900 "uuid": "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e", 00:21:46.900 "is_configured": false, 00:21:46.900 "data_offset": 0, 00:21:46.900 "data_size": 65536 00:21:46.900 }, 00:21:46.900 { 00:21:46.900 "name": "BaseBdev3", 00:21:46.900 "uuid": "9a81c92a-9960-42a9-839e-1bba80f08b07", 00:21:46.900 "is_configured": true, 00:21:46.900 "data_offset": 0, 00:21:46.900 "data_size": 65536 00:21:46.900 }, 
00:21:46.900 { 00:21:46.900 "name": "BaseBdev4", 00:21:46.900 "uuid": "0fe7c82c-085b-48f1-bb3f-88441747d6b0", 00:21:46.900 "is_configured": true, 00:21:46.900 "data_offset": 0, 00:21:46.900 "data_size": 65536 00:21:46.900 } 00:21:46.900 ] 00:21:46.900 }' 00:21:46.900 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.900 00:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.163 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.163 00:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:47.471 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:47.471 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:47.729 [2024-07-25 00:05:43.369067] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.729 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.988 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.988 "name": "Existed_Raid", 00:21:47.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.988 "strip_size_kb": 64, 00:21:47.988 "state": "configuring", 00:21:47.988 "raid_level": "concat", 00:21:47.988 "superblock": false, 00:21:47.988 "num_base_bdevs": 4, 00:21:47.988 "num_base_bdevs_discovered": 2, 00:21:47.988 "num_base_bdevs_operational": 4, 00:21:47.988 "base_bdevs_list": [ 00:21:47.988 { 00:21:47.988 "name": "BaseBdev1", 00:21:47.988 "uuid": "7980842e-3b91-4f7e-ad1b-6bd9a65be536", 00:21:47.988 "is_configured": true, 00:21:47.988 "data_offset": 0, 00:21:47.988 "data_size": 65536 00:21:47.988 }, 00:21:47.988 { 00:21:47.988 "name": null, 00:21:47.988 "uuid": "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e", 00:21:47.988 "is_configured": false, 00:21:47.988 "data_offset": 
0, 00:21:47.988 "data_size": 65536 00:21:47.988 }, 00:21:47.988 { 00:21:47.988 "name": null, 00:21:47.988 "uuid": "9a81c92a-9960-42a9-839e-1bba80f08b07", 00:21:47.988 "is_configured": false, 00:21:47.988 "data_offset": 0, 00:21:47.988 "data_size": 65536 00:21:47.988 }, 00:21:47.988 { 00:21:47.988 "name": "BaseBdev4", 00:21:47.988 "uuid": "0fe7c82c-085b-48f1-bb3f-88441747d6b0", 00:21:47.988 "is_configured": true, 00:21:47.988 "data_offset": 0, 00:21:47.988 "data_size": 65536 00:21:47.988 } 00:21:47.988 ] 00:21:47.988 }' 00:21:47.988 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.988 00:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.246 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.246 00:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:48.505 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:48.505 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:48.764 [2024-07-25 00:05:44.425334] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.764 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.022 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.022 "name": "Existed_Raid", 00:21:49.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.022 "strip_size_kb": 64, 00:21:49.022 "state": "configuring", 00:21:49.022 "raid_level": "concat", 00:21:49.022 "superblock": false, 00:21:49.022 "num_base_bdevs": 4, 00:21:49.022 "num_base_bdevs_discovered": 3, 00:21:49.022 "num_base_bdevs_operational": 4, 00:21:49.022 "base_bdevs_list": [ 00:21:49.022 { 00:21:49.022 "name": "BaseBdev1", 00:21:49.022 "uuid": 
"7980842e-3b91-4f7e-ad1b-6bd9a65be536", 00:21:49.022 "is_configured": true, 00:21:49.022 "data_offset": 0, 00:21:49.022 "data_size": 65536 00:21:49.022 }, 00:21:49.022 { 00:21:49.022 "name": null, 00:21:49.022 "uuid": "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e", 00:21:49.022 "is_configured": false, 00:21:49.022 "data_offset": 0, 00:21:49.022 "data_size": 65536 00:21:49.022 }, 00:21:49.022 { 00:21:49.022 "name": "BaseBdev3", 00:21:49.022 "uuid": "9a81c92a-9960-42a9-839e-1bba80f08b07", 00:21:49.022 "is_configured": true, 00:21:49.022 "data_offset": 0, 00:21:49.022 "data_size": 65536 00:21:49.022 }, 00:21:49.022 { 00:21:49.022 "name": "BaseBdev4", 00:21:49.022 "uuid": "0fe7c82c-085b-48f1-bb3f-88441747d6b0", 00:21:49.022 "is_configured": true, 00:21:49.022 "data_offset": 0, 00:21:49.022 "data_size": 65536 00:21:49.022 } 00:21:49.023 ] 00:21:49.023 }' 00:21:49.023 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.023 00:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.281 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.281 00:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:49.540 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:49.540 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:49.799 [2024-07-25 00:05:45.509665] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.799 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.059 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:50.059 "name": "Existed_Raid", 00:21:50.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.059 "strip_size_kb": 64, 00:21:50.059 "state": "configuring", 00:21:50.059 "raid_level": 
"concat", 00:21:50.059 "superblock": false, 00:21:50.059 "num_base_bdevs": 4, 00:21:50.059 "num_base_bdevs_discovered": 2, 00:21:50.059 "num_base_bdevs_operational": 4, 00:21:50.059 "base_bdevs_list": [ 00:21:50.059 { 00:21:50.059 "name": null, 00:21:50.059 "uuid": "7980842e-3b91-4f7e-ad1b-6bd9a65be536", 00:21:50.059 "is_configured": false, 00:21:50.059 "data_offset": 0, 00:21:50.059 "data_size": 65536 00:21:50.059 }, 00:21:50.059 { 00:21:50.059 "name": null, 00:21:50.059 "uuid": "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e", 00:21:50.059 "is_configured": false, 00:21:50.059 "data_offset": 0, 00:21:50.059 "data_size": 65536 00:21:50.059 }, 00:21:50.059 { 00:21:50.059 "name": "BaseBdev3", 00:21:50.059 "uuid": "9a81c92a-9960-42a9-839e-1bba80f08b07", 00:21:50.059 "is_configured": true, 00:21:50.059 "data_offset": 0, 00:21:50.059 "data_size": 65536 00:21:50.059 }, 00:21:50.059 { 00:21:50.059 "name": "BaseBdev4", 00:21:50.059 "uuid": "0fe7c82c-085b-48f1-bb3f-88441747d6b0", 00:21:50.059 "is_configured": true, 00:21:50.059 "data_offset": 0, 00:21:50.059 "data_size": 65536 00:21:50.059 } 00:21:50.059 ] 00:21:50.059 }' 00:21:50.059 00:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:50.059 00:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.318 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.318 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:50.577 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:50.577 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:50.838 [2024-07-25 00:05:46.576988] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.838 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:21:51.097 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.097 "name": "Existed_Raid", 00:21:51.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.097 "strip_size_kb": 64, 00:21:51.097 "state": "configuring", 00:21:51.097 "raid_level": "concat", 00:21:51.097 "superblock": false, 00:21:51.097 "num_base_bdevs": 4, 00:21:51.097 "num_base_bdevs_discovered": 3, 00:21:51.097 "num_base_bdevs_operational": 4, 00:21:51.097 "base_bdevs_list": [ 00:21:51.097 { 00:21:51.097 "name": null, 00:21:51.097 "uuid": "7980842e-3b91-4f7e-ad1b-6bd9a65be536", 00:21:51.097 "is_configured": false, 00:21:51.097 "data_offset": 0, 00:21:51.097 "data_size": 65536 00:21:51.097 }, 00:21:51.097 { 00:21:51.097 "name": "BaseBdev2", 00:21:51.097 "uuid": "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e", 00:21:51.097 "is_configured": true, 00:21:51.097 "data_offset": 0, 00:21:51.097 "data_size": 65536 00:21:51.097 }, 00:21:51.097 { 00:21:51.097 "name": "BaseBdev3", 00:21:51.097 "uuid": "9a81c92a-9960-42a9-839e-1bba80f08b07", 00:21:51.097 "is_configured": true, 00:21:51.097 "data_offset": 0, 00:21:51.097 "data_size": 65536 00:21:51.097 }, 00:21:51.097 { 00:21:51.098 "name": "BaseBdev4", 00:21:51.098 "uuid": "0fe7c82c-085b-48f1-bb3f-88441747d6b0", 00:21:51.098 "is_configured": true, 00:21:51.098 "data_offset": 0, 00:21:51.098 "data_size": 65536 00:21:51.098 } 00:21:51.098 ] 00:21:51.098 }' 00:21:51.098 00:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.098 00:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.357 00:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.357 00:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:51.617 00:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:51.617 00:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.617 00:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:51.875 00:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7980842e-3b91-4f7e-ad1b-6bd9a65be536 00:21:52.134 [2024-07-25 00:05:47.880205] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:52.134 [2024-07-25 00:05:47.880278] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:21:52.134 [2024-07-25 00:05:47.880321] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:52.134 [2024-07-25 00:05:47.880454] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ee0 00:21:52.134 [2024-07-25 00:05:47.880848] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:21:52.134 [2024-07-25 00:05:47.880870] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000009380 00:21:52.134 [2024-07-25 00:05:47.881189] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.134 NewBaseBdev 00:21:52.134 00:05:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:52.134 00:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:52.134 00:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:52.134 00:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:52.134 00:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:52.134 00:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:52.134 00:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:52.393 00:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:52.653 [ 00:21:52.653 { 00:21:52.653 "name": "NewBaseBdev", 00:21:52.653 "aliases": [ 00:21:52.653 "7980842e-3b91-4f7e-ad1b-6bd9a65be536" 00:21:52.653 ], 00:21:52.653 "product_name": "Malloc disk", 00:21:52.653 "block_size": 512, 00:21:52.653 "num_blocks": 65536, 00:21:52.653 "uuid": "7980842e-3b91-4f7e-ad1b-6bd9a65be536", 00:21:52.653 "assigned_rate_limits": { 00:21:52.653 "rw_ios_per_sec": 0, 00:21:52.653 "rw_mbytes_per_sec": 0, 00:21:52.653 "r_mbytes_per_sec": 0, 00:21:52.653 "w_mbytes_per_sec": 0 00:21:52.653 }, 00:21:52.653 "claimed": true, 00:21:52.653 "claim_type": "exclusive_write", 00:21:52.653 "zoned": false, 00:21:52.653 "supported_io_types": { 00:21:52.653 "read": true, 00:21:52.653 "write": true, 00:21:52.653 "unmap": true, 00:21:52.653 "flush": true, 00:21:52.653 "reset": true, 00:21:52.653 "nvme_admin": false, 00:21:52.653 "nvme_io": false, 00:21:52.653 "nvme_io_md": false, 00:21:52.653 "write_zeroes": true, 00:21:52.653 "zcopy": true, 00:21:52.653 "get_zone_info": false, 00:21:52.653 "zone_management": false, 00:21:52.653 "zone_append": false, 00:21:52.653 "compare": false, 00:21:52.653 "compare_and_write": false, 00:21:52.653 "abort": true, 00:21:52.653 "seek_hole": false, 00:21:52.653 "seek_data": false, 00:21:52.653 "copy": true, 00:21:52.653 "nvme_iov_md": false 00:21:52.653 }, 00:21:52.653 "memory_domains": [ 00:21:52.653 { 00:21:52.653 "dma_device_id": "system", 00:21:52.653 "dma_device_type": 1 00:21:52.653 }, 00:21:52.653 { 00:21:52.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.653 "dma_device_type": 2 00:21:52.653 } 00:21:52.653 ], 00:21:52.653 "driver_specific": {} 00:21:52.653 } 00:21:52.653 ] 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:52.653 00:05:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.653 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.912 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:52.912 "name": "Existed_Raid", 00:21:52.912 "uuid": "f7309640-b477-4be0-bb48-dad64d5bf851", 00:21:52.912 "strip_size_kb": 64, 00:21:52.912 "state": "online", 00:21:52.912 "raid_level": "concat", 00:21:52.912 "superblock": false, 00:21:52.912 "num_base_bdevs": 4, 00:21:52.912 "num_base_bdevs_discovered": 4, 00:21:52.912 "num_base_bdevs_operational": 4, 00:21:52.912 "base_bdevs_list": [ 00:21:52.912 { 00:21:52.912 "name": "NewBaseBdev", 00:21:52.912 "uuid": "7980842e-3b91-4f7e-ad1b-6bd9a65be536", 00:21:52.912 "is_configured": true, 00:21:52.912 "data_offset": 0, 00:21:52.912 "data_size": 65536 00:21:52.912 }, 00:21:52.912 { 00:21:52.912 "name": "BaseBdev2", 00:21:52.912 "uuid": "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e", 00:21:52.912 "is_configured": true, 00:21:52.912 "data_offset": 0, 00:21:52.912 "data_size": 65536 00:21:52.912 }, 00:21:52.912 { 00:21:52.912 "name": "BaseBdev3", 00:21:52.912 "uuid": "9a81c92a-9960-42a9-839e-1bba80f08b07", 00:21:52.912 "is_configured": true, 00:21:52.912 "data_offset": 0, 00:21:52.912 "data_size": 65536 00:21:52.912 }, 00:21:52.912 { 00:21:52.912 "name": "BaseBdev4", 00:21:52.912 "uuid": "0fe7c82c-085b-48f1-bb3f-88441747d6b0", 00:21:52.912 "is_configured": true, 00:21:52.912 "data_offset": 0, 00:21:52.912 "data_size": 65536 00:21:52.912 } 00:21:52.912 ] 00:21:52.912 }' 00:21:52.912 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:52.912 00:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.172 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:53.172 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:53.172 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:53.172 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:53.172 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:53.172 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:53.172 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:53.172 00:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:53.431 [2024-07-25 00:05:49.068976] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:53.431 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:53.431 
"name": "Existed_Raid", 00:21:53.431 "aliases": [ 00:21:53.431 "f7309640-b477-4be0-bb48-dad64d5bf851" 00:21:53.431 ], 00:21:53.431 "product_name": "Raid Volume", 00:21:53.431 "block_size": 512, 00:21:53.431 "num_blocks": 262144, 00:21:53.431 "uuid": "f7309640-b477-4be0-bb48-dad64d5bf851", 00:21:53.431 "assigned_rate_limits": { 00:21:53.431 "rw_ios_per_sec": 0, 00:21:53.431 "rw_mbytes_per_sec": 0, 00:21:53.431 "r_mbytes_per_sec": 0, 00:21:53.431 "w_mbytes_per_sec": 0 00:21:53.431 }, 00:21:53.431 "claimed": false, 00:21:53.431 "zoned": false, 00:21:53.431 "supported_io_types": { 00:21:53.431 "read": true, 00:21:53.431 "write": true, 00:21:53.431 "unmap": true, 00:21:53.431 "flush": true, 00:21:53.431 "reset": true, 00:21:53.431 "nvme_admin": false, 00:21:53.431 "nvme_io": false, 00:21:53.431 "nvme_io_md": false, 00:21:53.431 "write_zeroes": true, 00:21:53.431 "zcopy": false, 00:21:53.431 "get_zone_info": false, 00:21:53.431 "zone_management": false, 00:21:53.431 "zone_append": false, 00:21:53.431 "compare": false, 00:21:53.431 "compare_and_write": false, 00:21:53.431 "abort": false, 00:21:53.431 "seek_hole": false, 00:21:53.431 "seek_data": false, 00:21:53.431 "copy": false, 00:21:53.431 "nvme_iov_md": false 00:21:53.431 }, 00:21:53.431 "memory_domains": [ 00:21:53.431 { 00:21:53.431 "dma_device_id": "system", 00:21:53.431 "dma_device_type": 1 00:21:53.431 }, 00:21:53.431 { 00:21:53.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.431 "dma_device_type": 2 00:21:53.431 }, 00:21:53.431 { 00:21:53.431 "dma_device_id": "system", 00:21:53.431 "dma_device_type": 1 00:21:53.431 }, 00:21:53.431 { 00:21:53.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.431 "dma_device_type": 2 00:21:53.431 }, 00:21:53.431 { 00:21:53.431 "dma_device_id": "system", 00:21:53.431 "dma_device_type": 1 00:21:53.431 }, 00:21:53.431 { 00:21:53.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.431 "dma_device_type": 2 00:21:53.431 }, 00:21:53.431 { 00:21:53.431 "dma_device_id": "system", 00:21:53.431 "dma_device_type": 1 00:21:53.431 }, 00:21:53.431 { 00:21:53.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.431 "dma_device_type": 2 00:21:53.431 } 00:21:53.431 ], 00:21:53.431 "driver_specific": { 00:21:53.431 "raid": { 00:21:53.431 "uuid": "f7309640-b477-4be0-bb48-dad64d5bf851", 00:21:53.431 "strip_size_kb": 64, 00:21:53.431 "state": "online", 00:21:53.431 "raid_level": "concat", 00:21:53.431 "superblock": false, 00:21:53.431 "num_base_bdevs": 4, 00:21:53.431 "num_base_bdevs_discovered": 4, 00:21:53.431 "num_base_bdevs_operational": 4, 00:21:53.431 "base_bdevs_list": [ 00:21:53.431 { 00:21:53.431 "name": "NewBaseBdev", 00:21:53.431 "uuid": "7980842e-3b91-4f7e-ad1b-6bd9a65be536", 00:21:53.431 "is_configured": true, 00:21:53.431 "data_offset": 0, 00:21:53.431 "data_size": 65536 00:21:53.431 }, 00:21:53.431 { 00:21:53.431 "name": "BaseBdev2", 00:21:53.431 "uuid": "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e", 00:21:53.431 "is_configured": true, 00:21:53.431 "data_offset": 0, 00:21:53.431 "data_size": 65536 00:21:53.431 }, 00:21:53.431 { 00:21:53.431 "name": "BaseBdev3", 00:21:53.431 "uuid": "9a81c92a-9960-42a9-839e-1bba80f08b07", 00:21:53.431 "is_configured": true, 00:21:53.431 "data_offset": 0, 00:21:53.431 "data_size": 65536 00:21:53.431 }, 00:21:53.431 { 00:21:53.431 "name": "BaseBdev4", 00:21:53.431 "uuid": "0fe7c82c-085b-48f1-bb3f-88441747d6b0", 00:21:53.431 "is_configured": true, 00:21:53.431 "data_offset": 0, 00:21:53.431 "data_size": 65536 00:21:53.431 } 00:21:53.431 ] 00:21:53.431 } 00:21:53.431 } 
00:21:53.431 }' 00:21:53.431 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:53.431 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:53.431 BaseBdev2 00:21:53.431 BaseBdev3 00:21:53.431 BaseBdev4' 00:21:53.431 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:53.431 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:53.431 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:53.690 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:53.690 "name": "NewBaseBdev", 00:21:53.690 "aliases": [ 00:21:53.690 "7980842e-3b91-4f7e-ad1b-6bd9a65be536" 00:21:53.690 ], 00:21:53.690 "product_name": "Malloc disk", 00:21:53.690 "block_size": 512, 00:21:53.690 "num_blocks": 65536, 00:21:53.690 "uuid": "7980842e-3b91-4f7e-ad1b-6bd9a65be536", 00:21:53.690 "assigned_rate_limits": { 00:21:53.690 "rw_ios_per_sec": 0, 00:21:53.690 "rw_mbytes_per_sec": 0, 00:21:53.690 "r_mbytes_per_sec": 0, 00:21:53.690 "w_mbytes_per_sec": 0 00:21:53.690 }, 00:21:53.690 "claimed": true, 00:21:53.690 "claim_type": "exclusive_write", 00:21:53.690 "zoned": false, 00:21:53.690 "supported_io_types": { 00:21:53.690 "read": true, 00:21:53.690 "write": true, 00:21:53.690 "unmap": true, 00:21:53.690 "flush": true, 00:21:53.690 "reset": true, 00:21:53.690 "nvme_admin": false, 00:21:53.690 "nvme_io": false, 00:21:53.690 "nvme_io_md": false, 00:21:53.690 "write_zeroes": true, 00:21:53.690 "zcopy": true, 00:21:53.690 "get_zone_info": false, 00:21:53.691 "zone_management": false, 00:21:53.691 "zone_append": false, 00:21:53.691 "compare": false, 00:21:53.691 "compare_and_write": false, 00:21:53.691 "abort": true, 00:21:53.691 "seek_hole": false, 00:21:53.691 "seek_data": false, 00:21:53.691 "copy": true, 00:21:53.691 "nvme_iov_md": false 00:21:53.691 }, 00:21:53.691 "memory_domains": [ 00:21:53.691 { 00:21:53.691 "dma_device_id": "system", 00:21:53.691 "dma_device_type": 1 00:21:53.691 }, 00:21:53.691 { 00:21:53.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.691 "dma_device_type": 2 00:21:53.691 } 00:21:53.691 ], 00:21:53.691 "driver_specific": {} 00:21:53.691 }' 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.691 00:05:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:53.691 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:53.950 "name": "BaseBdev2", 00:21:53.950 "aliases": [ 00:21:53.950 "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e" 00:21:53.950 ], 00:21:53.950 "product_name": "Malloc disk", 00:21:53.950 "block_size": 512, 00:21:53.950 "num_blocks": 65536, 00:21:53.950 "uuid": "5dd5bfb2-b5b8-49b4-ab41-971d7424f75e", 00:21:53.950 "assigned_rate_limits": { 00:21:53.950 "rw_ios_per_sec": 0, 00:21:53.950 "rw_mbytes_per_sec": 0, 00:21:53.950 "r_mbytes_per_sec": 0, 00:21:53.950 "w_mbytes_per_sec": 0 00:21:53.950 }, 00:21:53.950 "claimed": true, 00:21:53.950 "claim_type": "exclusive_write", 00:21:53.950 "zoned": false, 00:21:53.950 "supported_io_types": { 00:21:53.950 "read": true, 00:21:53.950 "write": true, 00:21:53.950 "unmap": true, 00:21:53.950 "flush": true, 00:21:53.950 "reset": true, 00:21:53.950 "nvme_admin": false, 00:21:53.950 "nvme_io": false, 00:21:53.950 "nvme_io_md": false, 00:21:53.950 "write_zeroes": true, 00:21:53.950 "zcopy": true, 00:21:53.950 "get_zone_info": false, 00:21:53.950 "zone_management": false, 00:21:53.950 "zone_append": false, 00:21:53.950 "compare": false, 00:21:53.950 "compare_and_write": false, 00:21:53.950 "abort": true, 00:21:53.950 "seek_hole": false, 00:21:53.950 "seek_data": false, 00:21:53.950 "copy": true, 00:21:53.950 "nvme_iov_md": false 00:21:53.950 }, 00:21:53.950 "memory_domains": [ 00:21:53.950 { 00:21:53.950 "dma_device_id": "system", 00:21:53.950 "dma_device_type": 1 00:21:53.950 }, 00:21:53.950 { 00:21:53.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.950 "dma_device_type": 2 00:21:53.950 } 00:21:53.950 ], 00:21:53.950 "driver_specific": {} 00:21:53.950 }' 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:53.950 
00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:53.950 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:54.209 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:54.209 "name": "BaseBdev3", 00:21:54.209 "aliases": [ 00:21:54.209 "9a81c92a-9960-42a9-839e-1bba80f08b07" 00:21:54.209 ], 00:21:54.209 "product_name": "Malloc disk", 00:21:54.209 "block_size": 512, 00:21:54.209 "num_blocks": 65536, 00:21:54.209 "uuid": "9a81c92a-9960-42a9-839e-1bba80f08b07", 00:21:54.209 "assigned_rate_limits": { 00:21:54.209 "rw_ios_per_sec": 0, 00:21:54.209 "rw_mbytes_per_sec": 0, 00:21:54.209 "r_mbytes_per_sec": 0, 00:21:54.209 "w_mbytes_per_sec": 0 00:21:54.209 }, 00:21:54.209 "claimed": true, 00:21:54.209 "claim_type": "exclusive_write", 00:21:54.209 "zoned": false, 00:21:54.209 "supported_io_types": { 00:21:54.209 "read": true, 00:21:54.209 "write": true, 00:21:54.209 "unmap": true, 00:21:54.209 "flush": true, 00:21:54.209 "reset": true, 00:21:54.209 "nvme_admin": false, 00:21:54.209 "nvme_io": false, 00:21:54.209 "nvme_io_md": false, 00:21:54.209 "write_zeroes": true, 00:21:54.209 "zcopy": true, 00:21:54.209 "get_zone_info": false, 00:21:54.209 "zone_management": false, 00:21:54.209 "zone_append": false, 00:21:54.209 "compare": false, 00:21:54.209 "compare_and_write": false, 00:21:54.209 "abort": true, 00:21:54.209 "seek_hole": false, 00:21:54.209 "seek_data": false, 00:21:54.209 "copy": true, 00:21:54.209 "nvme_iov_md": false 00:21:54.209 }, 00:21:54.209 "memory_domains": [ 00:21:54.209 { 00:21:54.209 "dma_device_id": "system", 00:21:54.209 "dma_device_type": 1 00:21:54.209 }, 00:21:54.209 { 00:21:54.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.209 "dma_device_type": 2 00:21:54.209 } 00:21:54.209 ], 00:21:54.209 "driver_specific": {} 00:21:54.209 }' 00:21:54.209 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.209 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.209 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:54.209 00:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.209 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.209 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:54.209 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.209 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.209 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:54.209 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.209 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.209 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:54.209 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:54.209 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:54.209 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:54.467 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:54.467 "name": "BaseBdev4", 00:21:54.467 "aliases": [ 00:21:54.467 "0fe7c82c-085b-48f1-bb3f-88441747d6b0" 00:21:54.467 ], 00:21:54.467 "product_name": "Malloc disk", 00:21:54.468 "block_size": 512, 00:21:54.468 "num_blocks": 65536, 00:21:54.468 "uuid": "0fe7c82c-085b-48f1-bb3f-88441747d6b0", 00:21:54.468 "assigned_rate_limits": { 00:21:54.468 "rw_ios_per_sec": 0, 00:21:54.468 "rw_mbytes_per_sec": 0, 00:21:54.468 "r_mbytes_per_sec": 0, 00:21:54.468 "w_mbytes_per_sec": 0 00:21:54.468 }, 00:21:54.468 "claimed": true, 00:21:54.468 "claim_type": "exclusive_write", 00:21:54.468 "zoned": false, 00:21:54.468 "supported_io_types": { 00:21:54.468 "read": true, 00:21:54.468 "write": true, 00:21:54.468 "unmap": true, 00:21:54.468 "flush": true, 00:21:54.468 "reset": true, 00:21:54.468 "nvme_admin": false, 00:21:54.468 "nvme_io": false, 00:21:54.468 "nvme_io_md": false, 00:21:54.468 "write_zeroes": true, 00:21:54.468 "zcopy": true, 00:21:54.468 "get_zone_info": false, 00:21:54.468 "zone_management": false, 00:21:54.468 "zone_append": false, 00:21:54.468 "compare": false, 00:21:54.468 "compare_and_write": false, 00:21:54.468 "abort": true, 00:21:54.468 "seek_hole": false, 00:21:54.468 "seek_data": false, 00:21:54.468 "copy": true, 00:21:54.468 "nvme_iov_md": false 00:21:54.468 }, 00:21:54.468 "memory_domains": [ 00:21:54.468 { 00:21:54.468 "dma_device_id": "system", 00:21:54.468 "dma_device_type": 1 00:21:54.468 }, 00:21:54.468 { 00:21:54.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.468 "dma_device_type": 2 00:21:54.468 } 00:21:54.468 ], 00:21:54.468 "driver_specific": {} 00:21:54.468 }' 00:21:54.468 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.468 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.468 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:54.468 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.468 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.468 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:54.468 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.468 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.468 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:54.468 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.726 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.726 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:54.727 [2024-07-25 00:05:50.549083] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:54.727 [2024-07-25 00:05:50.549123] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:21:54.727 [2024-07-25 00:05:50.549204] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:54.727 [2024-07-25 00:05:50.549281] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:54.727 [2024-07-25 00:05:50.549312] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name Existed_Raid, state offline 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 90882 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90882 ']' 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 90882 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90882 00:21:54.727 killing process with pid 90882 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90882' 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 90882 00:21:54.727 [2024-07-25 00:05:50.591953] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:54.727 00:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 90882 00:21:55.294 [2024-07-25 00:05:50.896099] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:56.230 00:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:56.230 00:21:56.230 real 0m26.785s 00:21:56.230 user 0m46.863s 00:21:56.230 sys 0m4.141s 00:21:56.230 ************************************ 00:21:56.230 END TEST raid_state_function_test 00:21:56.230 ************************************ 00:21:56.230 00:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:56.230 00:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.230 00:05:51 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:21:56.230 00:05:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:56.230 00:05:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:56.230 00:05:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:56.230 ************************************ 00:21:56.230 START TEST raid_state_function_test_sb 00:21:56.230 ************************************ 00:21:56.230 00:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:21:56.230 00:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:21:56.230 00:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:56.230 00:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:56.230 
00:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:56.230 00:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:56.230 00:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:56.230 00:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:56.230 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:56.231 Process raid pid: 91864 00:21:56.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
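Before the sb variant can issue any RPCs, the harness forks the bdev_svc target and blocks until its UNIX-domain socket answers — that is what produces the "Process raid pid" and "Waiting for process to start up..." messages below. Condensed, the sequence looks roughly like this (binary path and flags are taken from the trace; the rpc_get_methods readiness probe is an assumption, and the real waitforlisten helper in autotest_common.sh does more bookkeeping):

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; do
        kill -0 "$raid_pid" || exit 1  # give up if the target died during startup
        sleep 0.1
    done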
00:21:56.231 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=91864 00:21:56.231 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 91864' 00:21:56.231 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:56.231 00:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 91864 /var/tmp/spdk-raid.sock 00:21:56.231 00:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91864 ']' 00:21:56.231 00:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:56.231 00:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.231 00:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:56.231 00:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.231 00:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:56.231 [2024-07-25 00:05:52.094876] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:21:56.231 [2024-07-25 00:05:52.095182] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.489 [2024-07-25 00:05:52.290401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.748 [2024-07-25 00:05:52.467055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.007 [2024-07-25 00:05:52.642542] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:57.266 00:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.266 00:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:21:57.266 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:57.524 [2024-07-25 00:05:53.304450] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:57.524 [2024-07-25 00:05:53.304533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:57.524 [2024-07-25 00:05:53.304550] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:57.524 [2024-07-25 00:05:53.304566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:57.524 [2024-07-25 00:05:53.304576] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:57.524 [2024-07-25 00:05:53.304589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:57.524 [2024-07-25 00:05:53.304598] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:57.524 [2024-07-25 00:05:53.304611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.524 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.781 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.781 "name": "Existed_Raid", 00:21:57.781 "uuid": "c3d4ff5e-32e2-46df-b51d-18366cbd8633", 00:21:57.781 "strip_size_kb": 64, 00:21:57.781 "state": "configuring", 00:21:57.781 "raid_level": "concat", 00:21:57.781 "superblock": true, 00:21:57.781 "num_base_bdevs": 4, 00:21:57.781 "num_base_bdevs_discovered": 0, 00:21:57.781 "num_base_bdevs_operational": 4, 00:21:57.781 "base_bdevs_list": [ 00:21:57.781 { 00:21:57.781 "name": "BaseBdev1", 00:21:57.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.781 "is_configured": false, 00:21:57.781 "data_offset": 0, 00:21:57.781 "data_size": 0 00:21:57.781 }, 00:21:57.781 { 00:21:57.781 "name": "BaseBdev2", 00:21:57.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.781 "is_configured": false, 00:21:57.781 "data_offset": 0, 00:21:57.781 "data_size": 0 00:21:57.781 }, 00:21:57.781 { 00:21:57.781 "name": "BaseBdev3", 00:21:57.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.781 "is_configured": false, 00:21:57.781 "data_offset": 0, 00:21:57.781 "data_size": 0 00:21:57.781 }, 00:21:57.781 { 00:21:57.781 "name": "BaseBdev4", 00:21:57.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.781 "is_configured": false, 00:21:57.781 "data_offset": 0, 00:21:57.781 "data_size": 0 00:21:57.781 } 00:21:57.781 ] 00:21:57.781 }' 00:21:57.781 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.781 00:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:58.039 00:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:58.297 [2024-07-25 00:05:54.040515] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:58.297 [2024-07-25 00:05:54.040563] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:21:58.297 00:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:58.556 [2024-07-25 00:05:54.256604] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:58.556 [2024-07-25 00:05:54.256688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:58.556 [2024-07-25 00:05:54.256702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:58.556 [2024-07-25 00:05:54.256731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:58.556 [2024-07-25 00:05:54.256740] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:58.556 [2024-07-25 00:05:54.256752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:58.556 [2024-07-25 00:05:54.256761] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:58.556 [2024-07-25 00:05:54.256773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:58.556 00:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:58.817 [2024-07-25 00:05:54.551958] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:58.817 BaseBdev1 00:21:58.817 00:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:58.817 00:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:58.817 00:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:58.817 00:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:58.817 00:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:58.817 00:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:58.817 00:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:59.075 00:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:59.332 [ 00:21:59.332 { 00:21:59.332 "name": "BaseBdev1", 00:21:59.332 "aliases": [ 00:21:59.332 "d8ab94d4-a934-4e13-9a49-91fb13dea6fb" 00:21:59.332 ], 00:21:59.332 "product_name": "Malloc disk", 00:21:59.333 "block_size": 512, 00:21:59.333 "num_blocks": 65536, 00:21:59.333 "uuid": "d8ab94d4-a934-4e13-9a49-91fb13dea6fb", 00:21:59.333 "assigned_rate_limits": { 00:21:59.333 "rw_ios_per_sec": 0, 00:21:59.333 "rw_mbytes_per_sec": 0, 00:21:59.333 "r_mbytes_per_sec": 0, 00:21:59.333 "w_mbytes_per_sec": 0 00:21:59.333 }, 00:21:59.333 "claimed": true, 00:21:59.333 "claim_type": "exclusive_write", 00:21:59.333 "zoned": false, 00:21:59.333 "supported_io_types": { 00:21:59.333 "read": true, 00:21:59.333 
"write": true, 00:21:59.333 "unmap": true, 00:21:59.333 "flush": true, 00:21:59.333 "reset": true, 00:21:59.333 "nvme_admin": false, 00:21:59.333 "nvme_io": false, 00:21:59.333 "nvme_io_md": false, 00:21:59.333 "write_zeroes": true, 00:21:59.333 "zcopy": true, 00:21:59.333 "get_zone_info": false, 00:21:59.333 "zone_management": false, 00:21:59.333 "zone_append": false, 00:21:59.333 "compare": false, 00:21:59.333 "compare_and_write": false, 00:21:59.333 "abort": true, 00:21:59.333 "seek_hole": false, 00:21:59.333 "seek_data": false, 00:21:59.333 "copy": true, 00:21:59.333 "nvme_iov_md": false 00:21:59.333 }, 00:21:59.333 "memory_domains": [ 00:21:59.333 { 00:21:59.333 "dma_device_id": "system", 00:21:59.333 "dma_device_type": 1 00:21:59.333 }, 00:21:59.333 { 00:21:59.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.333 "dma_device_type": 2 00:21:59.333 } 00:21:59.333 ], 00:21:59.333 "driver_specific": {} 00:21:59.333 } 00:21:59.333 ] 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.333 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.591 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:59.591 "name": "Existed_Raid", 00:21:59.591 "uuid": "207dca67-0e8a-497e-aa06-43f3708032e9", 00:21:59.591 "strip_size_kb": 64, 00:21:59.591 "state": "configuring", 00:21:59.591 "raid_level": "concat", 00:21:59.591 "superblock": true, 00:21:59.591 "num_base_bdevs": 4, 00:21:59.591 "num_base_bdevs_discovered": 1, 00:21:59.591 "num_base_bdevs_operational": 4, 00:21:59.591 "base_bdevs_list": [ 00:21:59.591 { 00:21:59.591 "name": "BaseBdev1", 00:21:59.591 "uuid": "d8ab94d4-a934-4e13-9a49-91fb13dea6fb", 00:21:59.591 "is_configured": true, 00:21:59.591 "data_offset": 2048, 00:21:59.591 "data_size": 63488 00:21:59.591 }, 00:21:59.591 { 00:21:59.591 "name": "BaseBdev2", 00:21:59.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.591 "is_configured": false, 00:21:59.591 "data_offset": 0, 00:21:59.591 "data_size": 0 00:21:59.591 }, 00:21:59.591 { 00:21:59.591 
"name": "BaseBdev3", 00:21:59.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.591 "is_configured": false, 00:21:59.591 "data_offset": 0, 00:21:59.591 "data_size": 0 00:21:59.591 }, 00:21:59.591 { 00:21:59.591 "name": "BaseBdev4", 00:21:59.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.591 "is_configured": false, 00:21:59.591 "data_offset": 0, 00:21:59.591 "data_size": 0 00:21:59.591 } 00:21:59.591 ] 00:21:59.591 }' 00:21:59.591 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:59.591 00:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:59.848 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:00.106 [2024-07-25 00:05:55.776386] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:00.106 [2024-07-25 00:05:55.776442] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:22:00.106 00:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:00.364 [2024-07-25 00:05:56.004501] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:00.364 [2024-07-25 00:05:56.006677] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:00.364 [2024-07-25 00:05:56.006752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:00.364 [2024-07-25 00:05:56.006769] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:00.364 [2024-07-25 00:05:56.006785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:00.364 [2024-07-25 00:05:56.006795] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:00.364 [2024-07-25 00:05:56.006824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.364 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.623 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.623 "name": "Existed_Raid", 00:22:00.623 "uuid": "12827508-749f-457c-a4c2-3181fb38118d", 00:22:00.623 "strip_size_kb": 64, 00:22:00.623 "state": "configuring", 00:22:00.623 "raid_level": "concat", 00:22:00.623 "superblock": true, 00:22:00.623 "num_base_bdevs": 4, 00:22:00.623 "num_base_bdevs_discovered": 1, 00:22:00.623 "num_base_bdevs_operational": 4, 00:22:00.623 "base_bdevs_list": [ 00:22:00.623 { 00:22:00.623 "name": "BaseBdev1", 00:22:00.623 "uuid": "d8ab94d4-a934-4e13-9a49-91fb13dea6fb", 00:22:00.623 "is_configured": true, 00:22:00.623 "data_offset": 2048, 00:22:00.623 "data_size": 63488 00:22:00.623 }, 00:22:00.623 { 00:22:00.623 "name": "BaseBdev2", 00:22:00.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.623 "is_configured": false, 00:22:00.623 "data_offset": 0, 00:22:00.623 "data_size": 0 00:22:00.623 }, 00:22:00.623 { 00:22:00.623 "name": "BaseBdev3", 00:22:00.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.623 "is_configured": false, 00:22:00.623 "data_offset": 0, 00:22:00.623 "data_size": 0 00:22:00.623 }, 00:22:00.623 { 00:22:00.623 "name": "BaseBdev4", 00:22:00.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.623 "is_configured": false, 00:22:00.623 "data_offset": 0, 00:22:00.623 "data_size": 0 00:22:00.623 } 00:22:00.623 ] 00:22:00.623 }' 00:22:00.623 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.623 00:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:00.882 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:01.140 [2024-07-25 00:05:56.827410] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:01.140 BaseBdev2 00:22:01.140 00:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:01.140 00:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:01.140 00:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:01.140 00:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:01.140 00:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:01.140 00:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:01.140 00:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:01.399 00:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:22:01.657 [ 00:22:01.657 { 00:22:01.657 "name": "BaseBdev2", 00:22:01.657 "aliases": [ 00:22:01.657 "b03ee533-58fc-47b2-aa78-7081c4f02827" 00:22:01.657 ], 00:22:01.657 "product_name": "Malloc disk", 00:22:01.657 "block_size": 512, 00:22:01.657 "num_blocks": 65536, 00:22:01.657 "uuid": "b03ee533-58fc-47b2-aa78-7081c4f02827", 00:22:01.657 "assigned_rate_limits": { 00:22:01.657 "rw_ios_per_sec": 0, 00:22:01.657 "rw_mbytes_per_sec": 0, 00:22:01.657 "r_mbytes_per_sec": 0, 00:22:01.657 "w_mbytes_per_sec": 0 00:22:01.657 }, 00:22:01.657 "claimed": true, 00:22:01.657 "claim_type": "exclusive_write", 00:22:01.657 "zoned": false, 00:22:01.658 "supported_io_types": { 00:22:01.658 "read": true, 00:22:01.658 "write": true, 00:22:01.658 "unmap": true, 00:22:01.658 "flush": true, 00:22:01.658 "reset": true, 00:22:01.658 "nvme_admin": false, 00:22:01.658 "nvme_io": false, 00:22:01.658 "nvme_io_md": false, 00:22:01.658 "write_zeroes": true, 00:22:01.658 "zcopy": true, 00:22:01.658 "get_zone_info": false, 00:22:01.658 "zone_management": false, 00:22:01.658 "zone_append": false, 00:22:01.658 "compare": false, 00:22:01.658 "compare_and_write": false, 00:22:01.658 "abort": true, 00:22:01.658 "seek_hole": false, 00:22:01.658 "seek_data": false, 00:22:01.658 "copy": true, 00:22:01.658 "nvme_iov_md": false 00:22:01.658 }, 00:22:01.658 "memory_domains": [ 00:22:01.658 { 00:22:01.658 "dma_device_id": "system", 00:22:01.658 "dma_device_type": 1 00:22:01.658 }, 00:22:01.658 { 00:22:01.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.658 "dma_device_type": 2 00:22:01.658 } 00:22:01.658 ], 00:22:01.658 "driver_specific": {} 00:22:01.658 } 00:22:01.658 ] 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.658 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.916 00:05:57 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:01.916 "name": "Existed_Raid", 00:22:01.916 "uuid": "12827508-749f-457c-a4c2-3181fb38118d", 00:22:01.916 "strip_size_kb": 64, 00:22:01.916 "state": "configuring", 00:22:01.916 "raid_level": "concat", 00:22:01.916 "superblock": true, 00:22:01.916 "num_base_bdevs": 4, 00:22:01.916 "num_base_bdevs_discovered": 2, 00:22:01.916 "num_base_bdevs_operational": 4, 00:22:01.916 "base_bdevs_list": [ 00:22:01.916 { 00:22:01.916 "name": "BaseBdev1", 00:22:01.916 "uuid": "d8ab94d4-a934-4e13-9a49-91fb13dea6fb", 00:22:01.916 "is_configured": true, 00:22:01.916 "data_offset": 2048, 00:22:01.917 "data_size": 63488 00:22:01.917 }, 00:22:01.917 { 00:22:01.917 "name": "BaseBdev2", 00:22:01.917 "uuid": "b03ee533-58fc-47b2-aa78-7081c4f02827", 00:22:01.917 "is_configured": true, 00:22:01.917 "data_offset": 2048, 00:22:01.917 "data_size": 63488 00:22:01.917 }, 00:22:01.917 { 00:22:01.917 "name": "BaseBdev3", 00:22:01.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.917 "is_configured": false, 00:22:01.917 "data_offset": 0, 00:22:01.917 "data_size": 0 00:22:01.917 }, 00:22:01.917 { 00:22:01.917 "name": "BaseBdev4", 00:22:01.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.917 "is_configured": false, 00:22:01.917 "data_offset": 0, 00:22:01.917 "data_size": 0 00:22:01.917 } 00:22:01.917 ] 00:22:01.917 }' 00:22:01.917 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.917 00:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.176 00:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:02.434 [2024-07-25 00:05:58.126620] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.434 BaseBdev3 00:22:02.434 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:02.434 00:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:02.434 00:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:02.434 00:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:02.434 00:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:02.434 00:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:02.434 00:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:02.693 00:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:02.952 [ 00:22:02.952 { 00:22:02.952 "name": "BaseBdev3", 00:22:02.952 "aliases": [ 00:22:02.952 "4b5a44af-24b4-4952-97f6-94775d3f2dda" 00:22:02.952 ], 00:22:02.952 "product_name": "Malloc disk", 00:22:02.952 "block_size": 512, 00:22:02.952 "num_blocks": 65536, 00:22:02.952 "uuid": "4b5a44af-24b4-4952-97f6-94775d3f2dda", 00:22:02.952 "assigned_rate_limits": { 00:22:02.952 "rw_ios_per_sec": 0, 00:22:02.952 "rw_mbytes_per_sec": 0, 00:22:02.952 "r_mbytes_per_sec": 0, 00:22:02.952 "w_mbytes_per_sec": 0 00:22:02.952 }, 00:22:02.952 
"claimed": true, 00:22:02.952 "claim_type": "exclusive_write", 00:22:02.952 "zoned": false, 00:22:02.952 "supported_io_types": { 00:22:02.952 "read": true, 00:22:02.952 "write": true, 00:22:02.952 "unmap": true, 00:22:02.952 "flush": true, 00:22:02.952 "reset": true, 00:22:02.952 "nvme_admin": false, 00:22:02.952 "nvme_io": false, 00:22:02.952 "nvme_io_md": false, 00:22:02.952 "write_zeroes": true, 00:22:02.952 "zcopy": true, 00:22:02.952 "get_zone_info": false, 00:22:02.952 "zone_management": false, 00:22:02.952 "zone_append": false, 00:22:02.952 "compare": false, 00:22:02.952 "compare_and_write": false, 00:22:02.952 "abort": true, 00:22:02.952 "seek_hole": false, 00:22:02.952 "seek_data": false, 00:22:02.952 "copy": true, 00:22:02.952 "nvme_iov_md": false 00:22:02.952 }, 00:22:02.952 "memory_domains": [ 00:22:02.952 { 00:22:02.952 "dma_device_id": "system", 00:22:02.952 "dma_device_type": 1 00:22:02.952 }, 00:22:02.952 { 00:22:02.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.952 "dma_device_type": 2 00:22:02.952 } 00:22:02.952 ], 00:22:02.952 "driver_specific": {} 00:22:02.952 } 00:22:02.952 ] 00:22:02.952 00:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:02.952 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:02.952 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:02.952 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:02.953 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:02.953 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:02.953 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:02.953 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:02.953 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:02.953 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.953 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.953 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.953 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.953 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.953 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.212 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.212 "name": "Existed_Raid", 00:22:03.212 "uuid": "12827508-749f-457c-a4c2-3181fb38118d", 00:22:03.212 "strip_size_kb": 64, 00:22:03.212 "state": "configuring", 00:22:03.212 "raid_level": "concat", 00:22:03.212 "superblock": true, 00:22:03.212 "num_base_bdevs": 4, 00:22:03.212 "num_base_bdevs_discovered": 3, 00:22:03.212 "num_base_bdevs_operational": 4, 00:22:03.212 "base_bdevs_list": [ 00:22:03.212 { 00:22:03.212 "name": "BaseBdev1", 00:22:03.212 "uuid": 
"d8ab94d4-a934-4e13-9a49-91fb13dea6fb", 00:22:03.212 "is_configured": true, 00:22:03.212 "data_offset": 2048, 00:22:03.212 "data_size": 63488 00:22:03.212 }, 00:22:03.212 { 00:22:03.212 "name": "BaseBdev2", 00:22:03.212 "uuid": "b03ee533-58fc-47b2-aa78-7081c4f02827", 00:22:03.212 "is_configured": true, 00:22:03.212 "data_offset": 2048, 00:22:03.212 "data_size": 63488 00:22:03.212 }, 00:22:03.212 { 00:22:03.212 "name": "BaseBdev3", 00:22:03.212 "uuid": "4b5a44af-24b4-4952-97f6-94775d3f2dda", 00:22:03.212 "is_configured": true, 00:22:03.212 "data_offset": 2048, 00:22:03.212 "data_size": 63488 00:22:03.212 }, 00:22:03.212 { 00:22:03.212 "name": "BaseBdev4", 00:22:03.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.212 "is_configured": false, 00:22:03.212 "data_offset": 0, 00:22:03.212 "data_size": 0 00:22:03.212 } 00:22:03.212 ] 00:22:03.212 }' 00:22:03.212 00:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.212 00:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.471 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:03.730 [2024-07-25 00:05:59.469180] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:03.730 [2024-07-25 00:05:59.469665] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:22:03.730 [2024-07-25 00:05:59.469877] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:03.730 [2024-07-25 00:05:59.470151] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:22:03.730 [2024-07-25 00:05:59.470687] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:22:03.730 BaseBdev4 00:22:03.730 [2024-07-25 00:05:59.470870] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:22:03.730 [2024-07-25 00:05:59.471287] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.730 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:03.730 00:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:03.730 00:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:03.730 00:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:03.730 00:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:03.730 00:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:03.730 00:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:03.989 00:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:04.248 [ 00:22:04.248 { 00:22:04.248 "name": "BaseBdev4", 00:22:04.248 "aliases": [ 00:22:04.248 "9869c8ee-0b5c-4cef-a99a-795782f33add" 00:22:04.248 ], 00:22:04.248 "product_name": "Malloc disk", 00:22:04.248 "block_size": 512, 00:22:04.248 "num_blocks": 65536, 
00:22:04.248 "uuid": "9869c8ee-0b5c-4cef-a99a-795782f33add", 00:22:04.248 "assigned_rate_limits": { 00:22:04.248 "rw_ios_per_sec": 0, 00:22:04.248 "rw_mbytes_per_sec": 0, 00:22:04.248 "r_mbytes_per_sec": 0, 00:22:04.248 "w_mbytes_per_sec": 0 00:22:04.249 }, 00:22:04.249 "claimed": true, 00:22:04.249 "claim_type": "exclusive_write", 00:22:04.249 "zoned": false, 00:22:04.249 "supported_io_types": { 00:22:04.249 "read": true, 00:22:04.249 "write": true, 00:22:04.249 "unmap": true, 00:22:04.249 "flush": true, 00:22:04.249 "reset": true, 00:22:04.249 "nvme_admin": false, 00:22:04.249 "nvme_io": false, 00:22:04.249 "nvme_io_md": false, 00:22:04.249 "write_zeroes": true, 00:22:04.249 "zcopy": true, 00:22:04.249 "get_zone_info": false, 00:22:04.249 "zone_management": false, 00:22:04.249 "zone_append": false, 00:22:04.249 "compare": false, 00:22:04.249 "compare_and_write": false, 00:22:04.249 "abort": true, 00:22:04.249 "seek_hole": false, 00:22:04.249 "seek_data": false, 00:22:04.249 "copy": true, 00:22:04.249 "nvme_iov_md": false 00:22:04.249 }, 00:22:04.249 "memory_domains": [ 00:22:04.249 { 00:22:04.249 "dma_device_id": "system", 00:22:04.249 "dma_device_type": 1 00:22:04.249 }, 00:22:04.249 { 00:22:04.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.249 "dma_device_type": 2 00:22:04.249 } 00:22:04.249 ], 00:22:04.249 "driver_specific": {} 00:22:04.249 } 00:22:04.249 ] 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.249 00:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.508 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.508 "name": "Existed_Raid", 00:22:04.508 "uuid": "12827508-749f-457c-a4c2-3181fb38118d", 00:22:04.508 "strip_size_kb": 64, 00:22:04.508 "state": "online", 00:22:04.508 "raid_level": "concat", 00:22:04.508 "superblock": 
true, 00:22:04.508 "num_base_bdevs": 4, 00:22:04.508 "num_base_bdevs_discovered": 4, 00:22:04.508 "num_base_bdevs_operational": 4, 00:22:04.508 "base_bdevs_list": [ 00:22:04.508 { 00:22:04.508 "name": "BaseBdev1", 00:22:04.508 "uuid": "d8ab94d4-a934-4e13-9a49-91fb13dea6fb", 00:22:04.508 "is_configured": true, 00:22:04.508 "data_offset": 2048, 00:22:04.508 "data_size": 63488 00:22:04.508 }, 00:22:04.508 { 00:22:04.508 "name": "BaseBdev2", 00:22:04.508 "uuid": "b03ee533-58fc-47b2-aa78-7081c4f02827", 00:22:04.508 "is_configured": true, 00:22:04.508 "data_offset": 2048, 00:22:04.508 "data_size": 63488 00:22:04.508 }, 00:22:04.508 { 00:22:04.508 "name": "BaseBdev3", 00:22:04.508 "uuid": "4b5a44af-24b4-4952-97f6-94775d3f2dda", 00:22:04.508 "is_configured": true, 00:22:04.508 "data_offset": 2048, 00:22:04.508 "data_size": 63488 00:22:04.508 }, 00:22:04.508 { 00:22:04.508 "name": "BaseBdev4", 00:22:04.508 "uuid": "9869c8ee-0b5c-4cef-a99a-795782f33add", 00:22:04.508 "is_configured": true, 00:22:04.508 "data_offset": 2048, 00:22:04.508 "data_size": 63488 00:22:04.508 } 00:22:04.508 ] 00:22:04.508 }' 00:22:04.508 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.508 00:06:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.767 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:04.767 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:04.767 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:04.767 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:04.767 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:04.767 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:04.767 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:04.767 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:05.026 [2024-07-25 00:06:00.770080] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:05.026 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:05.026 "name": "Existed_Raid", 00:22:05.026 "aliases": [ 00:22:05.026 "12827508-749f-457c-a4c2-3181fb38118d" 00:22:05.026 ], 00:22:05.026 "product_name": "Raid Volume", 00:22:05.026 "block_size": 512, 00:22:05.026 "num_blocks": 253952, 00:22:05.026 "uuid": "12827508-749f-457c-a4c2-3181fb38118d", 00:22:05.026 "assigned_rate_limits": { 00:22:05.026 "rw_ios_per_sec": 0, 00:22:05.026 "rw_mbytes_per_sec": 0, 00:22:05.026 "r_mbytes_per_sec": 0, 00:22:05.026 "w_mbytes_per_sec": 0 00:22:05.026 }, 00:22:05.026 "claimed": false, 00:22:05.026 "zoned": false, 00:22:05.026 "supported_io_types": { 00:22:05.026 "read": true, 00:22:05.026 "write": true, 00:22:05.026 "unmap": true, 00:22:05.026 "flush": true, 00:22:05.026 "reset": true, 00:22:05.026 "nvme_admin": false, 00:22:05.026 "nvme_io": false, 00:22:05.026 "nvme_io_md": false, 00:22:05.026 "write_zeroes": true, 00:22:05.026 "zcopy": false, 00:22:05.026 "get_zone_info": false, 00:22:05.026 "zone_management": false, 00:22:05.026 "zone_append": false, 00:22:05.026 
"compare": false, 00:22:05.026 "compare_and_write": false, 00:22:05.026 "abort": false, 00:22:05.026 "seek_hole": false, 00:22:05.026 "seek_data": false, 00:22:05.026 "copy": false, 00:22:05.026 "nvme_iov_md": false 00:22:05.026 }, 00:22:05.026 "memory_domains": [ 00:22:05.026 { 00:22:05.026 "dma_device_id": "system", 00:22:05.026 "dma_device_type": 1 00:22:05.026 }, 00:22:05.026 { 00:22:05.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.026 "dma_device_type": 2 00:22:05.026 }, 00:22:05.026 { 00:22:05.026 "dma_device_id": "system", 00:22:05.026 "dma_device_type": 1 00:22:05.026 }, 00:22:05.026 { 00:22:05.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.026 "dma_device_type": 2 00:22:05.026 }, 00:22:05.026 { 00:22:05.026 "dma_device_id": "system", 00:22:05.026 "dma_device_type": 1 00:22:05.026 }, 00:22:05.026 { 00:22:05.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.026 "dma_device_type": 2 00:22:05.026 }, 00:22:05.026 { 00:22:05.026 "dma_device_id": "system", 00:22:05.026 "dma_device_type": 1 00:22:05.026 }, 00:22:05.026 { 00:22:05.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.026 "dma_device_type": 2 00:22:05.026 } 00:22:05.026 ], 00:22:05.026 "driver_specific": { 00:22:05.026 "raid": { 00:22:05.026 "uuid": "12827508-749f-457c-a4c2-3181fb38118d", 00:22:05.026 "strip_size_kb": 64, 00:22:05.026 "state": "online", 00:22:05.026 "raid_level": "concat", 00:22:05.026 "superblock": true, 00:22:05.026 "num_base_bdevs": 4, 00:22:05.026 "num_base_bdevs_discovered": 4, 00:22:05.026 "num_base_bdevs_operational": 4, 00:22:05.026 "base_bdevs_list": [ 00:22:05.026 { 00:22:05.026 "name": "BaseBdev1", 00:22:05.026 "uuid": "d8ab94d4-a934-4e13-9a49-91fb13dea6fb", 00:22:05.026 "is_configured": true, 00:22:05.026 "data_offset": 2048, 00:22:05.026 "data_size": 63488 00:22:05.026 }, 00:22:05.026 { 00:22:05.026 "name": "BaseBdev2", 00:22:05.026 "uuid": "b03ee533-58fc-47b2-aa78-7081c4f02827", 00:22:05.026 "is_configured": true, 00:22:05.026 "data_offset": 2048, 00:22:05.026 "data_size": 63488 00:22:05.026 }, 00:22:05.026 { 00:22:05.026 "name": "BaseBdev3", 00:22:05.026 "uuid": "4b5a44af-24b4-4952-97f6-94775d3f2dda", 00:22:05.026 "is_configured": true, 00:22:05.026 "data_offset": 2048, 00:22:05.026 "data_size": 63488 00:22:05.026 }, 00:22:05.026 { 00:22:05.026 "name": "BaseBdev4", 00:22:05.026 "uuid": "9869c8ee-0b5c-4cef-a99a-795782f33add", 00:22:05.026 "is_configured": true, 00:22:05.026 "data_offset": 2048, 00:22:05.026 "data_size": 63488 00:22:05.026 } 00:22:05.026 ] 00:22:05.026 } 00:22:05.026 } 00:22:05.026 }' 00:22:05.026 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:05.026 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:05.026 BaseBdev2 00:22:05.026 BaseBdev3 00:22:05.026 BaseBdev4' 00:22:05.026 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.026 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:05.026 00:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.285 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.285 "name": "BaseBdev1", 00:22:05.285 "aliases": [ 00:22:05.285 "d8ab94d4-a934-4e13-9a49-91fb13dea6fb" 
00:22:05.285 ], 00:22:05.285 "product_name": "Malloc disk", 00:22:05.285 "block_size": 512, 00:22:05.285 "num_blocks": 65536, 00:22:05.285 "uuid": "d8ab94d4-a934-4e13-9a49-91fb13dea6fb", 00:22:05.285 "assigned_rate_limits": { 00:22:05.285 "rw_ios_per_sec": 0, 00:22:05.285 "rw_mbytes_per_sec": 0, 00:22:05.285 "r_mbytes_per_sec": 0, 00:22:05.285 "w_mbytes_per_sec": 0 00:22:05.286 }, 00:22:05.286 "claimed": true, 00:22:05.286 "claim_type": "exclusive_write", 00:22:05.286 "zoned": false, 00:22:05.286 "supported_io_types": { 00:22:05.286 "read": true, 00:22:05.286 "write": true, 00:22:05.286 "unmap": true, 00:22:05.286 "flush": true, 00:22:05.286 "reset": true, 00:22:05.286 "nvme_admin": false, 00:22:05.286 "nvme_io": false, 00:22:05.286 "nvme_io_md": false, 00:22:05.286 "write_zeroes": true, 00:22:05.286 "zcopy": true, 00:22:05.286 "get_zone_info": false, 00:22:05.286 "zone_management": false, 00:22:05.286 "zone_append": false, 00:22:05.286 "compare": false, 00:22:05.286 "compare_and_write": false, 00:22:05.286 "abort": true, 00:22:05.286 "seek_hole": false, 00:22:05.286 "seek_data": false, 00:22:05.286 "copy": true, 00:22:05.286 "nvme_iov_md": false 00:22:05.286 }, 00:22:05.286 "memory_domains": [ 00:22:05.286 { 00:22:05.286 "dma_device_id": "system", 00:22:05.286 "dma_device_type": 1 00:22:05.286 }, 00:22:05.286 { 00:22:05.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.286 "dma_device_type": 2 00:22:05.286 } 00:22:05.286 ], 00:22:05.286 "driver_specific": {} 00:22:05.286 }' 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.286 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:05.544 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.544 "name": "BaseBdev2", 00:22:05.544 "aliases": [ 00:22:05.544 "b03ee533-58fc-47b2-aa78-7081c4f02827" 00:22:05.544 ], 00:22:05.544 "product_name": "Malloc disk", 00:22:05.544 "block_size": 512, 00:22:05.544 "num_blocks": 65536, 00:22:05.544 "uuid": 
"b03ee533-58fc-47b2-aa78-7081c4f02827", 00:22:05.544 "assigned_rate_limits": { 00:22:05.544 "rw_ios_per_sec": 0, 00:22:05.544 "rw_mbytes_per_sec": 0, 00:22:05.544 "r_mbytes_per_sec": 0, 00:22:05.544 "w_mbytes_per_sec": 0 00:22:05.544 }, 00:22:05.544 "claimed": true, 00:22:05.544 "claim_type": "exclusive_write", 00:22:05.544 "zoned": false, 00:22:05.544 "supported_io_types": { 00:22:05.544 "read": true, 00:22:05.544 "write": true, 00:22:05.544 "unmap": true, 00:22:05.544 "flush": true, 00:22:05.544 "reset": true, 00:22:05.544 "nvme_admin": false, 00:22:05.544 "nvme_io": false, 00:22:05.544 "nvme_io_md": false, 00:22:05.544 "write_zeroes": true, 00:22:05.544 "zcopy": true, 00:22:05.544 "get_zone_info": false, 00:22:05.544 "zone_management": false, 00:22:05.544 "zone_append": false, 00:22:05.544 "compare": false, 00:22:05.544 "compare_and_write": false, 00:22:05.544 "abort": true, 00:22:05.544 "seek_hole": false, 00:22:05.544 "seek_data": false, 00:22:05.544 "copy": true, 00:22:05.544 "nvme_iov_md": false 00:22:05.544 }, 00:22:05.544 "memory_domains": [ 00:22:05.544 { 00:22:05.544 "dma_device_id": "system", 00:22:05.544 "dma_device_type": 1 00:22:05.544 }, 00:22:05.544 { 00:22:05.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.544 "dma_device_type": 2 00:22:05.544 } 00:22:05.544 ], 00:22:05.544 "driver_specific": {} 00:22:05.544 }' 00:22:05.544 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.544 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.544 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:05.544 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.544 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.544 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:05.544 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.812 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.812 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:05.812 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.812 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.812 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:05.812 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.812 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:05.812 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:06.092 "name": "BaseBdev3", 00:22:06.092 "aliases": [ 00:22:06.092 "4b5a44af-24b4-4952-97f6-94775d3f2dda" 00:22:06.092 ], 00:22:06.092 "product_name": "Malloc disk", 00:22:06.092 "block_size": 512, 00:22:06.092 "num_blocks": 65536, 00:22:06.092 "uuid": "4b5a44af-24b4-4952-97f6-94775d3f2dda", 00:22:06.092 "assigned_rate_limits": { 00:22:06.092 "rw_ios_per_sec": 0, 00:22:06.092 "rw_mbytes_per_sec": 0, 
00:22:06.092 "r_mbytes_per_sec": 0, 00:22:06.092 "w_mbytes_per_sec": 0 00:22:06.092 }, 00:22:06.092 "claimed": true, 00:22:06.092 "claim_type": "exclusive_write", 00:22:06.092 "zoned": false, 00:22:06.092 "supported_io_types": { 00:22:06.092 "read": true, 00:22:06.092 "write": true, 00:22:06.092 "unmap": true, 00:22:06.092 "flush": true, 00:22:06.092 "reset": true, 00:22:06.092 "nvme_admin": false, 00:22:06.092 "nvme_io": false, 00:22:06.092 "nvme_io_md": false, 00:22:06.092 "write_zeroes": true, 00:22:06.092 "zcopy": true, 00:22:06.092 "get_zone_info": false, 00:22:06.092 "zone_management": false, 00:22:06.092 "zone_append": false, 00:22:06.092 "compare": false, 00:22:06.092 "compare_and_write": false, 00:22:06.092 "abort": true, 00:22:06.092 "seek_hole": false, 00:22:06.092 "seek_data": false, 00:22:06.092 "copy": true, 00:22:06.092 "nvme_iov_md": false 00:22:06.092 }, 00:22:06.092 "memory_domains": [ 00:22:06.092 { 00:22:06.092 "dma_device_id": "system", 00:22:06.092 "dma_device_type": 1 00:22:06.092 }, 00:22:06.092 { 00:22:06.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.092 "dma_device_type": 2 00:22:06.092 } 00:22:06.092 ], 00:22:06.092 "driver_specific": {} 00:22:06.092 }' 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:06.092 00:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:06.351 "name": "BaseBdev4", 00:22:06.351 "aliases": [ 00:22:06.351 "9869c8ee-0b5c-4cef-a99a-795782f33add" 00:22:06.351 ], 00:22:06.351 "product_name": "Malloc disk", 00:22:06.351 "block_size": 512, 00:22:06.351 "num_blocks": 65536, 00:22:06.351 "uuid": "9869c8ee-0b5c-4cef-a99a-795782f33add", 00:22:06.351 "assigned_rate_limits": { 00:22:06.351 "rw_ios_per_sec": 0, 00:22:06.351 "rw_mbytes_per_sec": 0, 00:22:06.351 "r_mbytes_per_sec": 0, 00:22:06.351 "w_mbytes_per_sec": 0 00:22:06.351 }, 00:22:06.351 "claimed": true, 00:22:06.351 "claim_type": 
"exclusive_write", 00:22:06.351 "zoned": false, 00:22:06.351 "supported_io_types": { 00:22:06.351 "read": true, 00:22:06.351 "write": true, 00:22:06.351 "unmap": true, 00:22:06.351 "flush": true, 00:22:06.351 "reset": true, 00:22:06.351 "nvme_admin": false, 00:22:06.351 "nvme_io": false, 00:22:06.351 "nvme_io_md": false, 00:22:06.351 "write_zeroes": true, 00:22:06.351 "zcopy": true, 00:22:06.351 "get_zone_info": false, 00:22:06.351 "zone_management": false, 00:22:06.351 "zone_append": false, 00:22:06.351 "compare": false, 00:22:06.351 "compare_and_write": false, 00:22:06.351 "abort": true, 00:22:06.351 "seek_hole": false, 00:22:06.351 "seek_data": false, 00:22:06.351 "copy": true, 00:22:06.351 "nvme_iov_md": false 00:22:06.351 }, 00:22:06.351 "memory_domains": [ 00:22:06.351 { 00:22:06.351 "dma_device_id": "system", 00:22:06.351 "dma_device_type": 1 00:22:06.351 }, 00:22:06.351 { 00:22:06.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.351 "dma_device_type": 2 00:22:06.351 } 00:22:06.351 ], 00:22:06.351 "driver_specific": {} 00:22:06.351 }' 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:06.351 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:06.610 [2024-07-25 00:06:02.354268] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:06.611 [2024-07-25 00:06:02.354314] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:06.611 [2024-07-25 00:06:02.354378] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 
64 3 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.611 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.178 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.178 "name": "Existed_Raid", 00:22:07.179 "uuid": "12827508-749f-457c-a4c2-3181fb38118d", 00:22:07.179 "strip_size_kb": 64, 00:22:07.179 "state": "offline", 00:22:07.179 "raid_level": "concat", 00:22:07.179 "superblock": true, 00:22:07.179 "num_base_bdevs": 4, 00:22:07.179 "num_base_bdevs_discovered": 3, 00:22:07.179 "num_base_bdevs_operational": 3, 00:22:07.179 "base_bdevs_list": [ 00:22:07.179 { 00:22:07.179 "name": null, 00:22:07.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.179 "is_configured": false, 00:22:07.179 "data_offset": 2048, 00:22:07.179 "data_size": 63488 00:22:07.179 }, 00:22:07.179 { 00:22:07.179 "name": "BaseBdev2", 00:22:07.179 "uuid": "b03ee533-58fc-47b2-aa78-7081c4f02827", 00:22:07.179 "is_configured": true, 00:22:07.179 "data_offset": 2048, 00:22:07.179 "data_size": 63488 00:22:07.179 }, 00:22:07.179 { 00:22:07.179 "name": "BaseBdev3", 00:22:07.179 "uuid": "4b5a44af-24b4-4952-97f6-94775d3f2dda", 00:22:07.179 "is_configured": true, 00:22:07.179 "data_offset": 2048, 00:22:07.179 "data_size": 63488 00:22:07.179 }, 00:22:07.179 { 00:22:07.179 "name": "BaseBdev4", 00:22:07.179 "uuid": "9869c8ee-0b5c-4cef-a99a-795782f33add", 00:22:07.179 "is_configured": true, 00:22:07.179 "data_offset": 2048, 00:22:07.179 "data_size": 63488 00:22:07.179 } 00:22:07.179 ] 00:22:07.179 }' 00:22:07.179 00:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.179 00:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.438 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:07.438 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:07.438 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.438 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:07.695 00:06:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:07.695 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:07.695 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:07.952 [2024-07-25 00:06:03.564479] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:07.952 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:07.952 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:07.952 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.952 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:08.210 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:08.210 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.210 00:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:08.467 [2024-07-25 00:06:04.090105] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:08.467 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:08.467 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:08.467 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.467 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:08.725 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:08.725 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.725 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:08.983 [2024-07-25 00:06:04.647491] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:08.983 [2024-07-25 00:06:04.647739] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:22:08.983 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:08.983 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:08.983 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:08.983 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.241 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:09.241 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:09.241 00:06:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:09.241 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:09.241 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:09.241 00:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:09.499 BaseBdev2 00:22:09.499 00:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:09.499 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:09.499 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:09.499 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:09.499 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:09.499 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:09.499 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:09.757 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:10.015 [ 00:22:10.015 { 00:22:10.015 "name": "BaseBdev2", 00:22:10.015 "aliases": [ 00:22:10.015 "410a2630-2565-4bd5-b6d7-bac9b538d720" 00:22:10.015 ], 00:22:10.015 "product_name": "Malloc disk", 00:22:10.015 "block_size": 512, 00:22:10.015 "num_blocks": 65536, 00:22:10.015 "uuid": "410a2630-2565-4bd5-b6d7-bac9b538d720", 00:22:10.015 "assigned_rate_limits": { 00:22:10.015 "rw_ios_per_sec": 0, 00:22:10.015 "rw_mbytes_per_sec": 0, 00:22:10.015 "r_mbytes_per_sec": 0, 00:22:10.015 "w_mbytes_per_sec": 0 00:22:10.015 }, 00:22:10.015 "claimed": false, 00:22:10.015 "zoned": false, 00:22:10.015 "supported_io_types": { 00:22:10.015 "read": true, 00:22:10.015 "write": true, 00:22:10.015 "unmap": true, 00:22:10.015 "flush": true, 00:22:10.015 "reset": true, 00:22:10.015 "nvme_admin": false, 00:22:10.015 "nvme_io": false, 00:22:10.015 "nvme_io_md": false, 00:22:10.015 "write_zeroes": true, 00:22:10.015 "zcopy": true, 00:22:10.015 "get_zone_info": false, 00:22:10.015 "zone_management": false, 00:22:10.015 "zone_append": false, 00:22:10.015 "compare": false, 00:22:10.015 "compare_and_write": false, 00:22:10.015 "abort": true, 00:22:10.015 "seek_hole": false, 00:22:10.015 "seek_data": false, 00:22:10.015 "copy": true, 00:22:10.015 "nvme_iov_md": false 00:22:10.015 }, 00:22:10.015 "memory_domains": [ 00:22:10.015 { 00:22:10.015 "dma_device_id": "system", 00:22:10.015 "dma_device_type": 1 00:22:10.015 }, 00:22:10.015 { 00:22:10.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.015 "dma_device_type": 2 00:22:10.015 } 00:22:10.015 ], 00:22:10.015 "driver_specific": {} 00:22:10.015 } 00:22:10.015 ] 00:22:10.015 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:10.015 00:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:10.015 00:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
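[editor's note] The loop traced above rebuilds the pool of base bdevs one at a time: bdev_malloc_create, then waitforbdev, which issues bdev_wait_for_examine and polls bdev_get_bdevs with a 2000 ms timeout. A minimal sketch of that create-and-wait pattern, assuming the same RPC socket and repo path seen in this log; make_base_bdev is a hypothetical helper name, not part of bdev_raid.sh:

    # Same rpc.py invocation the trace uses for every call in this test.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    make_base_bdev() {    # hypothetical helper; the suite's own version is waitforbdev in autotest_common.sh
        local name=$1
        $rpc bdev_malloc_create 32 512 -b "$name"    # 32 MiB volume, 512 B blocks -> num_blocks 65536
        $rpc bdev_wait_for_examine                   # let registered examine callbacks settle first
        $rpc bdev_get_bdevs -b "$name" -t 2000       # block up to 2000 ms until the bdev is visible
    }

    for name in BaseBdev2 BaseBdev3 BaseBdev4; do
        make_base_bdev "$name"
    done

Each Malloc disk created this way reports num_blocks 65536 at block_size 512, i.e. the 32 MiB x 512 B geometry requested by "bdev_malloc_create 32 512", which matches the bdev_get_bdevs dumps captured above and below.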
00:22:10.015 00:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:10.273 BaseBdev3 00:22:10.273 00:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:10.273 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:10.273 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:10.273 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:10.273 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:10.273 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:10.273 00:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:10.531 00:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:10.531 [ 00:22:10.531 { 00:22:10.531 "name": "BaseBdev3", 00:22:10.531 "aliases": [ 00:22:10.531 "52ff38f3-4876-4da9-b81c-0e513577632a" 00:22:10.531 ], 00:22:10.531 "product_name": "Malloc disk", 00:22:10.531 "block_size": 512, 00:22:10.531 "num_blocks": 65536, 00:22:10.531 "uuid": "52ff38f3-4876-4da9-b81c-0e513577632a", 00:22:10.531 "assigned_rate_limits": { 00:22:10.531 "rw_ios_per_sec": 0, 00:22:10.531 "rw_mbytes_per_sec": 0, 00:22:10.531 "r_mbytes_per_sec": 0, 00:22:10.531 "w_mbytes_per_sec": 0 00:22:10.531 }, 00:22:10.531 "claimed": false, 00:22:10.531 "zoned": false, 00:22:10.531 "supported_io_types": { 00:22:10.531 "read": true, 00:22:10.531 "write": true, 00:22:10.531 "unmap": true, 00:22:10.531 "flush": true, 00:22:10.531 "reset": true, 00:22:10.531 "nvme_admin": false, 00:22:10.531 "nvme_io": false, 00:22:10.531 "nvme_io_md": false, 00:22:10.531 "write_zeroes": true, 00:22:10.531 "zcopy": true, 00:22:10.531 "get_zone_info": false, 00:22:10.531 "zone_management": false, 00:22:10.531 "zone_append": false, 00:22:10.531 "compare": false, 00:22:10.531 "compare_and_write": false, 00:22:10.531 "abort": true, 00:22:10.531 "seek_hole": false, 00:22:10.531 "seek_data": false, 00:22:10.531 "copy": true, 00:22:10.531 "nvme_iov_md": false 00:22:10.531 }, 00:22:10.531 "memory_domains": [ 00:22:10.531 { 00:22:10.531 "dma_device_id": "system", 00:22:10.531 "dma_device_type": 1 00:22:10.531 }, 00:22:10.531 { 00:22:10.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.531 "dma_device_type": 2 00:22:10.531 } 00:22:10.531 ], 00:22:10.531 "driver_specific": {} 00:22:10.531 } 00:22:10.531 ] 00:22:10.531 00:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:10.531 00:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:10.531 00:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:10.531 00:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:10.789 BaseBdev4 00:22:10.789 00:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- 
# waitforbdev BaseBdev4 00:22:10.789 00:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:10.789 00:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:10.789 00:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:10.789 00:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:10.789 00:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:10.789 00:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:11.046 00:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:11.305 [ 00:22:11.305 { 00:22:11.305 "name": "BaseBdev4", 00:22:11.305 "aliases": [ 00:22:11.305 "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e" 00:22:11.305 ], 00:22:11.305 "product_name": "Malloc disk", 00:22:11.305 "block_size": 512, 00:22:11.305 "num_blocks": 65536, 00:22:11.305 "uuid": "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e", 00:22:11.305 "assigned_rate_limits": { 00:22:11.305 "rw_ios_per_sec": 0, 00:22:11.305 "rw_mbytes_per_sec": 0, 00:22:11.305 "r_mbytes_per_sec": 0, 00:22:11.305 "w_mbytes_per_sec": 0 00:22:11.305 }, 00:22:11.305 "claimed": false, 00:22:11.305 "zoned": false, 00:22:11.305 "supported_io_types": { 00:22:11.305 "read": true, 00:22:11.305 "write": true, 00:22:11.305 "unmap": true, 00:22:11.305 "flush": true, 00:22:11.305 "reset": true, 00:22:11.305 "nvme_admin": false, 00:22:11.305 "nvme_io": false, 00:22:11.305 "nvme_io_md": false, 00:22:11.305 "write_zeroes": true, 00:22:11.305 "zcopy": true, 00:22:11.305 "get_zone_info": false, 00:22:11.305 "zone_management": false, 00:22:11.305 "zone_append": false, 00:22:11.305 "compare": false, 00:22:11.305 "compare_and_write": false, 00:22:11.305 "abort": true, 00:22:11.305 "seek_hole": false, 00:22:11.305 "seek_data": false, 00:22:11.305 "copy": true, 00:22:11.305 "nvme_iov_md": false 00:22:11.305 }, 00:22:11.305 "memory_domains": [ 00:22:11.305 { 00:22:11.305 "dma_device_id": "system", 00:22:11.305 "dma_device_type": 1 00:22:11.305 }, 00:22:11.305 { 00:22:11.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.305 "dma_device_type": 2 00:22:11.305 } 00:22:11.305 ], 00:22:11.305 "driver_specific": {} 00:22:11.305 } 00:22:11.305 ] 00:22:11.305 00:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:11.305 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:11.305 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:11.305 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:11.563 [2024-07-25 00:06:07.267422] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:11.563 [2024-07-25 00:06:07.267729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:11.563 [2024-07-25 00:06:07.267795] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:22:11.563 [2024-07-25 00:06:07.270040] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:11.563 [2024-07-25 00:06:07.270107] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.563 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.822 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.822 "name": "Existed_Raid", 00:22:11.822 "uuid": "6f84d303-2e1c-42ac-9833-3301e88cdd44", 00:22:11.822 "strip_size_kb": 64, 00:22:11.822 "state": "configuring", 00:22:11.822 "raid_level": "concat", 00:22:11.822 "superblock": true, 00:22:11.822 "num_base_bdevs": 4, 00:22:11.822 "num_base_bdevs_discovered": 3, 00:22:11.822 "num_base_bdevs_operational": 4, 00:22:11.822 "base_bdevs_list": [ 00:22:11.822 { 00:22:11.822 "name": "BaseBdev1", 00:22:11.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.822 "is_configured": false, 00:22:11.822 "data_offset": 0, 00:22:11.822 "data_size": 0 00:22:11.822 }, 00:22:11.822 { 00:22:11.822 "name": "BaseBdev2", 00:22:11.822 "uuid": "410a2630-2565-4bd5-b6d7-bac9b538d720", 00:22:11.822 "is_configured": true, 00:22:11.822 "data_offset": 2048, 00:22:11.822 "data_size": 63488 00:22:11.822 }, 00:22:11.822 { 00:22:11.822 "name": "BaseBdev3", 00:22:11.822 "uuid": "52ff38f3-4876-4da9-b81c-0e513577632a", 00:22:11.822 "is_configured": true, 00:22:11.822 "data_offset": 2048, 00:22:11.822 "data_size": 63488 00:22:11.822 }, 00:22:11.822 { 00:22:11.822 "name": "BaseBdev4", 00:22:11.822 "uuid": "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e", 00:22:11.822 "is_configured": true, 00:22:11.822 "data_offset": 2048, 00:22:11.822 "data_size": 63488 00:22:11.822 } 00:22:11.822 ] 00:22:11.822 }' 00:22:11.822 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.822 00:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.081 00:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:12.341 [2024-07-25 00:06:08.043547] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.341 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.600 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.600 "name": "Existed_Raid", 00:22:12.600 "uuid": "6f84d303-2e1c-42ac-9833-3301e88cdd44", 00:22:12.600 "strip_size_kb": 64, 00:22:12.600 "state": "configuring", 00:22:12.600 "raid_level": "concat", 00:22:12.600 "superblock": true, 00:22:12.600 "num_base_bdevs": 4, 00:22:12.600 "num_base_bdevs_discovered": 2, 00:22:12.600 "num_base_bdevs_operational": 4, 00:22:12.600 "base_bdevs_list": [ 00:22:12.600 { 00:22:12.600 "name": "BaseBdev1", 00:22:12.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.600 "is_configured": false, 00:22:12.600 "data_offset": 0, 00:22:12.600 "data_size": 0 00:22:12.600 }, 00:22:12.600 { 00:22:12.600 "name": null, 00:22:12.600 "uuid": "410a2630-2565-4bd5-b6d7-bac9b538d720", 00:22:12.600 "is_configured": false, 00:22:12.600 "data_offset": 2048, 00:22:12.600 "data_size": 63488 00:22:12.600 }, 00:22:12.600 { 00:22:12.600 "name": "BaseBdev3", 00:22:12.600 "uuid": "52ff38f3-4876-4da9-b81c-0e513577632a", 00:22:12.600 "is_configured": true, 00:22:12.600 "data_offset": 2048, 00:22:12.600 "data_size": 63488 00:22:12.600 }, 00:22:12.600 { 00:22:12.600 "name": "BaseBdev4", 00:22:12.600 "uuid": "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e", 00:22:12.600 "is_configured": true, 00:22:12.600 "data_offset": 2048, 00:22:12.600 "data_size": 63488 00:22:12.600 } 00:22:12.600 ] 00:22:12.600 }' 00:22:12.600 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.600 00:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.859 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:22:12.859 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:13.118 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:13.118 00:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:13.377 [2024-07-25 00:06:09.130584] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:13.377 BaseBdev1 00:22:13.377 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:13.377 00:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:13.377 00:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:13.377 00:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:13.377 00:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:13.377 00:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:13.377 00:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:13.636 00:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:13.895 [ 00:22:13.895 { 00:22:13.895 "name": "BaseBdev1", 00:22:13.895 "aliases": [ 00:22:13.895 "9b8ce669-e010-4371-8a56-de914246a907" 00:22:13.895 ], 00:22:13.895 "product_name": "Malloc disk", 00:22:13.895 "block_size": 512, 00:22:13.895 "num_blocks": 65536, 00:22:13.895 "uuid": "9b8ce669-e010-4371-8a56-de914246a907", 00:22:13.895 "assigned_rate_limits": { 00:22:13.895 "rw_ios_per_sec": 0, 00:22:13.895 "rw_mbytes_per_sec": 0, 00:22:13.895 "r_mbytes_per_sec": 0, 00:22:13.895 "w_mbytes_per_sec": 0 00:22:13.895 }, 00:22:13.895 "claimed": true, 00:22:13.895 "claim_type": "exclusive_write", 00:22:13.895 "zoned": false, 00:22:13.895 "supported_io_types": { 00:22:13.895 "read": true, 00:22:13.895 "write": true, 00:22:13.895 "unmap": true, 00:22:13.895 "flush": true, 00:22:13.895 "reset": true, 00:22:13.895 "nvme_admin": false, 00:22:13.895 "nvme_io": false, 00:22:13.895 "nvme_io_md": false, 00:22:13.895 "write_zeroes": true, 00:22:13.895 "zcopy": true, 00:22:13.895 "get_zone_info": false, 00:22:13.895 "zone_management": false, 00:22:13.895 "zone_append": false, 00:22:13.895 "compare": false, 00:22:13.895 "compare_and_write": false, 00:22:13.895 "abort": true, 00:22:13.895 "seek_hole": false, 00:22:13.895 "seek_data": false, 00:22:13.895 "copy": true, 00:22:13.895 "nvme_iov_md": false 00:22:13.895 }, 00:22:13.895 "memory_domains": [ 00:22:13.895 { 00:22:13.895 "dma_device_id": "system", 00:22:13.895 "dma_device_type": 1 00:22:13.895 }, 00:22:13.895 { 00:22:13.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.895 "dma_device_type": 2 00:22:13.895 } 00:22:13.895 ], 00:22:13.895 "driver_specific": {} 00:22:13.895 } 00:22:13.895 ] 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.895 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.155 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.155 "name": "Existed_Raid", 00:22:14.155 "uuid": "6f84d303-2e1c-42ac-9833-3301e88cdd44", 00:22:14.155 "strip_size_kb": 64, 00:22:14.155 "state": "configuring", 00:22:14.155 "raid_level": "concat", 00:22:14.155 "superblock": true, 00:22:14.155 "num_base_bdevs": 4, 00:22:14.155 "num_base_bdevs_discovered": 3, 00:22:14.155 "num_base_bdevs_operational": 4, 00:22:14.155 "base_bdevs_list": [ 00:22:14.155 { 00:22:14.155 "name": "BaseBdev1", 00:22:14.155 "uuid": "9b8ce669-e010-4371-8a56-de914246a907", 00:22:14.155 "is_configured": true, 00:22:14.155 "data_offset": 2048, 00:22:14.155 "data_size": 63488 00:22:14.155 }, 00:22:14.155 { 00:22:14.155 "name": null, 00:22:14.155 "uuid": "410a2630-2565-4bd5-b6d7-bac9b538d720", 00:22:14.155 "is_configured": false, 00:22:14.155 "data_offset": 2048, 00:22:14.155 "data_size": 63488 00:22:14.155 }, 00:22:14.155 { 00:22:14.155 "name": "BaseBdev3", 00:22:14.155 "uuid": "52ff38f3-4876-4da9-b81c-0e513577632a", 00:22:14.155 "is_configured": true, 00:22:14.155 "data_offset": 2048, 00:22:14.155 "data_size": 63488 00:22:14.155 }, 00:22:14.155 { 00:22:14.155 "name": "BaseBdev4", 00:22:14.155 "uuid": "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e", 00:22:14.155 "is_configured": true, 00:22:14.155 "data_offset": 2048, 00:22:14.155 "data_size": 63488 00:22:14.155 } 00:22:14.155 ] 00:22:14.155 }' 00:22:14.155 00:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.155 00:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.414 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.414 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:14.691 00:06:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:14.691 [2024-07-25 00:06:10.503149] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.691 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.961 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.961 "name": "Existed_Raid", 00:22:14.961 "uuid": "6f84d303-2e1c-42ac-9833-3301e88cdd44", 00:22:14.961 "strip_size_kb": 64, 00:22:14.961 "state": "configuring", 00:22:14.961 "raid_level": "concat", 00:22:14.961 "superblock": true, 00:22:14.961 "num_base_bdevs": 4, 00:22:14.961 "num_base_bdevs_discovered": 2, 00:22:14.961 "num_base_bdevs_operational": 4, 00:22:14.961 "base_bdevs_list": [ 00:22:14.961 { 00:22:14.961 "name": "BaseBdev1", 00:22:14.961 "uuid": "9b8ce669-e010-4371-8a56-de914246a907", 00:22:14.961 "is_configured": true, 00:22:14.961 "data_offset": 2048, 00:22:14.961 "data_size": 63488 00:22:14.961 }, 00:22:14.961 { 00:22:14.961 "name": null, 00:22:14.961 "uuid": "410a2630-2565-4bd5-b6d7-bac9b538d720", 00:22:14.961 "is_configured": false, 00:22:14.961 "data_offset": 2048, 00:22:14.961 "data_size": 63488 00:22:14.961 }, 00:22:14.961 { 00:22:14.961 "name": null, 00:22:14.961 "uuid": "52ff38f3-4876-4da9-b81c-0e513577632a", 00:22:14.961 "is_configured": false, 00:22:14.961 "data_offset": 2048, 00:22:14.961 "data_size": 63488 00:22:14.961 }, 00:22:14.961 { 00:22:14.961 "name": "BaseBdev4", 00:22:14.961 "uuid": "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e", 00:22:14.961 "is_configured": true, 00:22:14.961 "data_offset": 2048, 00:22:14.961 "data_size": 63488 00:22:14.961 } 00:22:14.961 ] 00:22:14.961 }' 00:22:14.961 00:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.961 00:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.529 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.529 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:15.529 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:15.529 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:15.788 [2024-07-25 00:06:11.579595] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.788 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.047 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.047 "name": "Existed_Raid", 00:22:16.047 "uuid": "6f84d303-2e1c-42ac-9833-3301e88cdd44", 00:22:16.047 "strip_size_kb": 64, 00:22:16.047 "state": "configuring", 00:22:16.047 "raid_level": "concat", 00:22:16.047 "superblock": true, 00:22:16.047 "num_base_bdevs": 4, 00:22:16.047 "num_base_bdevs_discovered": 3, 00:22:16.047 "num_base_bdevs_operational": 4, 00:22:16.047 "base_bdevs_list": [ 00:22:16.047 { 00:22:16.047 "name": "BaseBdev1", 00:22:16.047 "uuid": "9b8ce669-e010-4371-8a56-de914246a907", 00:22:16.047 "is_configured": true, 00:22:16.047 "data_offset": 2048, 00:22:16.047 "data_size": 63488 00:22:16.047 }, 00:22:16.047 { 00:22:16.047 "name": null, 00:22:16.047 "uuid": "410a2630-2565-4bd5-b6d7-bac9b538d720", 00:22:16.047 "is_configured": false, 00:22:16.047 "data_offset": 2048, 00:22:16.047 "data_size": 63488 00:22:16.047 }, 00:22:16.047 { 00:22:16.047 "name": "BaseBdev3", 00:22:16.047 "uuid": "52ff38f3-4876-4da9-b81c-0e513577632a", 00:22:16.047 "is_configured": true, 00:22:16.047 "data_offset": 2048, 00:22:16.047 "data_size": 63488 00:22:16.047 }, 00:22:16.047 { 00:22:16.047 "name": "BaseBdev4", 00:22:16.047 "uuid": "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e", 00:22:16.047 "is_configured": true, 00:22:16.047 "data_offset": 2048, 
00:22:16.047 "data_size": 63488 00:22:16.047 } 00:22:16.047 ] 00:22:16.047 }' 00:22:16.047 00:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.047 00:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.613 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.613 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:16.613 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:16.613 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:16.871 [2024-07-25 00:06:12.663982] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.129 00:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.386 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.386 "name": "Existed_Raid", 00:22:17.386 "uuid": "6f84d303-2e1c-42ac-9833-3301e88cdd44", 00:22:17.386 "strip_size_kb": 64, 00:22:17.386 "state": "configuring", 00:22:17.386 "raid_level": "concat", 00:22:17.386 "superblock": true, 00:22:17.386 "num_base_bdevs": 4, 00:22:17.386 "num_base_bdevs_discovered": 2, 00:22:17.386 "num_base_bdevs_operational": 4, 00:22:17.386 "base_bdevs_list": [ 00:22:17.386 { 00:22:17.386 "name": null, 00:22:17.386 "uuid": "9b8ce669-e010-4371-8a56-de914246a907", 00:22:17.386 "is_configured": false, 00:22:17.386 "data_offset": 2048, 00:22:17.386 "data_size": 63488 00:22:17.386 }, 00:22:17.386 { 00:22:17.386 "name": null, 00:22:17.386 "uuid": "410a2630-2565-4bd5-b6d7-bac9b538d720", 00:22:17.386 "is_configured": false, 00:22:17.386 "data_offset": 2048, 00:22:17.386 "data_size": 63488 00:22:17.386 }, 00:22:17.386 { 00:22:17.386 "name": "BaseBdev3", 00:22:17.386 "uuid": 
"52ff38f3-4876-4da9-b81c-0e513577632a", 00:22:17.386 "is_configured": true, 00:22:17.386 "data_offset": 2048, 00:22:17.386 "data_size": 63488 00:22:17.386 }, 00:22:17.386 { 00:22:17.386 "name": "BaseBdev4", 00:22:17.386 "uuid": "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e", 00:22:17.386 "is_configured": true, 00:22:17.386 "data_offset": 2048, 00:22:17.386 "data_size": 63488 00:22:17.386 } 00:22:17.386 ] 00:22:17.386 }' 00:22:17.386 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.386 00:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.644 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:17.644 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.901 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:17.901 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:18.159 [2024-07-25 00:06:13.779125] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.159 00:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.417 00:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:18.417 "name": "Existed_Raid", 00:22:18.417 "uuid": "6f84d303-2e1c-42ac-9833-3301e88cdd44", 00:22:18.417 "strip_size_kb": 64, 00:22:18.417 "state": "configuring", 00:22:18.417 "raid_level": "concat", 00:22:18.417 "superblock": true, 00:22:18.417 "num_base_bdevs": 4, 00:22:18.417 "num_base_bdevs_discovered": 3, 00:22:18.417 "num_base_bdevs_operational": 4, 00:22:18.417 "base_bdevs_list": [ 00:22:18.417 { 00:22:18.417 "name": null, 00:22:18.417 "uuid": "9b8ce669-e010-4371-8a56-de914246a907", 00:22:18.417 "is_configured": false, 
00:22:18.417 "data_offset": 2048, 00:22:18.417 "data_size": 63488 00:22:18.417 }, 00:22:18.417 { 00:22:18.417 "name": "BaseBdev2", 00:22:18.417 "uuid": "410a2630-2565-4bd5-b6d7-bac9b538d720", 00:22:18.417 "is_configured": true, 00:22:18.417 "data_offset": 2048, 00:22:18.417 "data_size": 63488 00:22:18.417 }, 00:22:18.417 { 00:22:18.417 "name": "BaseBdev3", 00:22:18.417 "uuid": "52ff38f3-4876-4da9-b81c-0e513577632a", 00:22:18.417 "is_configured": true, 00:22:18.417 "data_offset": 2048, 00:22:18.417 "data_size": 63488 00:22:18.417 }, 00:22:18.417 { 00:22:18.417 "name": "BaseBdev4", 00:22:18.417 "uuid": "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e", 00:22:18.417 "is_configured": true, 00:22:18.417 "data_offset": 2048, 00:22:18.417 "data_size": 63488 00:22:18.417 } 00:22:18.417 ] 00:22:18.417 }' 00:22:18.417 00:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:18.417 00:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.674 00:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.674 00:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:18.932 00:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:18.932 00:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.932 00:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:19.190 00:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9b8ce669-e010-4371-8a56-de914246a907 00:22:19.447 [2024-07-25 00:06:15.172129] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:19.447 NewBaseBdev 00:22:19.447 [2024-07-25 00:06:15.172824] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:22:19.447 [2024-07-25 00:06:15.172876] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:19.447 [2024-07-25 00:06:15.173066] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ee0 00:22:19.447 [2024-07-25 00:06:15.173500] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:22:19.447 [2024-07-25 00:06:15.173530] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000009380 00:22:19.447 [2024-07-25 00:06:15.173714] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.447 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:19.447 00:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:22:19.447 00:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:19.447 00:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:19.447 00:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:19.447 00:06:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:19.447 00:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:19.704 00:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:19.962 [ 00:22:19.962 { 00:22:19.962 "name": "NewBaseBdev", 00:22:19.962 "aliases": [ 00:22:19.962 "9b8ce669-e010-4371-8a56-de914246a907" 00:22:19.962 ], 00:22:19.962 "product_name": "Malloc disk", 00:22:19.962 "block_size": 512, 00:22:19.962 "num_blocks": 65536, 00:22:19.962 "uuid": "9b8ce669-e010-4371-8a56-de914246a907", 00:22:19.962 "assigned_rate_limits": { 00:22:19.962 "rw_ios_per_sec": 0, 00:22:19.962 "rw_mbytes_per_sec": 0, 00:22:19.962 "r_mbytes_per_sec": 0, 00:22:19.962 "w_mbytes_per_sec": 0 00:22:19.962 }, 00:22:19.962 "claimed": true, 00:22:19.962 "claim_type": "exclusive_write", 00:22:19.962 "zoned": false, 00:22:19.962 "supported_io_types": { 00:22:19.962 "read": true, 00:22:19.962 "write": true, 00:22:19.962 "unmap": true, 00:22:19.962 "flush": true, 00:22:19.962 "reset": true, 00:22:19.962 "nvme_admin": false, 00:22:19.962 "nvme_io": false, 00:22:19.962 "nvme_io_md": false, 00:22:19.962 "write_zeroes": true, 00:22:19.962 "zcopy": true, 00:22:19.962 "get_zone_info": false, 00:22:19.962 "zone_management": false, 00:22:19.962 "zone_append": false, 00:22:19.962 "compare": false, 00:22:19.962 "compare_and_write": false, 00:22:19.962 "abort": true, 00:22:19.962 "seek_hole": false, 00:22:19.962 "seek_data": false, 00:22:19.962 "copy": true, 00:22:19.962 "nvme_iov_md": false 00:22:19.962 }, 00:22:19.962 "memory_domains": [ 00:22:19.962 { 00:22:19.962 "dma_device_id": "system", 00:22:19.962 "dma_device_type": 1 00:22:19.962 }, 00:22:19.962 { 00:22:19.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.962 "dma_device_type": 2 00:22:19.962 } 00:22:19.962 ], 00:22:19.962 "driver_specific": {} 00:22:19.962 } 00:22:19.962 ] 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.962 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.221 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.221 "name": "Existed_Raid", 00:22:20.221 "uuid": "6f84d303-2e1c-42ac-9833-3301e88cdd44", 00:22:20.221 "strip_size_kb": 64, 00:22:20.221 "state": "online", 00:22:20.221 "raid_level": "concat", 00:22:20.221 "superblock": true, 00:22:20.221 "num_base_bdevs": 4, 00:22:20.221 "num_base_bdevs_discovered": 4, 00:22:20.221 "num_base_bdevs_operational": 4, 00:22:20.221 "base_bdevs_list": [ 00:22:20.221 { 00:22:20.221 "name": "NewBaseBdev", 00:22:20.221 "uuid": "9b8ce669-e010-4371-8a56-de914246a907", 00:22:20.221 "is_configured": true, 00:22:20.221 "data_offset": 2048, 00:22:20.221 "data_size": 63488 00:22:20.221 }, 00:22:20.221 { 00:22:20.221 "name": "BaseBdev2", 00:22:20.221 "uuid": "410a2630-2565-4bd5-b6d7-bac9b538d720", 00:22:20.221 "is_configured": true, 00:22:20.221 "data_offset": 2048, 00:22:20.221 "data_size": 63488 00:22:20.221 }, 00:22:20.221 { 00:22:20.221 "name": "BaseBdev3", 00:22:20.221 "uuid": "52ff38f3-4876-4da9-b81c-0e513577632a", 00:22:20.221 "is_configured": true, 00:22:20.221 "data_offset": 2048, 00:22:20.221 "data_size": 63488 00:22:20.221 }, 00:22:20.221 { 00:22:20.221 "name": "BaseBdev4", 00:22:20.221 "uuid": "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e", 00:22:20.221 "is_configured": true, 00:22:20.221 "data_offset": 2048, 00:22:20.221 "data_size": 63488 00:22:20.221 } 00:22:20.221 ] 00:22:20.221 }' 00:22:20.221 00:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.221 00:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.479 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:20.479 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:20.479 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:20.479 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:20.479 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:20.479 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:20.479 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:20.479 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:20.737 [2024-07-25 00:06:16.484963] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:20.737 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:20.737 "name": "Existed_Raid", 00:22:20.737 "aliases": [ 00:22:20.737 "6f84d303-2e1c-42ac-9833-3301e88cdd44" 00:22:20.737 ], 00:22:20.737 "product_name": "Raid Volume", 00:22:20.737 "block_size": 512, 00:22:20.737 "num_blocks": 253952, 00:22:20.737 "uuid": "6f84d303-2e1c-42ac-9833-3301e88cdd44", 00:22:20.737 "assigned_rate_limits": { 00:22:20.737 "rw_ios_per_sec": 0, 00:22:20.737 "rw_mbytes_per_sec": 0, 00:22:20.737 "r_mbytes_per_sec": 0, 00:22:20.737 "w_mbytes_per_sec": 0 00:22:20.737 }, 
00:22:20.737 "claimed": false, 00:22:20.737 "zoned": false, 00:22:20.737 "supported_io_types": { 00:22:20.737 "read": true, 00:22:20.737 "write": true, 00:22:20.737 "unmap": true, 00:22:20.737 "flush": true, 00:22:20.737 "reset": true, 00:22:20.737 "nvme_admin": false, 00:22:20.737 "nvme_io": false, 00:22:20.737 "nvme_io_md": false, 00:22:20.737 "write_zeroes": true, 00:22:20.737 "zcopy": false, 00:22:20.737 "get_zone_info": false, 00:22:20.737 "zone_management": false, 00:22:20.737 "zone_append": false, 00:22:20.737 "compare": false, 00:22:20.737 "compare_and_write": false, 00:22:20.737 "abort": false, 00:22:20.737 "seek_hole": false, 00:22:20.737 "seek_data": false, 00:22:20.737 "copy": false, 00:22:20.737 "nvme_iov_md": false 00:22:20.737 }, 00:22:20.737 "memory_domains": [ 00:22:20.737 { 00:22:20.737 "dma_device_id": "system", 00:22:20.737 "dma_device_type": 1 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.737 "dma_device_type": 2 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "dma_device_id": "system", 00:22:20.737 "dma_device_type": 1 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.737 "dma_device_type": 2 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "dma_device_id": "system", 00:22:20.737 "dma_device_type": 1 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.737 "dma_device_type": 2 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "dma_device_id": "system", 00:22:20.737 "dma_device_type": 1 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.737 "dma_device_type": 2 00:22:20.737 } 00:22:20.737 ], 00:22:20.737 "driver_specific": { 00:22:20.737 "raid": { 00:22:20.737 "uuid": "6f84d303-2e1c-42ac-9833-3301e88cdd44", 00:22:20.737 "strip_size_kb": 64, 00:22:20.737 "state": "online", 00:22:20.737 "raid_level": "concat", 00:22:20.737 "superblock": true, 00:22:20.737 "num_base_bdevs": 4, 00:22:20.737 "num_base_bdevs_discovered": 4, 00:22:20.737 "num_base_bdevs_operational": 4, 00:22:20.737 "base_bdevs_list": [ 00:22:20.737 { 00:22:20.737 "name": "NewBaseBdev", 00:22:20.737 "uuid": "9b8ce669-e010-4371-8a56-de914246a907", 00:22:20.737 "is_configured": true, 00:22:20.737 "data_offset": 2048, 00:22:20.737 "data_size": 63488 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "name": "BaseBdev2", 00:22:20.737 "uuid": "410a2630-2565-4bd5-b6d7-bac9b538d720", 00:22:20.737 "is_configured": true, 00:22:20.737 "data_offset": 2048, 00:22:20.737 "data_size": 63488 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "name": "BaseBdev3", 00:22:20.737 "uuid": "52ff38f3-4876-4da9-b81c-0e513577632a", 00:22:20.737 "is_configured": true, 00:22:20.737 "data_offset": 2048, 00:22:20.737 "data_size": 63488 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "name": "BaseBdev4", 00:22:20.737 "uuid": "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e", 00:22:20.737 "is_configured": true, 00:22:20.737 "data_offset": 2048, 00:22:20.737 "data_size": 63488 00:22:20.737 } 00:22:20.737 ] 00:22:20.737 } 00:22:20.737 } 00:22:20.737 }' 00:22:20.737 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:20.737 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:20.737 BaseBdev2 00:22:20.737 BaseBdev3 00:22:20.737 BaseBdev4' 00:22:20.737 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:22:20.737 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:20.737 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.996 "name": "NewBaseBdev", 00:22:20.996 "aliases": [ 00:22:20.996 "9b8ce669-e010-4371-8a56-de914246a907" 00:22:20.996 ], 00:22:20.996 "product_name": "Malloc disk", 00:22:20.996 "block_size": 512, 00:22:20.996 "num_blocks": 65536, 00:22:20.996 "uuid": "9b8ce669-e010-4371-8a56-de914246a907", 00:22:20.996 "assigned_rate_limits": { 00:22:20.996 "rw_ios_per_sec": 0, 00:22:20.996 "rw_mbytes_per_sec": 0, 00:22:20.996 "r_mbytes_per_sec": 0, 00:22:20.996 "w_mbytes_per_sec": 0 00:22:20.996 }, 00:22:20.996 "claimed": true, 00:22:20.996 "claim_type": "exclusive_write", 00:22:20.996 "zoned": false, 00:22:20.996 "supported_io_types": { 00:22:20.996 "read": true, 00:22:20.996 "write": true, 00:22:20.996 "unmap": true, 00:22:20.996 "flush": true, 00:22:20.996 "reset": true, 00:22:20.996 "nvme_admin": false, 00:22:20.996 "nvme_io": false, 00:22:20.996 "nvme_io_md": false, 00:22:20.996 "write_zeroes": true, 00:22:20.996 "zcopy": true, 00:22:20.996 "get_zone_info": false, 00:22:20.996 "zone_management": false, 00:22:20.996 "zone_append": false, 00:22:20.996 "compare": false, 00:22:20.996 "compare_and_write": false, 00:22:20.996 "abort": true, 00:22:20.996 "seek_hole": false, 00:22:20.996 "seek_data": false, 00:22:20.996 "copy": true, 00:22:20.996 "nvme_iov_md": false 00:22:20.996 }, 00:22:20.996 "memory_domains": [ 00:22:20.996 { 00:22:20.996 "dma_device_id": "system", 00:22:20.996 "dma_device_type": 1 00:22:20.996 }, 00:22:20.996 { 00:22:20.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.996 "dma_device_type": 2 00:22:20.996 } 00:22:20.996 ], 00:22:20.996 "driver_specific": {} 00:22:20.996 }' 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:20.996 00:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:21.254 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:21.254 "name": "BaseBdev2", 00:22:21.254 "aliases": [ 00:22:21.254 "410a2630-2565-4bd5-b6d7-bac9b538d720" 00:22:21.254 ], 00:22:21.254 "product_name": "Malloc disk", 00:22:21.254 "block_size": 512, 00:22:21.254 "num_blocks": 65536, 00:22:21.254 "uuid": "410a2630-2565-4bd5-b6d7-bac9b538d720", 00:22:21.254 "assigned_rate_limits": { 00:22:21.254 "rw_ios_per_sec": 0, 00:22:21.254 "rw_mbytes_per_sec": 0, 00:22:21.254 "r_mbytes_per_sec": 0, 00:22:21.254 "w_mbytes_per_sec": 0 00:22:21.254 }, 00:22:21.254 "claimed": true, 00:22:21.254 "claim_type": "exclusive_write", 00:22:21.254 "zoned": false, 00:22:21.254 "supported_io_types": { 00:22:21.254 "read": true, 00:22:21.254 "write": true, 00:22:21.254 "unmap": true, 00:22:21.254 "flush": true, 00:22:21.254 "reset": true, 00:22:21.254 "nvme_admin": false, 00:22:21.254 "nvme_io": false, 00:22:21.254 "nvme_io_md": false, 00:22:21.254 "write_zeroes": true, 00:22:21.254 "zcopy": true, 00:22:21.254 "get_zone_info": false, 00:22:21.254 "zone_management": false, 00:22:21.254 "zone_append": false, 00:22:21.254 "compare": false, 00:22:21.254 "compare_and_write": false, 00:22:21.254 "abort": true, 00:22:21.254 "seek_hole": false, 00:22:21.254 "seek_data": false, 00:22:21.254 "copy": true, 00:22:21.254 "nvme_iov_md": false 00:22:21.254 }, 00:22:21.254 "memory_domains": [ 00:22:21.254 { 00:22:21.254 "dma_device_id": "system", 00:22:21.254 "dma_device_type": 1 00:22:21.254 }, 00:22:21.254 { 00:22:21.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.254 "dma_device_type": 2 00:22:21.254 } 00:22:21.254 ], 00:22:21.254 "driver_specific": {} 00:22:21.254 }' 00:22:21.254 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.254 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.254 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:21.254 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.254 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.512 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.513 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.513 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.513 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.513 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.513 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.513 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.513 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:21.513 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:21.513 00:06:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:21.772 "name": "BaseBdev3", 00:22:21.772 "aliases": [ 00:22:21.772 "52ff38f3-4876-4da9-b81c-0e513577632a" 00:22:21.772 ], 00:22:21.772 "product_name": "Malloc disk", 00:22:21.772 "block_size": 512, 00:22:21.772 "num_blocks": 65536, 00:22:21.772 "uuid": "52ff38f3-4876-4da9-b81c-0e513577632a", 00:22:21.772 "assigned_rate_limits": { 00:22:21.772 "rw_ios_per_sec": 0, 00:22:21.772 "rw_mbytes_per_sec": 0, 00:22:21.772 "r_mbytes_per_sec": 0, 00:22:21.772 "w_mbytes_per_sec": 0 00:22:21.772 }, 00:22:21.772 "claimed": true, 00:22:21.772 "claim_type": "exclusive_write", 00:22:21.772 "zoned": false, 00:22:21.772 "supported_io_types": { 00:22:21.772 "read": true, 00:22:21.772 "write": true, 00:22:21.772 "unmap": true, 00:22:21.772 "flush": true, 00:22:21.772 "reset": true, 00:22:21.772 "nvme_admin": false, 00:22:21.772 "nvme_io": false, 00:22:21.772 "nvme_io_md": false, 00:22:21.772 "write_zeroes": true, 00:22:21.772 "zcopy": true, 00:22:21.772 "get_zone_info": false, 00:22:21.772 "zone_management": false, 00:22:21.772 "zone_append": false, 00:22:21.772 "compare": false, 00:22:21.772 "compare_and_write": false, 00:22:21.772 "abort": true, 00:22:21.772 "seek_hole": false, 00:22:21.772 "seek_data": false, 00:22:21.772 "copy": true, 00:22:21.772 "nvme_iov_md": false 00:22:21.772 }, 00:22:21.772 "memory_domains": [ 00:22:21.772 { 00:22:21.772 "dma_device_id": "system", 00:22:21.772 "dma_device_type": 1 00:22:21.772 }, 00:22:21.772 { 00:22:21.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.772 "dma_device_type": 2 00:22:21.772 } 00:22:21.772 ], 00:22:21.772 "driver_specific": {} 00:22:21.772 }' 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:21.772 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:22.031 "name": "BaseBdev4", 00:22:22.031 "aliases": [ 00:22:22.031 "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e" 00:22:22.031 ], 00:22:22.031 "product_name": "Malloc disk", 00:22:22.031 "block_size": 512, 00:22:22.031 "num_blocks": 65536, 00:22:22.031 "uuid": "5ab0ecfc-8f64-4644-a5b8-613d6d56ba7e", 00:22:22.031 "assigned_rate_limits": { 00:22:22.031 "rw_ios_per_sec": 0, 00:22:22.031 "rw_mbytes_per_sec": 0, 00:22:22.031 "r_mbytes_per_sec": 0, 00:22:22.031 "w_mbytes_per_sec": 0 00:22:22.031 }, 00:22:22.031 "claimed": true, 00:22:22.031 "claim_type": "exclusive_write", 00:22:22.031 "zoned": false, 00:22:22.031 "supported_io_types": { 00:22:22.031 "read": true, 00:22:22.031 "write": true, 00:22:22.031 "unmap": true, 00:22:22.031 "flush": true, 00:22:22.031 "reset": true, 00:22:22.031 "nvme_admin": false, 00:22:22.031 "nvme_io": false, 00:22:22.031 "nvme_io_md": false, 00:22:22.031 "write_zeroes": true, 00:22:22.031 "zcopy": true, 00:22:22.031 "get_zone_info": false, 00:22:22.031 "zone_management": false, 00:22:22.031 "zone_append": false, 00:22:22.031 "compare": false, 00:22:22.031 "compare_and_write": false, 00:22:22.031 "abort": true, 00:22:22.031 "seek_hole": false, 00:22:22.031 "seek_data": false, 00:22:22.031 "copy": true, 00:22:22.031 "nvme_iov_md": false 00:22:22.031 }, 00:22:22.031 "memory_domains": [ 00:22:22.031 { 00:22:22.031 "dma_device_id": "system", 00:22:22.031 "dma_device_type": 1 00:22:22.031 }, 00:22:22.031 { 00:22:22.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.031 "dma_device_type": 2 00:22:22.031 } 00:22:22.031 ], 00:22:22.031 "driver_specific": {} 00:22:22.031 }' 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:22.031 00:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:22.290 [2024-07-25 00:06:18.013042] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:22.290 [2024-07-25 00:06:18.013083] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:22.290 [2024-07-25 00:06:18.013185] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.290 [2024-07-25 00:06:18.013295] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.290 [2024-07-25 00:06:18.013311] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name Existed_Raid, state offline 00:22:22.290 00:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 91864 00:22:22.290 00:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91864 ']' 00:22:22.291 00:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91864 00:22:22.291 00:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:22:22.291 00:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.291 00:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91864 00:22:22.291 killing process with pid 91864 00:22:22.291 00:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:22.291 00:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:22.291 00:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91864' 00:22:22.291 00:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91864 00:22:22.291 [2024-07-25 00:06:18.064634] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:22.291 00:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 91864 00:22:22.550 [2024-07-25 00:06:18.365191] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:23.953 00:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:23.953 00:22:23.953 real 0m27.425s 00:22:23.953 user 0m48.026s 00:22:23.953 sys 0m4.263s 00:22:23.953 00:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.953 00:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.953 ************************************ 00:22:23.953 END TEST raid_state_function_test_sb 00:22:23.953 ************************************ 00:22:23.953 00:06:19 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:22:23.953 00:06:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:23.953 00:06:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:23.953 00:06:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:23.953 ************************************ 00:22:23.953 START TEST raid_superblock_test 00:22:23.953 ************************************ 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:22:23.953 00:06:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=92852 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 92852 /var/tmp/spdk-raid.sock 00:22:23.953 00:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 92852 ']' 00:22:23.954 00:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:23.954 00:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:23.954 00:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:23.954 00:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.954 00:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.954 [2024-07-25 00:06:19.543846] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
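For readers tracing this case: the raid_superblock_test run that follows reduces to a short RPC sequence against the bdev_svc app listening on /var/tmp/spdk-raid.sock. Below is a minimal sketch of that sequence, built only from RPC invocations that appear verbatim later in this trace (bdev_malloc_create, bdev_passthru_create, bdev_raid_create, bdev_get_bdevs, bdev_raid_delete); $SPDK_REPO stands in for the repository path used in this run (/home/vagrant/spdk_repo/spdk), and this is a condensed illustration of what bdev_raid.sh drives, not the script itself.

#!/usr/bin/env bash
# Sketch of the stack this test builds: four 32 MiB malloc bdevs with 512 B
# blocks, each wrapped in a passthru bdev with a fixed UUID, then combined
# into a concat RAID volume with a superblock (-s) and 64 KiB strip size.
RPC="$SPDK_REPO/scripts/rpc.py -s /var/tmp/spdk-raid.sock"  # $SPDK_REPO is a placeholder

for i in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b "malloc$i"
  $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
       -u "00000000-0000-0000-0000-00000000000$i"
done

$RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# Inspect the assembled volume (state should report "online"), then tear down.
$RPC bdev_get_bdevs -b raid_bdev1 | jq '.[] | .driver_specific.raid.state'
$RPC bdev_raid_delete raid_bdev1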
00:22:23.954 [2024-07-25 00:06:19.544070] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92852 ] 00:22:23.954 [2024-07-25 00:06:19.719134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.213 [2024-07-25 00:06:19.944756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.472 [2024-07-25 00:06:20.119470] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:24.731 00:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.731 00:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:22:24.731 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:22:24.731 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:24.731 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:22:24.731 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:22:24.731 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:24.731 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:24.731 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:24.731 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:24.731 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:24.990 malloc1 00:22:24.990 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:25.249 [2024-07-25 00:06:20.957638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:25.249 [2024-07-25 00:06:20.957930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.249 [2024-07-25 00:06:20.958011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:22:25.249 [2024-07-25 00:06:20.958248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.249 [2024-07-25 00:06:20.960878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.249 [2024-07-25 00:06:20.961051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:25.249 pt1 00:22:25.249 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:25.249 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:25.249 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:22:25.249 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:22:25.249 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:25.249 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:22:25.249 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:25.249 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:25.249 00:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:25.508 malloc2 00:22:25.508 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:25.767 [2024-07-25 00:06:21.465923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:25.767 [2024-07-25 00:06:21.466010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.767 [2024-07-25 00:06:21.466044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:22:25.767 [2024-07-25 00:06:21.466058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.767 [2024-07-25 00:06:21.468557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.767 [2024-07-25 00:06:21.468599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:25.767 pt2 00:22:25.767 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:25.767 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:25.767 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:22:25.767 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:22:25.767 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:25.767 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:25.767 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:25.767 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:25.767 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:26.026 malloc3 00:22:26.026 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:26.284 [2024-07-25 00:06:21.930139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:26.284 [2024-07-25 00:06:21.930255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.284 [2024-07-25 00:06:21.930290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:22:26.284 [2024-07-25 00:06:21.930305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.285 [2024-07-25 00:06:21.933123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.285 [2024-07-25 00:06:21.933165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:26.285 pt3 00:22:26.285 
00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:26.285 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:26.285 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:22:26.285 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:22:26.285 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:26.285 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:26.285 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:26.285 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:26.285 00:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:26.544 malloc4 00:22:26.544 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:26.544 [2024-07-25 00:06:22.411090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:26.544 [2024-07-25 00:06:22.411229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.544 [2024-07-25 00:06:22.411299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:22:26.544 [2024-07-25 00:06:22.411314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.802 [2024-07-25 00:06:22.413897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.803 [2024-07-25 00:06:22.413966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:26.803 pt4 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:26.803 [2024-07-25 00:06:22.623132] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:26.803 [2024-07-25 00:06:22.625187] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:26.803 [2024-07-25 00:06:22.625286] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:26.803 [2024-07-25 00:06:22.625370] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:26.803 [2024-07-25 00:06:22.625626] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:22:26.803 [2024-07-25 00:06:22.625642] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:26.803 [2024-07-25 00:06:22.625781] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:22:26.803 [2024-07-25 00:06:22.626367] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:22:26.803 [2024-07-25 00:06:22.626550] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:22:26.803 [2024-07-25 00:06:22.627060] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.803 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.062 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.062 "name": "raid_bdev1", 00:22:27.062 "uuid": "6e0b33c6-97be-4261-90a1-243048cce2e9", 00:22:27.062 "strip_size_kb": 64, 00:22:27.062 "state": "online", 00:22:27.062 "raid_level": "concat", 00:22:27.062 "superblock": true, 00:22:27.062 "num_base_bdevs": 4, 00:22:27.062 "num_base_bdevs_discovered": 4, 00:22:27.062 "num_base_bdevs_operational": 4, 00:22:27.062 "base_bdevs_list": [ 00:22:27.062 { 00:22:27.062 "name": "pt1", 00:22:27.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:27.062 "is_configured": true, 00:22:27.062 "data_offset": 2048, 00:22:27.062 "data_size": 63488 00:22:27.062 }, 00:22:27.062 { 00:22:27.062 "name": "pt2", 00:22:27.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:27.062 "is_configured": true, 00:22:27.062 "data_offset": 2048, 00:22:27.062 "data_size": 63488 00:22:27.062 }, 00:22:27.062 { 00:22:27.062 "name": "pt3", 00:22:27.062 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:27.062 "is_configured": true, 00:22:27.062 "data_offset": 2048, 00:22:27.062 "data_size": 63488 00:22:27.062 }, 00:22:27.062 { 00:22:27.062 "name": "pt4", 00:22:27.062 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:27.062 "is_configured": true, 00:22:27.062 "data_offset": 2048, 00:22:27.062 "data_size": 63488 00:22:27.062 } 00:22:27.062 ] 00:22:27.062 }' 00:22:27.062 00:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.062 00:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.321 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:22:27.321 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:27.321 00:06:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:27.321 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:27.321 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:27.321 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:27.321 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:27.321 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:27.580 [2024-07-25 00:06:23.371730] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:27.580 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:27.580 "name": "raid_bdev1", 00:22:27.580 "aliases": [ 00:22:27.580 "6e0b33c6-97be-4261-90a1-243048cce2e9" 00:22:27.580 ], 00:22:27.580 "product_name": "Raid Volume", 00:22:27.580 "block_size": 512, 00:22:27.580 "num_blocks": 253952, 00:22:27.580 "uuid": "6e0b33c6-97be-4261-90a1-243048cce2e9", 00:22:27.580 "assigned_rate_limits": { 00:22:27.580 "rw_ios_per_sec": 0, 00:22:27.580 "rw_mbytes_per_sec": 0, 00:22:27.580 "r_mbytes_per_sec": 0, 00:22:27.580 "w_mbytes_per_sec": 0 00:22:27.580 }, 00:22:27.580 "claimed": false, 00:22:27.580 "zoned": false, 00:22:27.580 "supported_io_types": { 00:22:27.580 "read": true, 00:22:27.580 "write": true, 00:22:27.580 "unmap": true, 00:22:27.580 "flush": true, 00:22:27.580 "reset": true, 00:22:27.580 "nvme_admin": false, 00:22:27.580 "nvme_io": false, 00:22:27.580 "nvme_io_md": false, 00:22:27.580 "write_zeroes": true, 00:22:27.580 "zcopy": false, 00:22:27.580 "get_zone_info": false, 00:22:27.580 "zone_management": false, 00:22:27.580 "zone_append": false, 00:22:27.580 "compare": false, 00:22:27.580 "compare_and_write": false, 00:22:27.580 "abort": false, 00:22:27.580 "seek_hole": false, 00:22:27.580 "seek_data": false, 00:22:27.580 "copy": false, 00:22:27.580 "nvme_iov_md": false 00:22:27.580 }, 00:22:27.580 "memory_domains": [ 00:22:27.580 { 00:22:27.580 "dma_device_id": "system", 00:22:27.580 "dma_device_type": 1 00:22:27.580 }, 00:22:27.580 { 00:22:27.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.580 "dma_device_type": 2 00:22:27.580 }, 00:22:27.580 { 00:22:27.580 "dma_device_id": "system", 00:22:27.580 "dma_device_type": 1 00:22:27.580 }, 00:22:27.580 { 00:22:27.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.580 "dma_device_type": 2 00:22:27.580 }, 00:22:27.580 { 00:22:27.580 "dma_device_id": "system", 00:22:27.580 "dma_device_type": 1 00:22:27.580 }, 00:22:27.580 { 00:22:27.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.580 "dma_device_type": 2 00:22:27.580 }, 00:22:27.580 { 00:22:27.580 "dma_device_id": "system", 00:22:27.580 "dma_device_type": 1 00:22:27.580 }, 00:22:27.580 { 00:22:27.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.581 "dma_device_type": 2 00:22:27.581 } 00:22:27.581 ], 00:22:27.581 "driver_specific": { 00:22:27.581 "raid": { 00:22:27.581 "uuid": "6e0b33c6-97be-4261-90a1-243048cce2e9", 00:22:27.581 "strip_size_kb": 64, 00:22:27.581 "state": "online", 00:22:27.581 "raid_level": "concat", 00:22:27.581 "superblock": true, 00:22:27.581 "num_base_bdevs": 4, 00:22:27.581 "num_base_bdevs_discovered": 4, 00:22:27.581 "num_base_bdevs_operational": 4, 00:22:27.581 "base_bdevs_list": [ 00:22:27.581 { 00:22:27.581 "name": "pt1", 00:22:27.581 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:22:27.581 "is_configured": true, 00:22:27.581 "data_offset": 2048, 00:22:27.581 "data_size": 63488 00:22:27.581 }, 00:22:27.581 { 00:22:27.581 "name": "pt2", 00:22:27.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:27.581 "is_configured": true, 00:22:27.581 "data_offset": 2048, 00:22:27.581 "data_size": 63488 00:22:27.581 }, 00:22:27.581 { 00:22:27.581 "name": "pt3", 00:22:27.581 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:27.581 "is_configured": true, 00:22:27.581 "data_offset": 2048, 00:22:27.581 "data_size": 63488 00:22:27.581 }, 00:22:27.581 { 00:22:27.581 "name": "pt4", 00:22:27.581 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:27.581 "is_configured": true, 00:22:27.581 "data_offset": 2048, 00:22:27.581 "data_size": 63488 00:22:27.581 } 00:22:27.581 ] 00:22:27.581 } 00:22:27.581 } 00:22:27.581 }' 00:22:27.581 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:27.581 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:27.581 pt2 00:22:27.581 pt3 00:22:27.581 pt4' 00:22:27.581 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:27.581 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:27.581 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:27.839 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:27.839 "name": "pt1", 00:22:27.839 "aliases": [ 00:22:27.839 "00000000-0000-0000-0000-000000000001" 00:22:27.839 ], 00:22:27.839 "product_name": "passthru", 00:22:27.840 "block_size": 512, 00:22:27.840 "num_blocks": 65536, 00:22:27.840 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:27.840 "assigned_rate_limits": { 00:22:27.840 "rw_ios_per_sec": 0, 00:22:27.840 "rw_mbytes_per_sec": 0, 00:22:27.840 "r_mbytes_per_sec": 0, 00:22:27.840 "w_mbytes_per_sec": 0 00:22:27.840 }, 00:22:27.840 "claimed": true, 00:22:27.840 "claim_type": "exclusive_write", 00:22:27.840 "zoned": false, 00:22:27.840 "supported_io_types": { 00:22:27.840 "read": true, 00:22:27.840 "write": true, 00:22:27.840 "unmap": true, 00:22:27.840 "flush": true, 00:22:27.840 "reset": true, 00:22:27.840 "nvme_admin": false, 00:22:27.840 "nvme_io": false, 00:22:27.840 "nvme_io_md": false, 00:22:27.840 "write_zeroes": true, 00:22:27.840 "zcopy": true, 00:22:27.840 "get_zone_info": false, 00:22:27.840 "zone_management": false, 00:22:27.840 "zone_append": false, 00:22:27.840 "compare": false, 00:22:27.840 "compare_and_write": false, 00:22:27.840 "abort": true, 00:22:27.840 "seek_hole": false, 00:22:27.840 "seek_data": false, 00:22:27.840 "copy": true, 00:22:27.840 "nvme_iov_md": false 00:22:27.840 }, 00:22:27.840 "memory_domains": [ 00:22:27.840 { 00:22:27.840 "dma_device_id": "system", 00:22:27.840 "dma_device_type": 1 00:22:27.840 }, 00:22:27.840 { 00:22:27.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.840 "dma_device_type": 2 00:22:27.840 } 00:22:27.840 ], 00:22:27.840 "driver_specific": { 00:22:27.840 "passthru": { 00:22:27.840 "name": "pt1", 00:22:27.840 "base_bdev_name": "malloc1" 00:22:27.840 } 00:22:27.840 } 00:22:27.840 }' 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.840 00:06:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:27.840 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:28.098 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:28.098 "name": "pt2", 00:22:28.098 "aliases": [ 00:22:28.098 "00000000-0000-0000-0000-000000000002" 00:22:28.098 ], 00:22:28.098 "product_name": "passthru", 00:22:28.098 "block_size": 512, 00:22:28.098 "num_blocks": 65536, 00:22:28.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:28.098 "assigned_rate_limits": { 00:22:28.098 "rw_ios_per_sec": 0, 00:22:28.098 "rw_mbytes_per_sec": 0, 00:22:28.098 "r_mbytes_per_sec": 0, 00:22:28.098 "w_mbytes_per_sec": 0 00:22:28.098 }, 00:22:28.098 "claimed": true, 00:22:28.098 "claim_type": "exclusive_write", 00:22:28.098 "zoned": false, 00:22:28.098 "supported_io_types": { 00:22:28.098 "read": true, 00:22:28.098 "write": true, 00:22:28.098 "unmap": true, 00:22:28.098 "flush": true, 00:22:28.098 "reset": true, 00:22:28.098 "nvme_admin": false, 00:22:28.098 "nvme_io": false, 00:22:28.098 "nvme_io_md": false, 00:22:28.098 "write_zeroes": true, 00:22:28.098 "zcopy": true, 00:22:28.098 "get_zone_info": false, 00:22:28.098 "zone_management": false, 00:22:28.098 "zone_append": false, 00:22:28.098 "compare": false, 00:22:28.098 "compare_and_write": false, 00:22:28.098 "abort": true, 00:22:28.098 "seek_hole": false, 00:22:28.098 "seek_data": false, 00:22:28.098 "copy": true, 00:22:28.098 "nvme_iov_md": false 00:22:28.098 }, 00:22:28.098 "memory_domains": [ 00:22:28.098 { 00:22:28.098 "dma_device_id": "system", 00:22:28.098 "dma_device_type": 1 00:22:28.098 }, 00:22:28.098 { 00:22:28.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.098 "dma_device_type": 2 00:22:28.098 } 00:22:28.098 ], 00:22:28.098 "driver_specific": { 00:22:28.098 "passthru": { 00:22:28.098 "name": "pt2", 00:22:28.098 "base_bdev_name": "malloc2" 00:22:28.098 } 00:22:28.098 } 00:22:28.099 }' 00:22:28.099 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.099 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.099 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:22:28.099 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.357 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.357 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:28.357 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.357 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.357 00:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:28.357 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.357 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.357 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:28.357 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:28.357 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:28.357 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:28.616 "name": "pt3", 00:22:28.616 "aliases": [ 00:22:28.616 "00000000-0000-0000-0000-000000000003" 00:22:28.616 ], 00:22:28.616 "product_name": "passthru", 00:22:28.616 "block_size": 512, 00:22:28.616 "num_blocks": 65536, 00:22:28.616 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:28.616 "assigned_rate_limits": { 00:22:28.616 "rw_ios_per_sec": 0, 00:22:28.616 "rw_mbytes_per_sec": 0, 00:22:28.616 "r_mbytes_per_sec": 0, 00:22:28.616 "w_mbytes_per_sec": 0 00:22:28.616 }, 00:22:28.616 "claimed": true, 00:22:28.616 "claim_type": "exclusive_write", 00:22:28.616 "zoned": false, 00:22:28.616 "supported_io_types": { 00:22:28.616 "read": true, 00:22:28.616 "write": true, 00:22:28.616 "unmap": true, 00:22:28.616 "flush": true, 00:22:28.616 "reset": true, 00:22:28.616 "nvme_admin": false, 00:22:28.616 "nvme_io": false, 00:22:28.616 "nvme_io_md": false, 00:22:28.616 "write_zeroes": true, 00:22:28.616 "zcopy": true, 00:22:28.616 "get_zone_info": false, 00:22:28.616 "zone_management": false, 00:22:28.616 "zone_append": false, 00:22:28.616 "compare": false, 00:22:28.616 "compare_and_write": false, 00:22:28.616 "abort": true, 00:22:28.616 "seek_hole": false, 00:22:28.616 "seek_data": false, 00:22:28.616 "copy": true, 00:22:28.616 "nvme_iov_md": false 00:22:28.616 }, 00:22:28.616 "memory_domains": [ 00:22:28.616 { 00:22:28.616 "dma_device_id": "system", 00:22:28.616 "dma_device_type": 1 00:22:28.616 }, 00:22:28.616 { 00:22:28.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.616 "dma_device_type": 2 00:22:28.616 } 00:22:28.616 ], 00:22:28.616 "driver_specific": { 00:22:28.616 "passthru": { 00:22:28.616 "name": "pt3", 00:22:28.616 "base_bdev_name": "malloc3" 00:22:28.616 } 00:22:28.616 } 00:22:28.616 }' 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.616 00:06:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:28.616 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:28.875 "name": "pt4", 00:22:28.875 "aliases": [ 00:22:28.875 "00000000-0000-0000-0000-000000000004" 00:22:28.875 ], 00:22:28.875 "product_name": "passthru", 00:22:28.875 "block_size": 512, 00:22:28.875 "num_blocks": 65536, 00:22:28.875 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:28.875 "assigned_rate_limits": { 00:22:28.875 "rw_ios_per_sec": 0, 00:22:28.875 "rw_mbytes_per_sec": 0, 00:22:28.875 "r_mbytes_per_sec": 0, 00:22:28.875 "w_mbytes_per_sec": 0 00:22:28.875 }, 00:22:28.875 "claimed": true, 00:22:28.875 "claim_type": "exclusive_write", 00:22:28.875 "zoned": false, 00:22:28.875 "supported_io_types": { 00:22:28.875 "read": true, 00:22:28.875 "write": true, 00:22:28.875 "unmap": true, 00:22:28.875 "flush": true, 00:22:28.875 "reset": true, 00:22:28.875 "nvme_admin": false, 00:22:28.875 "nvme_io": false, 00:22:28.875 "nvme_io_md": false, 00:22:28.875 "write_zeroes": true, 00:22:28.875 "zcopy": true, 00:22:28.875 "get_zone_info": false, 00:22:28.875 "zone_management": false, 00:22:28.875 "zone_append": false, 00:22:28.875 "compare": false, 00:22:28.875 "compare_and_write": false, 00:22:28.875 "abort": true, 00:22:28.875 "seek_hole": false, 00:22:28.875 "seek_data": false, 00:22:28.875 "copy": true, 00:22:28.875 "nvme_iov_md": false 00:22:28.875 }, 00:22:28.875 "memory_domains": [ 00:22:28.875 { 00:22:28.875 "dma_device_id": "system", 00:22:28.875 "dma_device_type": 1 00:22:28.875 }, 00:22:28.875 { 00:22:28.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.875 "dma_device_type": 2 00:22:28.875 } 00:22:28.875 ], 00:22:28.875 "driver_specific": { 00:22:28.875 "passthru": { 00:22:28.875 "name": "pt4", 00:22:28.875 "base_bdev_name": "malloc4" 00:22:28.875 } 00:22:28.875 } 00:22:28.875 }' 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:28.875 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:22:29.134 [2024-07-25 00:06:24.924261] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:29.134 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=6e0b33c6-97be-4261-90a1-243048cce2e9 00:22:29.134 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 6e0b33c6-97be-4261-90a1-243048cce2e9 ']' 00:22:29.134 00:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:29.394 [2024-07-25 00:06:25.198223] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:29.394 [2024-07-25 00:06:25.198458] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:29.394 [2024-07-25 00:06:25.198593] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:29.394 [2024-07-25 00:06:25.198733] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:29.394 [2024-07-25 00:06:25.198761] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:22:29.394 00:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:22:29.394 00:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.653 00:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:22:29.653 00:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:22:29.653 00:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:29.653 00:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:29.912 00:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:29.912 00:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:30.170 00:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:30.170 00:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:30.429 00:06:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:30.429 00:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:30.429 00:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:30.429 00:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:30.688 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:30.947 [2024-07-25 00:06:26.763533] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:30.947 [2024-07-25 00:06:26.765519] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:30.947 [2024-07-25 00:06:26.765590] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:30.947 [2024-07-25 00:06:26.765641] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:30.947 [2024-07-25 00:06:26.765704] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:30.947 [2024-07-25 00:06:26.765787] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:30.947 [2024-07-25 00:06:26.765830] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev 
found on bdev malloc3 00:22:30.947 [2024-07-25 00:06:26.765863] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:30.947 [2024-07-25 00:06:26.765882] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:30.947 [2024-07-25 00:06:26.765899] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state configuring 00:22:30.947 request: 00:22:30.947 { 00:22:30.947 "name": "raid_bdev1", 00:22:30.947 "raid_level": "concat", 00:22:30.947 "base_bdevs": [ 00:22:30.947 "malloc1", 00:22:30.947 "malloc2", 00:22:30.947 "malloc3", 00:22:30.948 "malloc4" 00:22:30.948 ], 00:22:30.948 "strip_size_kb": 64, 00:22:30.948 "superblock": false, 00:22:30.948 "method": "bdev_raid_create", 00:22:30.948 "req_id": 1 00:22:30.948 } 00:22:30.948 Got JSON-RPC error response 00:22:30.948 response: 00:22:30.948 { 00:22:30.948 "code": -17, 00:22:30.948 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:30.948 } 00:22:30.948 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:22:30.948 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.948 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.948 00:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.948 00:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.948 00:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:22:31.221 00:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:22:31.221 00:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:22:31.221 00:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:31.489 [2024-07-25 00:06:27.187579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:31.489 [2024-07-25 00:06:27.187841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.489 [2024-07-25 00:06:27.187999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:22:31.489 [2024-07-25 00:06:27.188121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.489 [2024-07-25 00:06:27.190545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.489 [2024-07-25 00:06:27.190767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:31.489 [2024-07-25 00:06:27.190907] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:31.489 [2024-07-25 00:06:27.191024] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:31.489 pt1 00:22:31.489 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:22:31.489 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:31.489 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:31.489 00:06:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:31.489 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:31.489 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:31.489 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.489 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.489 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.489 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.489 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.489 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.747 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.747 "name": "raid_bdev1", 00:22:31.747 "uuid": "6e0b33c6-97be-4261-90a1-243048cce2e9", 00:22:31.747 "strip_size_kb": 64, 00:22:31.747 "state": "configuring", 00:22:31.747 "raid_level": "concat", 00:22:31.747 "superblock": true, 00:22:31.747 "num_base_bdevs": 4, 00:22:31.747 "num_base_bdevs_discovered": 1, 00:22:31.747 "num_base_bdevs_operational": 4, 00:22:31.747 "base_bdevs_list": [ 00:22:31.747 { 00:22:31.747 "name": "pt1", 00:22:31.747 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:31.747 "is_configured": true, 00:22:31.747 "data_offset": 2048, 00:22:31.747 "data_size": 63488 00:22:31.747 }, 00:22:31.747 { 00:22:31.747 "name": null, 00:22:31.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:31.747 "is_configured": false, 00:22:31.747 "data_offset": 2048, 00:22:31.748 "data_size": 63488 00:22:31.748 }, 00:22:31.748 { 00:22:31.748 "name": null, 00:22:31.748 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:31.748 "is_configured": false, 00:22:31.748 "data_offset": 2048, 00:22:31.748 "data_size": 63488 00:22:31.748 }, 00:22:31.748 { 00:22:31.748 "name": null, 00:22:31.748 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:31.748 "is_configured": false, 00:22:31.748 "data_offset": 2048, 00:22:31.748 "data_size": 63488 00:22:31.748 } 00:22:31.748 ] 00:22:31.748 }' 00:22:31.748 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.748 00:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.006 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:22:32.006 00:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:32.265 [2024-07-25 00:06:28.003787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:32.265 [2024-07-25 00:06:28.004124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.265 [2024-07-25 00:06:28.004166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:22:32.265 [2024-07-25 00:06:28.004186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.265 [2024-07-25 00:06:28.004690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:22:32.265 [2024-07-25 00:06:28.004718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:32.265 [2024-07-25 00:06:28.004812] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:32.265 [2024-07-25 00:06:28.004860] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:32.265 pt2 00:22:32.265 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:32.523 [2024-07-25 00:06:28.271890] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:32.523 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:22:32.523 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:32.523 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:32.523 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:32.523 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:32.523 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:32.523 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.523 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.523 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.524 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.524 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.524 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.782 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.782 "name": "raid_bdev1", 00:22:32.782 "uuid": "6e0b33c6-97be-4261-90a1-243048cce2e9", 00:22:32.782 "strip_size_kb": 64, 00:22:32.782 "state": "configuring", 00:22:32.782 "raid_level": "concat", 00:22:32.782 "superblock": true, 00:22:32.782 "num_base_bdevs": 4, 00:22:32.782 "num_base_bdevs_discovered": 1, 00:22:32.782 "num_base_bdevs_operational": 4, 00:22:32.782 "base_bdevs_list": [ 00:22:32.782 { 00:22:32.782 "name": "pt1", 00:22:32.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:32.782 "is_configured": true, 00:22:32.782 "data_offset": 2048, 00:22:32.782 "data_size": 63488 00:22:32.782 }, 00:22:32.782 { 00:22:32.782 "name": null, 00:22:32.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:32.782 "is_configured": false, 00:22:32.782 "data_offset": 2048, 00:22:32.782 "data_size": 63488 00:22:32.782 }, 00:22:32.782 { 00:22:32.782 "name": null, 00:22:32.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:32.782 "is_configured": false, 00:22:32.782 "data_offset": 2048, 00:22:32.782 "data_size": 63488 00:22:32.782 }, 00:22:32.782 { 00:22:32.782 "name": null, 00:22:32.782 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:32.782 "is_configured": false, 00:22:32.782 "data_offset": 2048, 00:22:32.782 "data_size": 63488 00:22:32.782 } 00:22:32.782 ] 00:22:32.782 }' 00:22:32.782 00:06:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.782 00:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.041 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:22:33.041 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:33.041 00:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:33.300 [2024-07-25 00:06:29.052071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:33.300 [2024-07-25 00:06:29.052163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.300 [2024-07-25 00:06:29.052193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:22:33.300 [2024-07-25 00:06:29.052207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.300 [2024-07-25 00:06:29.052703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.300 [2024-07-25 00:06:29.052742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:33.300 [2024-07-25 00:06:29.052846] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:33.300 [2024-07-25 00:06:29.052913] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:33.300 pt2 00:22:33.300 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:33.300 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:33.300 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:33.558 [2024-07-25 00:06:29.316134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:33.558 [2024-07-25 00:06:29.316197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.558 [2024-07-25 00:06:29.316235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:22:33.558 [2024-07-25 00:06:29.316251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.558 [2024-07-25 00:06:29.316748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.558 [2024-07-25 00:06:29.316777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:33.558 [2024-07-25 00:06:29.316913] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:33.558 [2024-07-25 00:06:29.316941] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:33.558 pt3 00:22:33.558 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:33.558 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:33.559 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:33.817 [2024-07-25 00:06:29.530216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:22:33.817 [2024-07-25 00:06:29.530849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.817 [2024-07-25 00:06:29.530923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:22:33.817 [2024-07-25 00:06:29.530947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.817 [2024-07-25 00:06:29.531563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.817 [2024-07-25 00:06:29.531594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:33.817 [2024-07-25 00:06:29.531723] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:33.817 [2024-07-25 00:06:29.531760] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:33.817 [2024-07-25 00:06:29.532026] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:22:33.817 [2024-07-25 00:06:29.532047] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:33.817 [2024-07-25 00:06:29.532515] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:22:33.817 [2024-07-25 00:06:29.533005] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:22:33.817 [2024-07-25 00:06:29.533031] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:22:33.817 [2024-07-25 00:06:29.533468] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.817 pt4 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.817 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.076 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:34.076 "name": "raid_bdev1", 00:22:34.076 "uuid": "6e0b33c6-97be-4261-90a1-243048cce2e9", 00:22:34.076 "strip_size_kb": 64, 00:22:34.076 "state": "online", 00:22:34.076 
"raid_level": "concat", 00:22:34.076 "superblock": true, 00:22:34.076 "num_base_bdevs": 4, 00:22:34.076 "num_base_bdevs_discovered": 4, 00:22:34.076 "num_base_bdevs_operational": 4, 00:22:34.076 "base_bdevs_list": [ 00:22:34.076 { 00:22:34.076 "name": "pt1", 00:22:34.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:34.076 "is_configured": true, 00:22:34.077 "data_offset": 2048, 00:22:34.077 "data_size": 63488 00:22:34.077 }, 00:22:34.077 { 00:22:34.077 "name": "pt2", 00:22:34.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:34.077 "is_configured": true, 00:22:34.077 "data_offset": 2048, 00:22:34.077 "data_size": 63488 00:22:34.077 }, 00:22:34.077 { 00:22:34.077 "name": "pt3", 00:22:34.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:34.077 "is_configured": true, 00:22:34.077 "data_offset": 2048, 00:22:34.077 "data_size": 63488 00:22:34.077 }, 00:22:34.077 { 00:22:34.077 "name": "pt4", 00:22:34.077 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:34.077 "is_configured": true, 00:22:34.077 "data_offset": 2048, 00:22:34.077 "data_size": 63488 00:22:34.077 } 00:22:34.077 ] 00:22:34.077 }' 00:22:34.077 00:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:34.077 00:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.335 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:22:34.335 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:34.335 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:34.335 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:34.335 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:34.335 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:34.335 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:34.335 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:34.594 [2024-07-25 00:06:30.326833] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:34.594 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:34.594 "name": "raid_bdev1", 00:22:34.594 "aliases": [ 00:22:34.594 "6e0b33c6-97be-4261-90a1-243048cce2e9" 00:22:34.594 ], 00:22:34.594 "product_name": "Raid Volume", 00:22:34.594 "block_size": 512, 00:22:34.594 "num_blocks": 253952, 00:22:34.594 "uuid": "6e0b33c6-97be-4261-90a1-243048cce2e9", 00:22:34.594 "assigned_rate_limits": { 00:22:34.594 "rw_ios_per_sec": 0, 00:22:34.594 "rw_mbytes_per_sec": 0, 00:22:34.594 "r_mbytes_per_sec": 0, 00:22:34.594 "w_mbytes_per_sec": 0 00:22:34.594 }, 00:22:34.594 "claimed": false, 00:22:34.594 "zoned": false, 00:22:34.594 "supported_io_types": { 00:22:34.594 "read": true, 00:22:34.594 "write": true, 00:22:34.594 "unmap": true, 00:22:34.594 "flush": true, 00:22:34.594 "reset": true, 00:22:34.594 "nvme_admin": false, 00:22:34.594 "nvme_io": false, 00:22:34.594 "nvme_io_md": false, 00:22:34.594 "write_zeroes": true, 00:22:34.594 "zcopy": false, 00:22:34.594 "get_zone_info": false, 00:22:34.594 "zone_management": false, 00:22:34.594 "zone_append": false, 00:22:34.594 "compare": false, 00:22:34.594 "compare_and_write": false, 
00:22:34.594 "abort": false, 00:22:34.594 "seek_hole": false, 00:22:34.594 "seek_data": false, 00:22:34.594 "copy": false, 00:22:34.594 "nvme_iov_md": false 00:22:34.594 }, 00:22:34.594 "memory_domains": [ 00:22:34.594 { 00:22:34.594 "dma_device_id": "system", 00:22:34.594 "dma_device_type": 1 00:22:34.595 }, 00:22:34.595 { 00:22:34.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.595 "dma_device_type": 2 00:22:34.595 }, 00:22:34.595 { 00:22:34.595 "dma_device_id": "system", 00:22:34.595 "dma_device_type": 1 00:22:34.595 }, 00:22:34.595 { 00:22:34.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.595 "dma_device_type": 2 00:22:34.595 }, 00:22:34.595 { 00:22:34.595 "dma_device_id": "system", 00:22:34.595 "dma_device_type": 1 00:22:34.595 }, 00:22:34.595 { 00:22:34.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.595 "dma_device_type": 2 00:22:34.595 }, 00:22:34.595 { 00:22:34.595 "dma_device_id": "system", 00:22:34.595 "dma_device_type": 1 00:22:34.595 }, 00:22:34.595 { 00:22:34.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.595 "dma_device_type": 2 00:22:34.595 } 00:22:34.595 ], 00:22:34.595 "driver_specific": { 00:22:34.595 "raid": { 00:22:34.595 "uuid": "6e0b33c6-97be-4261-90a1-243048cce2e9", 00:22:34.595 "strip_size_kb": 64, 00:22:34.595 "state": "online", 00:22:34.595 "raid_level": "concat", 00:22:34.595 "superblock": true, 00:22:34.595 "num_base_bdevs": 4, 00:22:34.595 "num_base_bdevs_discovered": 4, 00:22:34.595 "num_base_bdevs_operational": 4, 00:22:34.595 "base_bdevs_list": [ 00:22:34.595 { 00:22:34.595 "name": "pt1", 00:22:34.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:34.595 "is_configured": true, 00:22:34.595 "data_offset": 2048, 00:22:34.595 "data_size": 63488 00:22:34.595 }, 00:22:34.595 { 00:22:34.595 "name": "pt2", 00:22:34.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:34.595 "is_configured": true, 00:22:34.595 "data_offset": 2048, 00:22:34.595 "data_size": 63488 00:22:34.595 }, 00:22:34.595 { 00:22:34.595 "name": "pt3", 00:22:34.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:34.595 "is_configured": true, 00:22:34.595 "data_offset": 2048, 00:22:34.595 "data_size": 63488 00:22:34.595 }, 00:22:34.595 { 00:22:34.595 "name": "pt4", 00:22:34.595 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:34.595 "is_configured": true, 00:22:34.595 "data_offset": 2048, 00:22:34.595 "data_size": 63488 00:22:34.595 } 00:22:34.595 ] 00:22:34.595 } 00:22:34.595 } 00:22:34.595 }' 00:22:34.595 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:34.595 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:34.595 pt2 00:22:34.595 pt3 00:22:34.595 pt4' 00:22:34.595 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.595 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:34.595 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.853 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.853 "name": "pt1", 00:22:34.853 "aliases": [ 00:22:34.853 "00000000-0000-0000-0000-000000000001" 00:22:34.853 ], 00:22:34.853 "product_name": "passthru", 00:22:34.853 "block_size": 512, 00:22:34.853 "num_blocks": 65536, 00:22:34.853 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:22:34.853 "assigned_rate_limits": { 00:22:34.853 "rw_ios_per_sec": 0, 00:22:34.853 "rw_mbytes_per_sec": 0, 00:22:34.853 "r_mbytes_per_sec": 0, 00:22:34.853 "w_mbytes_per_sec": 0 00:22:34.853 }, 00:22:34.853 "claimed": true, 00:22:34.853 "claim_type": "exclusive_write", 00:22:34.853 "zoned": false, 00:22:34.853 "supported_io_types": { 00:22:34.853 "read": true, 00:22:34.853 "write": true, 00:22:34.853 "unmap": true, 00:22:34.853 "flush": true, 00:22:34.853 "reset": true, 00:22:34.854 "nvme_admin": false, 00:22:34.854 "nvme_io": false, 00:22:34.854 "nvme_io_md": false, 00:22:34.854 "write_zeroes": true, 00:22:34.854 "zcopy": true, 00:22:34.854 "get_zone_info": false, 00:22:34.854 "zone_management": false, 00:22:34.854 "zone_append": false, 00:22:34.854 "compare": false, 00:22:34.854 "compare_and_write": false, 00:22:34.854 "abort": true, 00:22:34.854 "seek_hole": false, 00:22:34.854 "seek_data": false, 00:22:34.854 "copy": true, 00:22:34.854 "nvme_iov_md": false 00:22:34.854 }, 00:22:34.854 "memory_domains": [ 00:22:34.854 { 00:22:34.854 "dma_device_id": "system", 00:22:34.854 "dma_device_type": 1 00:22:34.854 }, 00:22:34.854 { 00:22:34.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.854 "dma_device_type": 2 00:22:34.854 } 00:22:34.854 ], 00:22:34.854 "driver_specific": { 00:22:34.854 "passthru": { 00:22:34.854 "name": "pt1", 00:22:34.854 "base_bdev_name": "malloc1" 00:22:34.854 } 00:22:34.854 } 00:22:34.854 }' 00:22:34.854 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.854 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.854 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:34.854 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.854 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.854 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:34.854 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.854 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.854 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:34.854 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.112 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.112 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.112 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.112 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.112 00:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:35.371 "name": "pt2", 00:22:35.371 "aliases": [ 00:22:35.371 "00000000-0000-0000-0000-000000000002" 00:22:35.371 ], 00:22:35.371 "product_name": "passthru", 00:22:35.371 "block_size": 512, 00:22:35.371 "num_blocks": 65536, 00:22:35.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:35.371 "assigned_rate_limits": { 00:22:35.371 "rw_ios_per_sec": 0, 00:22:35.371 "rw_mbytes_per_sec": 0, 
00:22:35.371 "r_mbytes_per_sec": 0, 00:22:35.371 "w_mbytes_per_sec": 0 00:22:35.371 }, 00:22:35.371 "claimed": true, 00:22:35.371 "claim_type": "exclusive_write", 00:22:35.371 "zoned": false, 00:22:35.371 "supported_io_types": { 00:22:35.371 "read": true, 00:22:35.371 "write": true, 00:22:35.371 "unmap": true, 00:22:35.371 "flush": true, 00:22:35.371 "reset": true, 00:22:35.371 "nvme_admin": false, 00:22:35.371 "nvme_io": false, 00:22:35.371 "nvme_io_md": false, 00:22:35.371 "write_zeroes": true, 00:22:35.371 "zcopy": true, 00:22:35.371 "get_zone_info": false, 00:22:35.371 "zone_management": false, 00:22:35.371 "zone_append": false, 00:22:35.371 "compare": false, 00:22:35.371 "compare_and_write": false, 00:22:35.371 "abort": true, 00:22:35.371 "seek_hole": false, 00:22:35.371 "seek_data": false, 00:22:35.371 "copy": true, 00:22:35.371 "nvme_iov_md": false 00:22:35.371 }, 00:22:35.371 "memory_domains": [ 00:22:35.371 { 00:22:35.371 "dma_device_id": "system", 00:22:35.371 "dma_device_type": 1 00:22:35.371 }, 00:22:35.371 { 00:22:35.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.371 "dma_device_type": 2 00:22:35.371 } 00:22:35.371 ], 00:22:35.371 "driver_specific": { 00:22:35.371 "passthru": { 00:22:35.371 "name": "pt2", 00:22:35.371 "base_bdev_name": "malloc2" 00:22:35.371 } 00:22:35.371 } 00:22:35.371 }' 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:35.371 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:35.630 "name": "pt3", 00:22:35.630 "aliases": [ 00:22:35.630 "00000000-0000-0000-0000-000000000003" 00:22:35.630 ], 00:22:35.630 "product_name": "passthru", 00:22:35.630 "block_size": 512, 00:22:35.630 "num_blocks": 65536, 00:22:35.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:35.630 "assigned_rate_limits": { 00:22:35.630 "rw_ios_per_sec": 0, 00:22:35.630 "rw_mbytes_per_sec": 0, 00:22:35.630 "r_mbytes_per_sec": 0, 00:22:35.630 "w_mbytes_per_sec": 0 00:22:35.630 }, 00:22:35.630 "claimed": true, 00:22:35.630 "claim_type": 
"exclusive_write", 00:22:35.630 "zoned": false, 00:22:35.630 "supported_io_types": { 00:22:35.630 "read": true, 00:22:35.630 "write": true, 00:22:35.630 "unmap": true, 00:22:35.630 "flush": true, 00:22:35.630 "reset": true, 00:22:35.630 "nvme_admin": false, 00:22:35.630 "nvme_io": false, 00:22:35.630 "nvme_io_md": false, 00:22:35.630 "write_zeroes": true, 00:22:35.630 "zcopy": true, 00:22:35.630 "get_zone_info": false, 00:22:35.630 "zone_management": false, 00:22:35.630 "zone_append": false, 00:22:35.630 "compare": false, 00:22:35.630 "compare_and_write": false, 00:22:35.630 "abort": true, 00:22:35.630 "seek_hole": false, 00:22:35.630 "seek_data": false, 00:22:35.630 "copy": true, 00:22:35.630 "nvme_iov_md": false 00:22:35.630 }, 00:22:35.630 "memory_domains": [ 00:22:35.630 { 00:22:35.630 "dma_device_id": "system", 00:22:35.630 "dma_device_type": 1 00:22:35.630 }, 00:22:35.630 { 00:22:35.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.630 "dma_device_type": 2 00:22:35.630 } 00:22:35.630 ], 00:22:35.630 "driver_specific": { 00:22:35.630 "passthru": { 00:22:35.630 "name": "pt3", 00:22:35.630 "base_bdev_name": "malloc3" 00:22:35.630 } 00:22:35.630 } 00:22:35.630 }' 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.630 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.631 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:35.631 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.889 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:35.889 "name": "pt4", 00:22:35.889 "aliases": [ 00:22:35.889 "00000000-0000-0000-0000-000000000004" 00:22:35.889 ], 00:22:35.889 "product_name": "passthru", 00:22:35.889 "block_size": 512, 00:22:35.889 "num_blocks": 65536, 00:22:35.889 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:35.889 "assigned_rate_limits": { 00:22:35.889 "rw_ios_per_sec": 0, 00:22:35.889 "rw_mbytes_per_sec": 0, 00:22:35.889 "r_mbytes_per_sec": 0, 00:22:35.889 "w_mbytes_per_sec": 0 00:22:35.889 }, 00:22:35.889 "claimed": true, 00:22:35.889 "claim_type": "exclusive_write", 00:22:35.889 "zoned": false, 00:22:35.889 "supported_io_types": { 00:22:35.889 "read": true, 00:22:35.889 "write": true, 00:22:35.889 
"unmap": true, 00:22:35.889 "flush": true, 00:22:35.889 "reset": true, 00:22:35.889 "nvme_admin": false, 00:22:35.889 "nvme_io": false, 00:22:35.889 "nvme_io_md": false, 00:22:35.889 "write_zeroes": true, 00:22:35.889 "zcopy": true, 00:22:35.889 "get_zone_info": false, 00:22:35.889 "zone_management": false, 00:22:35.889 "zone_append": false, 00:22:35.889 "compare": false, 00:22:35.889 "compare_and_write": false, 00:22:35.889 "abort": true, 00:22:35.889 "seek_hole": false, 00:22:35.889 "seek_data": false, 00:22:35.889 "copy": true, 00:22:35.889 "nvme_iov_md": false 00:22:35.889 }, 00:22:35.889 "memory_domains": [ 00:22:35.889 { 00:22:35.889 "dma_device_id": "system", 00:22:35.889 "dma_device_type": 1 00:22:35.889 }, 00:22:35.889 { 00:22:35.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.889 "dma_device_type": 2 00:22:35.889 } 00:22:35.889 ], 00:22:35.889 "driver_specific": { 00:22:35.889 "passthru": { 00:22:35.889 "name": "pt4", 00:22:35.889 "base_bdev_name": "malloc4" 00:22:35.889 } 00:22:35.889 } 00:22:35.889 }' 00:22:35.889 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.889 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.889 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.889 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.889 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.889 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.889 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:36.147 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:36.147 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:36.147 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:36.147 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:36.147 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:36.147 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:36.147 00:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:22:36.147 [2024-07-25 00:06:31.999648] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 6e0b33c6-97be-4261-90a1-243048cce2e9 '!=' 6e0b33c6-97be-4261-90a1-243048cce2e9 ']' 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 92852 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 92852 ']' 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 92852 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92852 00:22:36.406 killing process with pid 92852 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92852' 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 92852 00:22:36.406 00:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 92852 00:22:36.406 [2024-07-25 00:06:32.056282] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:36.406 [2024-07-25 00:06:32.056368] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.406 [2024-07-25 00:06:32.056495] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.406 [2024-07-25 00:06:32.056518] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:22:36.665 [2024-07-25 00:06:32.348803] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:37.606 ************************************ 00:22:37.606 END TEST raid_superblock_test 00:22:37.606 ************************************ 00:22:37.606 00:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:22:37.606 00:22:37.606 real 0m13.863s 00:22:37.606 user 0m23.463s 00:22:37.606 sys 0m2.182s 00:22:37.606 00:06:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:37.606 00:06:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.606 00:06:33 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:22:37.606 00:06:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:37.606 00:06:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:37.606 00:06:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:37.606 ************************************ 00:22:37.606 START TEST raid_read_error_test 00:22:37.606 ************************************ 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:22:37.606 00:06:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:22:37.606 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:22:37.607 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:22:37.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:37.607 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.NEedv3QhTo 00:22:37.607 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=93341 00:22:37.607 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 93341 /var/tmp/spdk-raid.sock 00:22:37.607 00:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 93341 ']' 00:22:37.607 00:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:37.607 00:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:37.607 00:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.607 00:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:37.607 00:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.607 00:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.865 [2024-07-25 00:06:33.482291] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
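The trace above shows raid_read_error_test assembling its fixture over the SPDK JSON-RPC socket before bdevperf starts issuing I/O: each base bdev is a malloc bdev wrapped in an error bdev (which rpc.py names EE_<base>), wrapped in turn by a passthru bdev that the concat array claims. Condensed, the flow is roughly the sketch below; the rpc.py path, socket, sizes, and bdev names mirror the log, but the loop is a simplification for illustration, not the literal bdev_raid.sh code.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      # 32 MiB malloc backing store with 512-byte blocks
      $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
      # error bdev wraps the malloc bdev; it comes up as EE_BaseBdev${i}_malloc
      $rpc bdev_error_create "BaseBdev${i}_malloc"
      # passthru bdev on top of the error bdev becomes the raid base bdev
      $rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  # concat array with a 64 KiB strip size and an on-disk superblock (-s)
  $rpc bdev_raid_create -z 64 -r concat -s \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
  # arm read failures on the first base bdev, then let bdevperf run its workload
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests

Because bdevperf was started with -z, it idles until that perform_tests RPC arrives, which is why the raid can be created on the already-running process over the same socket.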
00:22:37.865 [2024-07-25 00:06:33.482494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93341 ] 00:22:37.865 [2024-07-25 00:06:33.652472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.122 [2024-07-25 00:06:33.814855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.122 [2024-07-25 00:06:33.971116] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:38.688 00:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.688 00:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:22:38.688 00:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:38.688 00:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:38.946 BaseBdev1_malloc 00:22:38.946 00:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:39.204 true 00:22:39.204 00:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:39.462 [2024-07-25 00:06:35.137978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:39.462 [2024-07-25 00:06:35.138084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.462 [2024-07-25 00:06:35.138118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:22:39.462 [2024-07-25 00:06:35.138135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.462 [2024-07-25 00:06:35.140978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.462 [2024-07-25 00:06:35.141029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:39.462 BaseBdev1 00:22:39.462 00:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:39.462 00:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:39.719 BaseBdev2_malloc 00:22:39.719 00:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:39.978 true 00:22:39.978 00:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:39.978 [2024-07-25 00:06:35.840071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:39.978 [2024-07-25 00:06:35.840384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.978 [2024-07-25 00:06:35.840546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:22:39.978 [2024-07-25 00:06:35.840678] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.978 [2024-07-25 00:06:35.843283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.978 [2024-07-25 00:06:35.843535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:39.978 BaseBdev2 00:22:40.246 00:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:40.246 00:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:40.246 BaseBdev3_malloc 00:22:40.519 00:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:40.519 true 00:22:40.519 00:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:40.776 [2024-07-25 00:06:36.527987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:40.776 [2024-07-25 00:06:36.528075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.776 [2024-07-25 00:06:36.528105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:22:40.776 [2024-07-25 00:06:36.528120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.776 [2024-07-25 00:06:36.530598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.776 [2024-07-25 00:06:36.530644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:40.776 BaseBdev3 00:22:40.776 00:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:40.776 00:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:41.033 BaseBdev4_malloc 00:22:41.033 00:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:41.291 true 00:22:41.291 00:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:41.549 [2024-07-25 00:06:37.186478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:41.549 [2024-07-25 00:06:37.186569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.549 [2024-07-25 00:06:37.186600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:22:41.549 [2024-07-25 00:06:37.186615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.549 [2024-07-25 00:06:37.189055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.549 [2024-07-25 00:06:37.189103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:41.549 BaseBdev4 00:22:41.550 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:41.550 [2024-07-25 00:06:37.402616] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:41.550 [2024-07-25 00:06:37.404717] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:41.550 [2024-07-25 00:06:37.404805] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:41.550 [2024-07-25 00:06:37.404902] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:41.550 [2024-07-25 00:06:37.405165] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a280 00:22:41.550 [2024-07-25 00:06:37.405187] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:41.550 [2024-07-25 00:06:37.405364] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:22:41.550 [2024-07-25 00:06:37.405964] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a280 00:22:41.550 [2024-07-25 00:06:37.406003] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a280 00:22:41.550 [2024-07-25 00:06:37.406163] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.808 "name": "raid_bdev1", 00:22:41.808 "uuid": "b929afcc-93f2-4f94-bcec-893eff055f01", 00:22:41.808 "strip_size_kb": 64, 00:22:41.808 "state": "online", 00:22:41.808 "raid_level": "concat", 00:22:41.808 "superblock": true, 00:22:41.808 "num_base_bdevs": 4, 00:22:41.808 "num_base_bdevs_discovered": 4, 00:22:41.808 "num_base_bdevs_operational": 4, 00:22:41.808 "base_bdevs_list": [ 00:22:41.808 { 00:22:41.808 "name": "BaseBdev1", 00:22:41.808 "uuid": "fe511402-86be-503a-b784-74f16feab71e", 00:22:41.808 "is_configured": true, 00:22:41.808 "data_offset": 2048, 00:22:41.808 "data_size": 63488 00:22:41.808 }, 00:22:41.808 { 00:22:41.808 "name": "BaseBdev2", 
00:22:41.808 "uuid": "088e074d-4d96-5775-bea2-02e3ab2bc5b5", 00:22:41.808 "is_configured": true, 00:22:41.808 "data_offset": 2048, 00:22:41.808 "data_size": 63488 00:22:41.808 }, 00:22:41.808 { 00:22:41.808 "name": "BaseBdev3", 00:22:41.808 "uuid": "95f18114-179e-5f12-aaf2-772bbe232f43", 00:22:41.808 "is_configured": true, 00:22:41.808 "data_offset": 2048, 00:22:41.808 "data_size": 63488 00:22:41.808 }, 00:22:41.808 { 00:22:41.808 "name": "BaseBdev4", 00:22:41.808 "uuid": "1f98ec0d-d7fe-5acb-a99b-6f2d1c512404", 00:22:41.808 "is_configured": true, 00:22:41.808 "data_offset": 2048, 00:22:41.808 "data_size": 63488 00:22:41.808 } 00:22:41.808 ] 00:22:41.808 }' 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.808 00:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.373 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:22:42.373 00:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:42.373 [2024-07-25 00:06:38.063963] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:22:43.308 00:06:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:43.565 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:22:43.565 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.566 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.823 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:43.823 "name": "raid_bdev1", 00:22:43.823 "uuid": "b929afcc-93f2-4f94-bcec-893eff055f01", 00:22:43.823 "strip_size_kb": 64, 00:22:43.823 "state": "online", 00:22:43.823 "raid_level": "concat", 00:22:43.823 "superblock": true, 
00:22:43.823 "num_base_bdevs": 4, 00:22:43.823 "num_base_bdevs_discovered": 4, 00:22:43.823 "num_base_bdevs_operational": 4, 00:22:43.823 "base_bdevs_list": [ 00:22:43.823 { 00:22:43.823 "name": "BaseBdev1", 00:22:43.823 "uuid": "fe511402-86be-503a-b784-74f16feab71e", 00:22:43.823 "is_configured": true, 00:22:43.823 "data_offset": 2048, 00:22:43.823 "data_size": 63488 00:22:43.823 }, 00:22:43.823 { 00:22:43.823 "name": "BaseBdev2", 00:22:43.823 "uuid": "088e074d-4d96-5775-bea2-02e3ab2bc5b5", 00:22:43.823 "is_configured": true, 00:22:43.823 "data_offset": 2048, 00:22:43.823 "data_size": 63488 00:22:43.823 }, 00:22:43.823 { 00:22:43.823 "name": "BaseBdev3", 00:22:43.823 "uuid": "95f18114-179e-5f12-aaf2-772bbe232f43", 00:22:43.823 "is_configured": true, 00:22:43.823 "data_offset": 2048, 00:22:43.823 "data_size": 63488 00:22:43.823 }, 00:22:43.823 { 00:22:43.823 "name": "BaseBdev4", 00:22:43.823 "uuid": "1f98ec0d-d7fe-5acb-a99b-6f2d1c512404", 00:22:43.823 "is_configured": true, 00:22:43.823 "data_offset": 2048, 00:22:43.823 "data_size": 63488 00:22:43.823 } 00:22:43.823 ] 00:22:43.823 }' 00:22:43.823 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:43.823 00:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.081 00:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:44.339 [2024-07-25 00:06:40.043351] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:44.339 [2024-07-25 00:06:40.043590] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:44.339 [2024-07-25 00:06:40.046918] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:44.339 [2024-07-25 00:06:40.047116] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.339 [2024-07-25 00:06:40.047218] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:44.339 [2024-07-25 00:06:40.047448] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state offline 00:22:44.339 0 00:22:44.339 00:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 93341 00:22:44.339 00:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 93341 ']' 00:22:44.339 00:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 93341 00:22:44.339 00:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:22:44.339 00:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.339 00:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93341 00:22:44.339 killing process with pid 93341 00:22:44.339 00:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:44.339 00:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:44.339 00:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93341' 00:22:44.339 00:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 93341 00:22:44.339 [2024-07-25 00:06:40.098820] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:44.339 
00:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 93341 00:22:44.597 [2024-07-25 00:06:40.357174] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:45.970 00:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:22:45.970 00:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.NEedv3QhTo 00:22:45.970 00:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:22:45.970 00:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.51 00:22:45.970 00:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:22:45.970 ************************************ 00:22:45.970 END TEST raid_read_error_test 00:22:45.970 ************************************ 00:22:45.970 00:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:45.970 00:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:45.970 00:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.51 != \0\.\0\0 ]] 00:22:45.970 00:22:45.970 real 0m8.087s 00:22:45.970 user 0m12.143s 00:22:45.970 sys 0m0.973s 00:22:45.970 00:06:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:45.970 00:06:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.970 00:06:41 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:22:45.970 00:06:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:45.970 00:06:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.970 00:06:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:45.970 ************************************ 00:22:45.970 START TEST raid_write_error_test 00:22:45.970 ************************************ 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:45.970 00:06:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.K4C61W0TLX 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=93531 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 93531 /var/tmp/spdk-raid.sock 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 93531 ']' 00:22:45.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.970 00:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.970 [2024-07-25 00:06:41.598188] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
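As in the read-error run above, raid_write_error_test decides pass or fail in two steps: verify_raid_bdev_state filters the bdev_raid_get_bdevs output with jq and compares it against the expected state, and after perform_tests the harness pulls a failure rate out of the bdevperf log (here the tmp.K4C61W0TLX file just created) and requires it to be non-zero, since concat carries no redundancy and injected errors must surface as failed I/O. A rough sketch of that accounting, assuming the same log layout the read test parsed earlier, where the test reads column 6 of the raid_bdev1 row as its failures-per-second figure:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # the array must report itself online before and after the workload
  state=$($rpc bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .state')
  [[ $state == online ]]
  # strip the Job header rows, keep the raid_bdev1 summary row, take column 6
  fail_per_s=$(grep -v Job /raidtest/tmp.K4C61W0TLX | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != "0.00" ]]   # e.g. 0.51 in the read-error run above

This mirrors the `fail_per_s=0.51` check visible in the read test's teardown; the write variant differs only in injecting `write` rather than `read` failures on EE_BaseBdev1_malloc.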
00:22:45.970 [2024-07-25 00:06:41.598378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93531 ] 00:22:45.970 [2024-07-25 00:06:41.763232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.229 [2024-07-25 00:06:41.945851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.487 [2024-07-25 00:06:42.115590] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:46.745 00:06:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.745 00:06:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:22:46.745 00:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:46.745 00:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:47.005 BaseBdev1_malloc 00:22:47.005 00:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:47.263 true 00:22:47.264 00:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:47.523 [2024-07-25 00:06:43.272777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:47.523 [2024-07-25 00:06:43.272918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.523 [2024-07-25 00:06:43.272952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:22:47.523 [2024-07-25 00:06:43.272969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.523 [2024-07-25 00:06:43.275644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.523 BaseBdev1 00:22:47.523 [2024-07-25 00:06:43.275896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:47.523 00:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:47.523 00:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:47.780 BaseBdev2_malloc 00:22:47.780 00:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:48.037 true 00:22:48.037 00:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:48.294 [2024-07-25 00:06:43.936593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:48.294 [2024-07-25 00:06:43.936694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.294 [2024-07-25 00:06:43.936724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:22:48.294 [2024-07-25 
00:06:43.936741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.294 [2024-07-25 00:06:43.939296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.294 [2024-07-25 00:06:43.939358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:48.294 BaseBdev2 00:22:48.294 00:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:48.294 00:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:48.551 BaseBdev3_malloc 00:22:48.551 00:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:48.551 true 00:22:48.809 00:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:48.810 [2024-07-25 00:06:44.624766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:48.810 [2024-07-25 00:06:44.624882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.810 [2024-07-25 00:06:44.624913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:22:48.810 [2024-07-25 00:06:44.624928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.810 [2024-07-25 00:06:44.627693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.810 BaseBdev3 00:22:48.810 [2024-07-25 00:06:44.627948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:48.810 00:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:48.810 00:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:49.068 BaseBdev4_malloc 00:22:49.327 00:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:49.585 true 00:22:49.585 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:49.585 [2024-07-25 00:06:45.433544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:49.585 [2024-07-25 00:06:45.433883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.585 [2024-07-25 00:06:45.434099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:22:49.585 [2024-07-25 00:06:45.434252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.585 [2024-07-25 00:06:45.436929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.585 [2024-07-25 00:06:45.437128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:49.585 BaseBdev4 00:22:49.585 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:49.843 [2024-07-25 00:06:45.685775] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:49.843 [2024-07-25 00:06:45.688375] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.843 [2024-07-25 00:06:45.688703] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:49.843 [2024-07-25 00:06:45.688988] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:49.843 [2024-07-25 00:06:45.689554] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a280 00:22:49.843 [2024-07-25 00:06:45.689827] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:49.843 [2024-07-25 00:06:45.690042] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:22:49.843 [2024-07-25 00:06:45.690778] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a280 00:22:49.844 [2024-07-25 00:06:45.690946] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a280 00:22:49.844 [2024-07-25 00:06:45.691415] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.844 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:49.844 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:49.844 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:49.844 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:49.844 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:49.844 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:49.844 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.844 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.844 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.844 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.844 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.102 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.361 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.361 "name": "raid_bdev1", 00:22:50.361 "uuid": "827cf366-c5f1-449b-841b-4c422006a0f2", 00:22:50.361 "strip_size_kb": 64, 00:22:50.361 "state": "online", 00:22:50.361 "raid_level": "concat", 00:22:50.361 "superblock": true, 00:22:50.361 "num_base_bdevs": 4, 00:22:50.361 "num_base_bdevs_discovered": 4, 00:22:50.361 "num_base_bdevs_operational": 4, 00:22:50.361 "base_bdevs_list": [ 00:22:50.361 { 00:22:50.361 "name": "BaseBdev1", 00:22:50.361 "uuid": "e07f6b77-1282-5cbd-b187-8b92e5ee5885", 00:22:50.361 "is_configured": true, 00:22:50.361 "data_offset": 2048, 00:22:50.361 "data_size": 63488 00:22:50.361 }, 00:22:50.361 { 
00:22:50.361 "name": "BaseBdev2", 00:22:50.361 "uuid": "a88d9561-56c1-59d1-a511-3f7f5b9a13c1", 00:22:50.361 "is_configured": true, 00:22:50.361 "data_offset": 2048, 00:22:50.361 "data_size": 63488 00:22:50.361 }, 00:22:50.361 { 00:22:50.361 "name": "BaseBdev3", 00:22:50.361 "uuid": "7258f427-300c-562b-bb3d-db98e07a0db9", 00:22:50.361 "is_configured": true, 00:22:50.361 "data_offset": 2048, 00:22:50.361 "data_size": 63488 00:22:50.361 }, 00:22:50.361 { 00:22:50.361 "name": "BaseBdev4", 00:22:50.361 "uuid": "894b2cf6-8c66-52f5-b60a-83e8f57690cd", 00:22:50.361 "is_configured": true, 00:22:50.361 "data_offset": 2048, 00:22:50.361 "data_size": 63488 00:22:50.361 } 00:22:50.361 ] 00:22:50.361 }' 00:22:50.361 00:06:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.361 00:06:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.620 00:06:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:22:50.620 00:06:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:50.620 [2024-07-25 00:06:46.435208] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:22:51.556 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.814 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.072 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.072 "name": "raid_bdev1", 00:22:52.072 "uuid": "827cf366-c5f1-449b-841b-4c422006a0f2", 00:22:52.072 "strip_size_kb": 64, 00:22:52.072 "state": "online", 00:22:52.072 
"raid_level": "concat", 00:22:52.072 "superblock": true, 00:22:52.072 "num_base_bdevs": 4, 00:22:52.072 "num_base_bdevs_discovered": 4, 00:22:52.072 "num_base_bdevs_operational": 4, 00:22:52.072 "base_bdevs_list": [ 00:22:52.072 { 00:22:52.072 "name": "BaseBdev1", 00:22:52.072 "uuid": "e07f6b77-1282-5cbd-b187-8b92e5ee5885", 00:22:52.072 "is_configured": true, 00:22:52.072 "data_offset": 2048, 00:22:52.072 "data_size": 63488 00:22:52.072 }, 00:22:52.072 { 00:22:52.072 "name": "BaseBdev2", 00:22:52.072 "uuid": "a88d9561-56c1-59d1-a511-3f7f5b9a13c1", 00:22:52.072 "is_configured": true, 00:22:52.072 "data_offset": 2048, 00:22:52.072 "data_size": 63488 00:22:52.072 }, 00:22:52.072 { 00:22:52.072 "name": "BaseBdev3", 00:22:52.072 "uuid": "7258f427-300c-562b-bb3d-db98e07a0db9", 00:22:52.072 "is_configured": true, 00:22:52.072 "data_offset": 2048, 00:22:52.072 "data_size": 63488 00:22:52.072 }, 00:22:52.072 { 00:22:52.072 "name": "BaseBdev4", 00:22:52.072 "uuid": "894b2cf6-8c66-52f5-b60a-83e8f57690cd", 00:22:52.072 "is_configured": true, 00:22:52.072 "data_offset": 2048, 00:22:52.072 "data_size": 63488 00:22:52.072 } 00:22:52.072 ] 00:22:52.072 }' 00:22:52.072 00:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.072 00:06:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.331 00:06:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:52.589 [2024-07-25 00:06:48.429784] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:52.589 [2024-07-25 00:06:48.430059] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:52.589 [2024-07-25 00:06:48.433175] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:52.589 [2024-07-25 00:06:48.433416] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.589 [2024-07-25 00:06:48.433512] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:52.589 [2024-07-25 00:06:48.433731] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state offline 00:22:52.589 0 00:22:52.589 00:06:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 93531 00:22:52.589 00:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 93531 ']' 00:22:52.589 00:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 93531 00:22:52.589 00:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:22:52.589 00:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:52.848 00:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93531 00:22:52.848 00:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:52.848 00:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:52.848 00:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93531' 00:22:52.848 killing process with pid 93531 00:22:52.848 00:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 93531 00:22:52.848 [2024-07-25 00:06:48.479287] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:52.848 00:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 93531 00:22:53.106 [2024-07-25 00:06:48.736289] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:54.040 00:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.K4C61W0TLX 00:22:54.040 00:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:22:54.040 00:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:22:54.040 ************************************ 00:22:54.040 END TEST raid_write_error_test 00:22:54.040 ************************************ 00:22:54.040 00:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.50 00:22:54.040 00:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:22:54.040 00:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:54.040 00:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:54.040 00:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.50 != \0\.\0\0 ]] 00:22:54.040 00:22:54.040 real 0m8.299s 00:22:54.040 user 0m12.603s 00:22:54.040 sys 0m0.999s 00:22:54.040 00:06:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:54.040 00:06:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.040 00:06:49 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:22:54.040 00:06:49 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:22:54.040 00:06:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:54.040 00:06:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:54.040 00:06:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:54.040 ************************************ 00:22:54.040 START TEST raid_state_function_test 00:22:54.040 ************************************ 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:54.040 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=93722 00:22:54.041 Process raid pid: 93722 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 93722' 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 93722 /var/tmp/spdk-raid.sock 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93722 ']' 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.041 00:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.299 [2024-07-25 00:06:49.963269] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
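Before the state-function test gets going, it is worth pinning down how raid_write_error_test (which just reported PASS above) reached its verdict. A condensed sketch under the same assumptions as the previous one; the RPC names and the grep/awk pipeline are taken from the trace, and "column 6" simply names the field the script stores as fail_per_s:

  # Arm the error bdev stacked under BaseBdev1 so every write through it
  # fails, then release the paused bdevperf workload.
  "$SPDK_DIR"/scripts/rpc.py -s "$rpc_sock" \
      bdev_error_inject_error EE_BaseBdev1_malloc write failure
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$rpc_sock" perform_tests
  # bdevperf prints one summary row per bdev; the script reads column 6 of
  # the raid_bdev1 row as the failure rate. concat carries no redundancy
  # (the has_redundancy check above returned 1), so the injected write
  # errors must surface in the stats: the test demands a non-zero rate,
  # 0.50 in the run above.
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ "$fail_per_s" != "0.00" ]]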
00:22:54.299 [2024-07-25 00:06:49.963454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.299 [2024-07-25 00:06:50.139563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.557 [2024-07-25 00:06:50.318026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.815 [2024-07-25 00:06:50.479969] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:55.074 00:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:55.074 00:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:22:55.074 00:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:55.332 [2024-07-25 00:06:51.132615] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:55.332 [2024-07-25 00:06:51.132688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:55.332 [2024-07-25 00:06:51.132702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:55.332 [2024-07-25 00:06:51.132716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:55.332 [2024-07-25 00:06:51.132725] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:55.332 [2024-07-25 00:06:51.132736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:55.332 [2024-07-25 00:06:51.132745] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:55.332 [2024-07-25 00:06:51.132756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.332 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:22:55.590 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:55.590 "name": "Existed_Raid", 00:22:55.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.590 "strip_size_kb": 0, 00:22:55.590 "state": "configuring", 00:22:55.590 "raid_level": "raid1", 00:22:55.590 "superblock": false, 00:22:55.590 "num_base_bdevs": 4, 00:22:55.590 "num_base_bdevs_discovered": 0, 00:22:55.590 "num_base_bdevs_operational": 4, 00:22:55.591 "base_bdevs_list": [ 00:22:55.591 { 00:22:55.591 "name": "BaseBdev1", 00:22:55.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.591 "is_configured": false, 00:22:55.591 "data_offset": 0, 00:22:55.591 "data_size": 0 00:22:55.591 }, 00:22:55.591 { 00:22:55.591 "name": "BaseBdev2", 00:22:55.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.591 "is_configured": false, 00:22:55.591 "data_offset": 0, 00:22:55.591 "data_size": 0 00:22:55.591 }, 00:22:55.591 { 00:22:55.591 "name": "BaseBdev3", 00:22:55.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.591 "is_configured": false, 00:22:55.591 "data_offset": 0, 00:22:55.591 "data_size": 0 00:22:55.591 }, 00:22:55.591 { 00:22:55.591 "name": "BaseBdev4", 00:22:55.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.591 "is_configured": false, 00:22:55.591 "data_offset": 0, 00:22:55.591 "data_size": 0 00:22:55.591 } 00:22:55.591 ] 00:22:55.591 }' 00:22:55.591 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:55.591 00:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.848 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:56.106 [2024-07-25 00:06:51.876676] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:56.106 [2024-07-25 00:06:51.876756] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:22:56.106 00:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:56.365 [2024-07-25 00:06:52.076732] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:56.365 [2024-07-25 00:06:52.076797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:56.365 [2024-07-25 00:06:52.076821] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:56.365 [2024-07-25 00:06:52.076837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:56.365 [2024-07-25 00:06:52.076848] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:56.365 [2024-07-25 00:06:52.076860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:56.365 [2024-07-25 00:06:52.076868] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:56.365 [2024-07-25 00:06:52.076879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:56.365 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:56.623 [2024-07-25 00:06:52.316662] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:56.623 BaseBdev1 00:22:56.623 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:56.623 00:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:56.623 00:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:56.623 00:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:56.623 00:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:56.623 00:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:56.623 00:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:56.882 00:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:57.179 [ 00:22:57.179 { 00:22:57.179 "name": "BaseBdev1", 00:22:57.179 "aliases": [ 00:22:57.179 "8855da8f-a726-4a90-8e57-303e6d0745d2" 00:22:57.179 ], 00:22:57.179 "product_name": "Malloc disk", 00:22:57.179 "block_size": 512, 00:22:57.180 "num_blocks": 65536, 00:22:57.180 "uuid": "8855da8f-a726-4a90-8e57-303e6d0745d2", 00:22:57.180 "assigned_rate_limits": { 00:22:57.180 "rw_ios_per_sec": 0, 00:22:57.180 "rw_mbytes_per_sec": 0, 00:22:57.180 "r_mbytes_per_sec": 0, 00:22:57.180 "w_mbytes_per_sec": 0 00:22:57.180 }, 00:22:57.180 "claimed": true, 00:22:57.180 "claim_type": "exclusive_write", 00:22:57.180 "zoned": false, 00:22:57.180 "supported_io_types": { 00:22:57.180 "read": true, 00:22:57.180 "write": true, 00:22:57.180 "unmap": true, 00:22:57.180 "flush": true, 00:22:57.180 "reset": true, 00:22:57.180 "nvme_admin": false, 00:22:57.180 "nvme_io": false, 00:22:57.180 "nvme_io_md": false, 00:22:57.180 "write_zeroes": true, 00:22:57.180 "zcopy": true, 00:22:57.180 "get_zone_info": false, 00:22:57.180 "zone_management": false, 00:22:57.180 "zone_append": false, 00:22:57.180 "compare": false, 00:22:57.180 "compare_and_write": false, 00:22:57.180 "abort": true, 00:22:57.180 "seek_hole": false, 00:22:57.180 "seek_data": false, 00:22:57.180 "copy": true, 00:22:57.180 "nvme_iov_md": false 00:22:57.180 }, 00:22:57.180 "memory_domains": [ 00:22:57.180 { 00:22:57.180 "dma_device_id": "system", 00:22:57.180 "dma_device_type": 1 00:22:57.180 }, 00:22:57.180 { 00:22:57.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.180 "dma_device_type": 2 00:22:57.180 } 00:22:57.180 ], 00:22:57.180 "driver_specific": {} 00:22:57.180 } 00:22:57.180 ] 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:57.180 00:06:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.180 00:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:57.438 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:57.438 "name": "Existed_Raid", 00:22:57.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.439 "strip_size_kb": 0, 00:22:57.439 "state": "configuring", 00:22:57.439 "raid_level": "raid1", 00:22:57.439 "superblock": false, 00:22:57.439 "num_base_bdevs": 4, 00:22:57.439 "num_base_bdevs_discovered": 1, 00:22:57.439 "num_base_bdevs_operational": 4, 00:22:57.439 "base_bdevs_list": [ 00:22:57.439 { 00:22:57.439 "name": "BaseBdev1", 00:22:57.439 "uuid": "8855da8f-a726-4a90-8e57-303e6d0745d2", 00:22:57.439 "is_configured": true, 00:22:57.439 "data_offset": 0, 00:22:57.439 "data_size": 65536 00:22:57.439 }, 00:22:57.439 { 00:22:57.439 "name": "BaseBdev2", 00:22:57.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.439 "is_configured": false, 00:22:57.439 "data_offset": 0, 00:22:57.439 "data_size": 0 00:22:57.439 }, 00:22:57.439 { 00:22:57.439 "name": "BaseBdev3", 00:22:57.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.439 "is_configured": false, 00:22:57.439 "data_offset": 0, 00:22:57.439 "data_size": 0 00:22:57.439 }, 00:22:57.439 { 00:22:57.439 "name": "BaseBdev4", 00:22:57.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.439 "is_configured": false, 00:22:57.439 "data_offset": 0, 00:22:57.439 "data_size": 0 00:22:57.439 } 00:22:57.439 ] 00:22:57.439 }' 00:22:57.439 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:57.439 00:06:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.696 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:57.954 [2024-07-25 00:06:53.621247] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:57.954 [2024-07-25 00:06:53.621320] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:22:57.954 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:58.212 [2024-07-25 00:06:53.893362] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:58.212 [2024-07-25 00:06:53.895593] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:58.212 
[2024-07-25 00:06:53.895657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:58.212 [2024-07-25 00:06:53.895670] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:58.212 [2024-07-25 00:06:53.895685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:58.212 [2024-07-25 00:06:53.895694] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:58.212 [2024-07-25 00:06:53.895709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.212 00:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:58.470 00:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:58.470 "name": "Existed_Raid", 00:22:58.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.470 "strip_size_kb": 0, 00:22:58.470 "state": "configuring", 00:22:58.470 "raid_level": "raid1", 00:22:58.470 "superblock": false, 00:22:58.470 "num_base_bdevs": 4, 00:22:58.470 "num_base_bdevs_discovered": 1, 00:22:58.470 "num_base_bdevs_operational": 4, 00:22:58.470 "base_bdevs_list": [ 00:22:58.470 { 00:22:58.470 "name": "BaseBdev1", 00:22:58.470 "uuid": "8855da8f-a726-4a90-8e57-303e6d0745d2", 00:22:58.470 "is_configured": true, 00:22:58.470 "data_offset": 0, 00:22:58.470 "data_size": 65536 00:22:58.470 }, 00:22:58.470 { 00:22:58.470 "name": "BaseBdev2", 00:22:58.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.470 "is_configured": false, 00:22:58.470 "data_offset": 0, 00:22:58.470 "data_size": 0 00:22:58.470 }, 00:22:58.470 { 00:22:58.470 "name": "BaseBdev3", 00:22:58.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.470 "is_configured": false, 00:22:58.470 "data_offset": 0, 00:22:58.470 "data_size": 0 00:22:58.470 }, 00:22:58.470 { 00:22:58.470 "name": "BaseBdev4", 
00:22:58.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.470 "is_configured": false, 00:22:58.470 "data_offset": 0, 00:22:58.470 "data_size": 0 00:22:58.470 } 00:22:58.470 ] 00:22:58.470 }' 00:22:58.470 00:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:58.470 00:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.728 00:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:58.987 [2024-07-25 00:06:54.703724] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:58.987 BaseBdev2 00:22:58.987 00:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:58.987 00:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:58.987 00:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:58.987 00:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:58.987 00:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:58.987 00:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:58.987 00:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:59.245 00:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:59.503 [ 00:22:59.503 { 00:22:59.503 "name": "BaseBdev2", 00:22:59.503 "aliases": [ 00:22:59.503 "7bbedd94-a76d-4f87-b7bd-2c71986e2a9b" 00:22:59.503 ], 00:22:59.503 "product_name": "Malloc disk", 00:22:59.503 "block_size": 512, 00:22:59.503 "num_blocks": 65536, 00:22:59.503 "uuid": "7bbedd94-a76d-4f87-b7bd-2c71986e2a9b", 00:22:59.503 "assigned_rate_limits": { 00:22:59.503 "rw_ios_per_sec": 0, 00:22:59.503 "rw_mbytes_per_sec": 0, 00:22:59.503 "r_mbytes_per_sec": 0, 00:22:59.503 "w_mbytes_per_sec": 0 00:22:59.503 }, 00:22:59.503 "claimed": true, 00:22:59.503 "claim_type": "exclusive_write", 00:22:59.503 "zoned": false, 00:22:59.503 "supported_io_types": { 00:22:59.503 "read": true, 00:22:59.503 "write": true, 00:22:59.503 "unmap": true, 00:22:59.503 "flush": true, 00:22:59.503 "reset": true, 00:22:59.503 "nvme_admin": false, 00:22:59.503 "nvme_io": false, 00:22:59.503 "nvme_io_md": false, 00:22:59.503 "write_zeroes": true, 00:22:59.503 "zcopy": true, 00:22:59.503 "get_zone_info": false, 00:22:59.503 "zone_management": false, 00:22:59.503 "zone_append": false, 00:22:59.503 "compare": false, 00:22:59.503 "compare_and_write": false, 00:22:59.503 "abort": true, 00:22:59.503 "seek_hole": false, 00:22:59.503 "seek_data": false, 00:22:59.503 "copy": true, 00:22:59.503 "nvme_iov_md": false 00:22:59.503 }, 00:22:59.503 "memory_domains": [ 00:22:59.503 { 00:22:59.503 "dma_device_id": "system", 00:22:59.503 "dma_device_type": 1 00:22:59.503 }, 00:22:59.503 { 00:22:59.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.503 "dma_device_type": 2 00:22:59.503 } 00:22:59.503 ], 00:22:59.503 "driver_specific": {} 00:22:59.503 } 00:22:59.503 ] 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.504 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.762 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:59.762 "name": "Existed_Raid", 00:22:59.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.762 "strip_size_kb": 0, 00:22:59.762 "state": "configuring", 00:22:59.762 "raid_level": "raid1", 00:22:59.762 "superblock": false, 00:22:59.762 "num_base_bdevs": 4, 00:22:59.762 "num_base_bdevs_discovered": 2, 00:22:59.762 "num_base_bdevs_operational": 4, 00:22:59.762 "base_bdevs_list": [ 00:22:59.762 { 00:22:59.762 "name": "BaseBdev1", 00:22:59.762 "uuid": "8855da8f-a726-4a90-8e57-303e6d0745d2", 00:22:59.762 "is_configured": true, 00:22:59.762 "data_offset": 0, 00:22:59.762 "data_size": 65536 00:22:59.762 }, 00:22:59.762 { 00:22:59.762 "name": "BaseBdev2", 00:22:59.762 "uuid": "7bbedd94-a76d-4f87-b7bd-2c71986e2a9b", 00:22:59.762 "is_configured": true, 00:22:59.762 "data_offset": 0, 00:22:59.762 "data_size": 65536 00:22:59.762 }, 00:22:59.762 { 00:22:59.762 "name": "BaseBdev3", 00:22:59.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.762 "is_configured": false, 00:22:59.762 "data_offset": 0, 00:22:59.762 "data_size": 0 00:22:59.762 }, 00:22:59.762 { 00:22:59.762 "name": "BaseBdev4", 00:22:59.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.762 "is_configured": false, 00:22:59.762 "data_offset": 0, 00:22:59.762 "data_size": 0 00:22:59.762 } 00:22:59.762 ] 00:22:59.762 }' 00:22:59.762 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:59.762 00:06:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.021 00:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
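The loop unfolding here is the heart of raid_state_function_test: Existed_Raid was created over four base bdev names that did not exist yet (each bdev_open_ext probe above reports "Currently unable to find bdev"), so the array sits in "configuring" while num_base_bdevs_discovered climbs by one per bdev_malloc_create; only the fourth claim flips the state to "online". Note also the contrast with the earlier run: this test passes no -s, so "superblock" is false and each member keeps data_offset 0 and data_size 65536, whereas raid_bdev1 above reserved 2048 of each member's 65536 blocks (1 MiB at 512 B) for the superblock, leaving data_size 63488. A condensed sketch of the assembly loop, same assumptions as before, with the waitforbdev helper reduced to its bdev_wait_for_examine step:

  rpc() { "$SPDK_DIR"/scripts/rpc.py -s "$rpc_sock" "$@"; }
  rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' \
      -n Existed_Raid
  for i in 1 2 3 4; do
      # 32 MiB of 512 B blocks = 65536 blocks, matching num_blocks in the dumps.
      rpc bdev_malloc_create 32 512 -b "BaseBdev$i"
      rpc bdev_wait_for_examine    # let the raid module claim the new bdev
      state=$(rpc bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "Existed_Raid").state')
      echo "after BaseBdev$i: $state"   # configuring x3, then online
  done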
00:23:00.280 [2024-07-25 00:06:55.997206] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:00.280 BaseBdev3 00:23:00.280 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:00.280 00:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:00.280 00:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:00.280 00:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:00.280 00:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:00.280 00:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:00.280 00:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:00.538 00:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:00.797 [ 00:23:00.797 { 00:23:00.797 "name": "BaseBdev3", 00:23:00.797 "aliases": [ 00:23:00.797 "b9266932-1ebc-40e1-a80e-c3b5671c8e64" 00:23:00.797 ], 00:23:00.797 "product_name": "Malloc disk", 00:23:00.797 "block_size": 512, 00:23:00.797 "num_blocks": 65536, 00:23:00.797 "uuid": "b9266932-1ebc-40e1-a80e-c3b5671c8e64", 00:23:00.797 "assigned_rate_limits": { 00:23:00.797 "rw_ios_per_sec": 0, 00:23:00.797 "rw_mbytes_per_sec": 0, 00:23:00.797 "r_mbytes_per_sec": 0, 00:23:00.797 "w_mbytes_per_sec": 0 00:23:00.797 }, 00:23:00.797 "claimed": true, 00:23:00.797 "claim_type": "exclusive_write", 00:23:00.797 "zoned": false, 00:23:00.797 "supported_io_types": { 00:23:00.797 "read": true, 00:23:00.797 "write": true, 00:23:00.797 "unmap": true, 00:23:00.797 "flush": true, 00:23:00.797 "reset": true, 00:23:00.797 "nvme_admin": false, 00:23:00.797 "nvme_io": false, 00:23:00.797 "nvme_io_md": false, 00:23:00.797 "write_zeroes": true, 00:23:00.797 "zcopy": true, 00:23:00.797 "get_zone_info": false, 00:23:00.797 "zone_management": false, 00:23:00.797 "zone_append": false, 00:23:00.797 "compare": false, 00:23:00.797 "compare_and_write": false, 00:23:00.797 "abort": true, 00:23:00.797 "seek_hole": false, 00:23:00.797 "seek_data": false, 00:23:00.797 "copy": true, 00:23:00.797 "nvme_iov_md": false 00:23:00.797 }, 00:23:00.797 "memory_domains": [ 00:23:00.797 { 00:23:00.797 "dma_device_id": "system", 00:23:00.797 "dma_device_type": 1 00:23:00.798 }, 00:23:00.798 { 00:23:00.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.798 "dma_device_type": 2 00:23:00.798 } 00:23:00.798 ], 00:23:00.798 "driver_specific": {} 00:23:00.798 } 00:23:00.798 ] 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.798 "name": "Existed_Raid", 00:23:00.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.798 "strip_size_kb": 0, 00:23:00.798 "state": "configuring", 00:23:00.798 "raid_level": "raid1", 00:23:00.798 "superblock": false, 00:23:00.798 "num_base_bdevs": 4, 00:23:00.798 "num_base_bdevs_discovered": 3, 00:23:00.798 "num_base_bdevs_operational": 4, 00:23:00.798 "base_bdevs_list": [ 00:23:00.798 { 00:23:00.798 "name": "BaseBdev1", 00:23:00.798 "uuid": "8855da8f-a726-4a90-8e57-303e6d0745d2", 00:23:00.798 "is_configured": true, 00:23:00.798 "data_offset": 0, 00:23:00.798 "data_size": 65536 00:23:00.798 }, 00:23:00.798 { 00:23:00.798 "name": "BaseBdev2", 00:23:00.798 "uuid": "7bbedd94-a76d-4f87-b7bd-2c71986e2a9b", 00:23:00.798 "is_configured": true, 00:23:00.798 "data_offset": 0, 00:23:00.798 "data_size": 65536 00:23:00.798 }, 00:23:00.798 { 00:23:00.798 "name": "BaseBdev3", 00:23:00.798 "uuid": "b9266932-1ebc-40e1-a80e-c3b5671c8e64", 00:23:00.798 "is_configured": true, 00:23:00.798 "data_offset": 0, 00:23:00.798 "data_size": 65536 00:23:00.798 }, 00:23:00.798 { 00:23:00.798 "name": "BaseBdev4", 00:23:00.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.798 "is_configured": false, 00:23:00.798 "data_offset": 0, 00:23:00.798 "data_size": 0 00:23:00.798 } 00:23:00.798 ] 00:23:00.798 }' 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.798 00:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.365 00:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:01.365 [2024-07-25 00:06:57.166764] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:01.365 [2024-07-25 00:06:57.166907] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:23:01.365 [2024-07-25 00:06:57.166919] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:01.365 [2024-07-25 00:06:57.167039] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:23:01.365 [2024-07-25 00:06:57.167436] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x516000007280 00:23:01.365 [2024-07-25 00:06:57.167468] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:23:01.365 [2024-07-25 00:06:57.167730] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.365 BaseBdev4 00:23:01.365 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:01.365 00:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:01.365 00:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:01.365 00:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:01.365 00:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:01.365 00:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:01.365 00:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:01.624 00:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:01.883 [ 00:23:01.883 { 00:23:01.883 "name": "BaseBdev4", 00:23:01.883 "aliases": [ 00:23:01.883 "7bbe5721-477f-4570-953b-d175fc9469d3" 00:23:01.883 ], 00:23:01.883 "product_name": "Malloc disk", 00:23:01.883 "block_size": 512, 00:23:01.883 "num_blocks": 65536, 00:23:01.883 "uuid": "7bbe5721-477f-4570-953b-d175fc9469d3", 00:23:01.883 "assigned_rate_limits": { 00:23:01.883 "rw_ios_per_sec": 0, 00:23:01.883 "rw_mbytes_per_sec": 0, 00:23:01.883 "r_mbytes_per_sec": 0, 00:23:01.883 "w_mbytes_per_sec": 0 00:23:01.883 }, 00:23:01.883 "claimed": true, 00:23:01.883 "claim_type": "exclusive_write", 00:23:01.883 "zoned": false, 00:23:01.883 "supported_io_types": { 00:23:01.883 "read": true, 00:23:01.883 "write": true, 00:23:01.883 "unmap": true, 00:23:01.883 "flush": true, 00:23:01.883 "reset": true, 00:23:01.883 "nvme_admin": false, 00:23:01.883 "nvme_io": false, 00:23:01.883 "nvme_io_md": false, 00:23:01.883 "write_zeroes": true, 00:23:01.883 "zcopy": true, 00:23:01.883 "get_zone_info": false, 00:23:01.883 "zone_management": false, 00:23:01.883 "zone_append": false, 00:23:01.883 "compare": false, 00:23:01.883 "compare_and_write": false, 00:23:01.883 "abort": true, 00:23:01.883 "seek_hole": false, 00:23:01.883 "seek_data": false, 00:23:01.883 "copy": true, 00:23:01.883 "nvme_iov_md": false 00:23:01.883 }, 00:23:01.883 "memory_domains": [ 00:23:01.883 { 00:23:01.883 "dma_device_id": "system", 00:23:01.883 "dma_device_type": 1 00:23:01.883 }, 00:23:01.883 { 00:23:01.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.883 "dma_device_type": 2 00:23:01.883 } 00:23:01.883 ], 00:23:01.883 "driver_specific": {} 00:23:01.883 } 00:23:01.883 ] 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.883 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.142 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.142 "name": "Existed_Raid", 00:23:02.142 "uuid": "b817f2eb-0ed4-406b-b3cc-e8853e94a0bd", 00:23:02.142 "strip_size_kb": 0, 00:23:02.142 "state": "online", 00:23:02.142 "raid_level": "raid1", 00:23:02.142 "superblock": false, 00:23:02.142 "num_base_bdevs": 4, 00:23:02.142 "num_base_bdevs_discovered": 4, 00:23:02.142 "num_base_bdevs_operational": 4, 00:23:02.142 "base_bdevs_list": [ 00:23:02.142 { 00:23:02.142 "name": "BaseBdev1", 00:23:02.142 "uuid": "8855da8f-a726-4a90-8e57-303e6d0745d2", 00:23:02.142 "is_configured": true, 00:23:02.142 "data_offset": 0, 00:23:02.142 "data_size": 65536 00:23:02.142 }, 00:23:02.142 { 00:23:02.142 "name": "BaseBdev2", 00:23:02.142 "uuid": "7bbedd94-a76d-4f87-b7bd-2c71986e2a9b", 00:23:02.142 "is_configured": true, 00:23:02.142 "data_offset": 0, 00:23:02.142 "data_size": 65536 00:23:02.142 }, 00:23:02.142 { 00:23:02.142 "name": "BaseBdev3", 00:23:02.142 "uuid": "b9266932-1ebc-40e1-a80e-c3b5671c8e64", 00:23:02.142 "is_configured": true, 00:23:02.142 "data_offset": 0, 00:23:02.142 "data_size": 65536 00:23:02.142 }, 00:23:02.142 { 00:23:02.142 "name": "BaseBdev4", 00:23:02.142 "uuid": "7bbe5721-477f-4570-953b-d175fc9469d3", 00:23:02.142 "is_configured": true, 00:23:02.142 "data_offset": 0, 00:23:02.142 "data_size": 65536 00:23:02.142 } 00:23:02.142 ] 00:23:02.142 }' 00:23:02.142 00:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.142 00:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.401 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:02.401 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:02.401 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:02.401 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:02.401 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:02.401 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # 
local name 00:23:02.401 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:02.401 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:02.660 [2024-07-25 00:06:58.467862] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:02.660 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:02.660 "name": "Existed_Raid", 00:23:02.660 "aliases": [ 00:23:02.660 "b817f2eb-0ed4-406b-b3cc-e8853e94a0bd" 00:23:02.660 ], 00:23:02.660 "product_name": "Raid Volume", 00:23:02.660 "block_size": 512, 00:23:02.660 "num_blocks": 65536, 00:23:02.660 "uuid": "b817f2eb-0ed4-406b-b3cc-e8853e94a0bd", 00:23:02.660 "assigned_rate_limits": { 00:23:02.660 "rw_ios_per_sec": 0, 00:23:02.660 "rw_mbytes_per_sec": 0, 00:23:02.660 "r_mbytes_per_sec": 0, 00:23:02.660 "w_mbytes_per_sec": 0 00:23:02.660 }, 00:23:02.660 "claimed": false, 00:23:02.660 "zoned": false, 00:23:02.660 "supported_io_types": { 00:23:02.660 "read": true, 00:23:02.660 "write": true, 00:23:02.660 "unmap": false, 00:23:02.660 "flush": false, 00:23:02.660 "reset": true, 00:23:02.660 "nvme_admin": false, 00:23:02.660 "nvme_io": false, 00:23:02.660 "nvme_io_md": false, 00:23:02.660 "write_zeroes": true, 00:23:02.660 "zcopy": false, 00:23:02.660 "get_zone_info": false, 00:23:02.660 "zone_management": false, 00:23:02.660 "zone_append": false, 00:23:02.660 "compare": false, 00:23:02.660 "compare_and_write": false, 00:23:02.660 "abort": false, 00:23:02.660 "seek_hole": false, 00:23:02.660 "seek_data": false, 00:23:02.660 "copy": false, 00:23:02.660 "nvme_iov_md": false 00:23:02.660 }, 00:23:02.660 "memory_domains": [ 00:23:02.660 { 00:23:02.660 "dma_device_id": "system", 00:23:02.660 "dma_device_type": 1 00:23:02.660 }, 00:23:02.660 { 00:23:02.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.660 "dma_device_type": 2 00:23:02.660 }, 00:23:02.660 { 00:23:02.660 "dma_device_id": "system", 00:23:02.660 "dma_device_type": 1 00:23:02.660 }, 00:23:02.660 { 00:23:02.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.660 "dma_device_type": 2 00:23:02.660 }, 00:23:02.660 { 00:23:02.660 "dma_device_id": "system", 00:23:02.660 "dma_device_type": 1 00:23:02.660 }, 00:23:02.660 { 00:23:02.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.660 "dma_device_type": 2 00:23:02.660 }, 00:23:02.660 { 00:23:02.660 "dma_device_id": "system", 00:23:02.660 "dma_device_type": 1 00:23:02.660 }, 00:23:02.660 { 00:23:02.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.660 "dma_device_type": 2 00:23:02.660 } 00:23:02.660 ], 00:23:02.660 "driver_specific": { 00:23:02.660 "raid": { 00:23:02.660 "uuid": "b817f2eb-0ed4-406b-b3cc-e8853e94a0bd", 00:23:02.660 "strip_size_kb": 0, 00:23:02.660 "state": "online", 00:23:02.660 "raid_level": "raid1", 00:23:02.660 "superblock": false, 00:23:02.660 "num_base_bdevs": 4, 00:23:02.660 "num_base_bdevs_discovered": 4, 00:23:02.661 "num_base_bdevs_operational": 4, 00:23:02.661 "base_bdevs_list": [ 00:23:02.661 { 00:23:02.661 "name": "BaseBdev1", 00:23:02.661 "uuid": "8855da8f-a726-4a90-8e57-303e6d0745d2", 00:23:02.661 "is_configured": true, 00:23:02.661 "data_offset": 0, 00:23:02.661 "data_size": 65536 00:23:02.661 }, 00:23:02.661 { 00:23:02.661 "name": "BaseBdev2", 00:23:02.661 "uuid": "7bbedd94-a76d-4f87-b7bd-2c71986e2a9b", 00:23:02.661 "is_configured": true, 00:23:02.661 "data_offset": 0, 00:23:02.661 
"data_size": 65536 00:23:02.661 }, 00:23:02.661 { 00:23:02.661 "name": "BaseBdev3", 00:23:02.661 "uuid": "b9266932-1ebc-40e1-a80e-c3b5671c8e64", 00:23:02.661 "is_configured": true, 00:23:02.661 "data_offset": 0, 00:23:02.661 "data_size": 65536 00:23:02.661 }, 00:23:02.661 { 00:23:02.661 "name": "BaseBdev4", 00:23:02.661 "uuid": "7bbe5721-477f-4570-953b-d175fc9469d3", 00:23:02.661 "is_configured": true, 00:23:02.661 "data_offset": 0, 00:23:02.661 "data_size": 65536 00:23:02.661 } 00:23:02.661 ] 00:23:02.661 } 00:23:02.661 } 00:23:02.661 }' 00:23:02.661 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:02.661 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:02.661 BaseBdev2 00:23:02.661 BaseBdev3 00:23:02.661 BaseBdev4' 00:23:02.661 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:02.661 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:02.661 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:02.919 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:02.919 "name": "BaseBdev1", 00:23:02.919 "aliases": [ 00:23:02.919 "8855da8f-a726-4a90-8e57-303e6d0745d2" 00:23:02.919 ], 00:23:02.919 "product_name": "Malloc disk", 00:23:02.919 "block_size": 512, 00:23:02.919 "num_blocks": 65536, 00:23:02.919 "uuid": "8855da8f-a726-4a90-8e57-303e6d0745d2", 00:23:02.919 "assigned_rate_limits": { 00:23:02.919 "rw_ios_per_sec": 0, 00:23:02.919 "rw_mbytes_per_sec": 0, 00:23:02.919 "r_mbytes_per_sec": 0, 00:23:02.919 "w_mbytes_per_sec": 0 00:23:02.919 }, 00:23:02.919 "claimed": true, 00:23:02.919 "claim_type": "exclusive_write", 00:23:02.919 "zoned": false, 00:23:02.919 "supported_io_types": { 00:23:02.919 "read": true, 00:23:02.919 "write": true, 00:23:02.919 "unmap": true, 00:23:02.919 "flush": true, 00:23:02.920 "reset": true, 00:23:02.920 "nvme_admin": false, 00:23:02.920 "nvme_io": false, 00:23:02.920 "nvme_io_md": false, 00:23:02.920 "write_zeroes": true, 00:23:02.920 "zcopy": true, 00:23:02.920 "get_zone_info": false, 00:23:02.920 "zone_management": false, 00:23:02.920 "zone_append": false, 00:23:02.920 "compare": false, 00:23:02.920 "compare_and_write": false, 00:23:02.920 "abort": true, 00:23:02.920 "seek_hole": false, 00:23:02.920 "seek_data": false, 00:23:02.920 "copy": true, 00:23:02.920 "nvme_iov_md": false 00:23:02.920 }, 00:23:02.920 "memory_domains": [ 00:23:02.920 { 00:23:02.920 "dma_device_id": "system", 00:23:02.920 "dma_device_type": 1 00:23:02.920 }, 00:23:02.920 { 00:23:02.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.920 "dma_device_type": 2 00:23:02.920 } 00:23:02.920 ], 00:23:02.920 "driver_specific": {} 00:23:02.920 }' 00:23:02.920 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.920 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.920 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:02.920 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.920 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.920 00:06:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:02.920 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.920 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.920 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.178 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.178 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.178 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.178 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:03.178 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:03.178 00:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:03.437 "name": "BaseBdev2", 00:23:03.437 "aliases": [ 00:23:03.437 "7bbedd94-a76d-4f87-b7bd-2c71986e2a9b" 00:23:03.437 ], 00:23:03.437 "product_name": "Malloc disk", 00:23:03.437 "block_size": 512, 00:23:03.437 "num_blocks": 65536, 00:23:03.437 "uuid": "7bbedd94-a76d-4f87-b7bd-2c71986e2a9b", 00:23:03.437 "assigned_rate_limits": { 00:23:03.437 "rw_ios_per_sec": 0, 00:23:03.437 "rw_mbytes_per_sec": 0, 00:23:03.437 "r_mbytes_per_sec": 0, 00:23:03.437 "w_mbytes_per_sec": 0 00:23:03.437 }, 00:23:03.437 "claimed": true, 00:23:03.437 "claim_type": "exclusive_write", 00:23:03.437 "zoned": false, 00:23:03.437 "supported_io_types": { 00:23:03.437 "read": true, 00:23:03.437 "write": true, 00:23:03.437 "unmap": true, 00:23:03.437 "flush": true, 00:23:03.437 "reset": true, 00:23:03.437 "nvme_admin": false, 00:23:03.437 "nvme_io": false, 00:23:03.437 "nvme_io_md": false, 00:23:03.437 "write_zeroes": true, 00:23:03.437 "zcopy": true, 00:23:03.437 "get_zone_info": false, 00:23:03.437 "zone_management": false, 00:23:03.437 "zone_append": false, 00:23:03.437 "compare": false, 00:23:03.437 "compare_and_write": false, 00:23:03.437 "abort": true, 00:23:03.437 "seek_hole": false, 00:23:03.437 "seek_data": false, 00:23:03.437 "copy": true, 00:23:03.437 "nvme_iov_md": false 00:23:03.437 }, 00:23:03.437 "memory_domains": [ 00:23:03.437 { 00:23:03.437 "dma_device_id": "system", 00:23:03.437 "dma_device_type": 1 00:23:03.437 }, 00:23:03.437 { 00:23:03.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.437 "dma_device_type": 2 00:23:03.437 } 00:23:03.437 ], 00:23:03.437 "driver_specific": {} 00:23:03.437 }' 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
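For reference, the property pass being replayed here (the verify_raid_bdev_properties helper, bdev_raid.sh@194-208) boils down to the following shell loop. This is a sketch reconstructed from the xtrace above, not the verbatim test script; the rpc.py path and socket are taken from the log, while the surrounding variable handling is assumed:

# Reconstructed sketch: fetch the raid volume once, then check every
# configured base bdev against it, mirroring the jq probes in the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
raid_bdev_info=$("$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid | jq '.[]')
base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_bdev_info")
for name in $base_bdev_names; do
    base_bdev_info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
    # Block size must match the raid volume (the [[ 512 == 512 ]] checks);
    # Malloc base bdevs report no metadata or DIF, hence null == null.
    [[ $(jq .block_size <<< "$base_bdev_info") == "$(jq .block_size <<< "$raid_bdev_info")" ]]
    [[ $(jq .md_size <<< "$base_bdev_info") == null ]]
    [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
    [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]
done

The trace now continues with the same four probes for each remaining base bdev: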
00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:03.437 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:03.695 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:03.695 "name": "BaseBdev3", 00:23:03.695 "aliases": [ 00:23:03.695 "b9266932-1ebc-40e1-a80e-c3b5671c8e64" 00:23:03.695 ], 00:23:03.695 "product_name": "Malloc disk", 00:23:03.695 "block_size": 512, 00:23:03.695 "num_blocks": 65536, 00:23:03.695 "uuid": "b9266932-1ebc-40e1-a80e-c3b5671c8e64", 00:23:03.695 "assigned_rate_limits": { 00:23:03.695 "rw_ios_per_sec": 0, 00:23:03.695 "rw_mbytes_per_sec": 0, 00:23:03.695 "r_mbytes_per_sec": 0, 00:23:03.695 "w_mbytes_per_sec": 0 00:23:03.695 }, 00:23:03.695 "claimed": true, 00:23:03.695 "claim_type": "exclusive_write", 00:23:03.695 "zoned": false, 00:23:03.695 "supported_io_types": { 00:23:03.696 "read": true, 00:23:03.696 "write": true, 00:23:03.696 "unmap": true, 00:23:03.696 "flush": true, 00:23:03.696 "reset": true, 00:23:03.696 "nvme_admin": false, 00:23:03.696 "nvme_io": false, 00:23:03.696 "nvme_io_md": false, 00:23:03.696 "write_zeroes": true, 00:23:03.696 "zcopy": true, 00:23:03.696 "get_zone_info": false, 00:23:03.696 "zone_management": false, 00:23:03.696 "zone_append": false, 00:23:03.696 "compare": false, 00:23:03.696 "compare_and_write": false, 00:23:03.696 "abort": true, 00:23:03.696 "seek_hole": false, 00:23:03.696 "seek_data": false, 00:23:03.696 "copy": true, 00:23:03.696 "nvme_iov_md": false 00:23:03.696 }, 00:23:03.696 "memory_domains": [ 00:23:03.696 { 00:23:03.696 "dma_device_id": "system", 00:23:03.696 "dma_device_type": 1 00:23:03.696 }, 00:23:03.696 { 00:23:03.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.696 "dma_device_type": 2 00:23:03.696 } 00:23:03.696 ], 00:23:03.696 "driver_specific": {} 00:23:03.696 }' 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:03.696 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:03.955 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:03.955 "name": "BaseBdev4", 00:23:03.955 "aliases": [ 00:23:03.955 "7bbe5721-477f-4570-953b-d175fc9469d3" 00:23:03.955 ], 00:23:03.955 "product_name": "Malloc disk", 00:23:03.955 "block_size": 512, 00:23:03.955 "num_blocks": 65536, 00:23:03.955 "uuid": "7bbe5721-477f-4570-953b-d175fc9469d3", 00:23:03.955 "assigned_rate_limits": { 00:23:03.955 "rw_ios_per_sec": 0, 00:23:03.955 "rw_mbytes_per_sec": 0, 00:23:03.955 "r_mbytes_per_sec": 0, 00:23:03.955 "w_mbytes_per_sec": 0 00:23:03.955 }, 00:23:03.955 "claimed": true, 00:23:03.955 "claim_type": "exclusive_write", 00:23:03.955 "zoned": false, 00:23:03.955 "supported_io_types": { 00:23:03.955 "read": true, 00:23:03.955 "write": true, 00:23:03.955 "unmap": true, 00:23:03.955 "flush": true, 00:23:03.955 "reset": true, 00:23:03.955 "nvme_admin": false, 00:23:03.955 "nvme_io": false, 00:23:03.955 "nvme_io_md": false, 00:23:03.955 "write_zeroes": true, 00:23:03.955 "zcopy": true, 00:23:03.955 "get_zone_info": false, 00:23:03.955 "zone_management": false, 00:23:03.955 "zone_append": false, 00:23:03.955 "compare": false, 00:23:03.955 "compare_and_write": false, 00:23:03.955 "abort": true, 00:23:03.955 "seek_hole": false, 00:23:03.955 "seek_data": false, 00:23:03.955 "copy": true, 00:23:03.955 "nvme_iov_md": false 00:23:03.955 }, 00:23:03.955 "memory_domains": [ 00:23:03.955 { 00:23:03.955 "dma_device_id": "system", 00:23:03.955 "dma_device_type": 1 00:23:03.955 }, 00:23:03.955 { 00:23:03.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.955 "dma_device_type": 2 00:23:03.955 } 00:23:03.955 ], 00:23:03.955 "driver_specific": {} 00:23:03.955 }' 00:23:03.955 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.955 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.955 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.955 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.955 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.955 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.955 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:04.214 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:04.214 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:04.214 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:04.214 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:23:04.214 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:04.214 00:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:04.473 [2024-07-25 00:07:00.123934] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.473 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.731 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:04.731 "name": "Existed_Raid", 00:23:04.731 "uuid": "b817f2eb-0ed4-406b-b3cc-e8853e94a0bd", 00:23:04.731 "strip_size_kb": 0, 00:23:04.731 "state": "online", 00:23:04.731 "raid_level": "raid1", 00:23:04.731 "superblock": false, 00:23:04.731 "num_base_bdevs": 4, 00:23:04.731 "num_base_bdevs_discovered": 3, 00:23:04.732 "num_base_bdevs_operational": 3, 00:23:04.732 "base_bdevs_list": [ 00:23:04.732 { 00:23:04.732 "name": null, 00:23:04.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.732 "is_configured": false, 00:23:04.732 "data_offset": 0, 00:23:04.732 "data_size": 65536 00:23:04.732 }, 00:23:04.732 { 00:23:04.732 "name": "BaseBdev2", 00:23:04.732 "uuid": "7bbedd94-a76d-4f87-b7bd-2c71986e2a9b", 00:23:04.732 "is_configured": true, 00:23:04.732 "data_offset": 0, 00:23:04.732 "data_size": 65536 00:23:04.732 }, 00:23:04.732 { 00:23:04.732 "name": "BaseBdev3", 00:23:04.732 "uuid": "b9266932-1ebc-40e1-a80e-c3b5671c8e64", 00:23:04.732 "is_configured": true, 00:23:04.732 "data_offset": 0, 00:23:04.732 "data_size": 65536 00:23:04.732 
}, 00:23:04.732 { 00:23:04.732 "name": "BaseBdev4", 00:23:04.732 "uuid": "7bbe5721-477f-4570-953b-d175fc9469d3", 00:23:04.732 "is_configured": true, 00:23:04.732 "data_offset": 0, 00:23:04.732 "data_size": 65536 00:23:04.732 } 00:23:04.732 ] 00:23:04.732 }' 00:23:04.732 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:04.732 00:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.990 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:04.990 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:04.990 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.990 00:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:05.560 00:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:05.560 00:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:05.560 00:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:05.560 [2024-07-25 00:07:01.364960] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:05.817 00:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:05.817 00:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:05.817 00:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.817 00:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:06.076 00:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:06.076 00:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:06.076 00:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:06.076 [2024-07-25 00:07:01.915813] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:06.334 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:06.334 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:06.334 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.334 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:06.593 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:06.593 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:06.593 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:06.593 [2024-07-25 00:07:02.428194] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 
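The stretch just replayed (bdev_raid.sh@274 deleting BaseBdev1, then the loop at @285-291) is the redundancy half of this test: has_redundancy returns 0 for raid1, so the array must stay online and keep answering to its name while the surviving members are deleted one at a time; only the last removal may take it offline, as the state-change DEBUG lines that follow confirm. A sketch of that loop, reconstructed from the xtrace with rpc/sock as in the earlier sketch and the loop bounds and error handling assumed:

# Delete the surviving base bdevs one by one; before each deletion the raid
# bdev must still be reported by bdev_raid_get_bdevs under its original name.
num_base_bdevs=4
for (( i = 1; i < num_base_bdevs; i++ )); do
    raid_bdev=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
    [ "$raid_bdev" != 'Existed_Raid' ] && exit 1
    "$rpc" -s "$sock" bdev_malloc_delete "BaseBdev$((i + 1))"
done

With BaseBdev4 gone, the raid deconfigures: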
00:23:06.593 [2024-07-25 00:07:02.428301] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:06.851 [2024-07-25 00:07:02.499334] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:06.851 [2024-07-25 00:07:02.499386] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:06.851 [2024-07-25 00:07:02.499403] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:23:06.851 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:06.851 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:06.851 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.851 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:07.109 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:07.109 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:07.109 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:07.109 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:07.109 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:07.109 00:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:07.366 BaseBdev2 00:23:07.366 00:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:07.366 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:07.366 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:07.366 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:07.366 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:07.366 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:07.366 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:07.623 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:07.623 [ 00:23:07.623 { 00:23:07.623 "name": "BaseBdev2", 00:23:07.623 "aliases": [ 00:23:07.623 "6d33cdd5-593a-4144-a954-7f30d5983d6b" 00:23:07.623 ], 00:23:07.623 "product_name": "Malloc disk", 00:23:07.623 "block_size": 512, 00:23:07.623 "num_blocks": 65536, 00:23:07.623 "uuid": "6d33cdd5-593a-4144-a954-7f30d5983d6b", 00:23:07.623 "assigned_rate_limits": { 00:23:07.623 "rw_ios_per_sec": 0, 00:23:07.623 "rw_mbytes_per_sec": 0, 00:23:07.623 "r_mbytes_per_sec": 0, 00:23:07.623 "w_mbytes_per_sec": 0 00:23:07.623 }, 00:23:07.623 "claimed": false, 00:23:07.623 "zoned": false, 00:23:07.623 "supported_io_types": { 00:23:07.623 "read": true, 00:23:07.623 "write": true, 00:23:07.623 
"unmap": true, 00:23:07.623 "flush": true, 00:23:07.623 "reset": true, 00:23:07.623 "nvme_admin": false, 00:23:07.623 "nvme_io": false, 00:23:07.623 "nvme_io_md": false, 00:23:07.623 "write_zeroes": true, 00:23:07.623 "zcopy": true, 00:23:07.623 "get_zone_info": false, 00:23:07.623 "zone_management": false, 00:23:07.623 "zone_append": false, 00:23:07.623 "compare": false, 00:23:07.623 "compare_and_write": false, 00:23:07.623 "abort": true, 00:23:07.623 "seek_hole": false, 00:23:07.623 "seek_data": false, 00:23:07.623 "copy": true, 00:23:07.623 "nvme_iov_md": false 00:23:07.623 }, 00:23:07.623 "memory_domains": [ 00:23:07.623 { 00:23:07.623 "dma_device_id": "system", 00:23:07.623 "dma_device_type": 1 00:23:07.623 }, 00:23:07.623 { 00:23:07.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.623 "dma_device_type": 2 00:23:07.623 } 00:23:07.623 ], 00:23:07.623 "driver_specific": {} 00:23:07.623 } 00:23:07.623 ] 00:23:07.623 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:07.623 00:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:07.623 00:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:07.623 00:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:07.881 BaseBdev3 00:23:07.881 00:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:07.881 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:07.881 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:07.881 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:07.881 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:07.881 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:07.881 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:08.139 00:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:08.396 [ 00:23:08.396 { 00:23:08.396 "name": "BaseBdev3", 00:23:08.396 "aliases": [ 00:23:08.396 "7fa36093-2fda-4f88-8817-6473c8b6ed7a" 00:23:08.396 ], 00:23:08.396 "product_name": "Malloc disk", 00:23:08.396 "block_size": 512, 00:23:08.396 "num_blocks": 65536, 00:23:08.396 "uuid": "7fa36093-2fda-4f88-8817-6473c8b6ed7a", 00:23:08.396 "assigned_rate_limits": { 00:23:08.396 "rw_ios_per_sec": 0, 00:23:08.396 "rw_mbytes_per_sec": 0, 00:23:08.396 "r_mbytes_per_sec": 0, 00:23:08.396 "w_mbytes_per_sec": 0 00:23:08.396 }, 00:23:08.396 "claimed": false, 00:23:08.396 "zoned": false, 00:23:08.396 "supported_io_types": { 00:23:08.396 "read": true, 00:23:08.396 "write": true, 00:23:08.396 "unmap": true, 00:23:08.396 "flush": true, 00:23:08.396 "reset": true, 00:23:08.396 "nvme_admin": false, 00:23:08.396 "nvme_io": false, 00:23:08.396 "nvme_io_md": false, 00:23:08.396 "write_zeroes": true, 00:23:08.396 "zcopy": true, 00:23:08.396 "get_zone_info": false, 00:23:08.396 "zone_management": false, 00:23:08.396 "zone_append": false, 
00:23:08.396 "compare": false, 00:23:08.396 "compare_and_write": false, 00:23:08.396 "abort": true, 00:23:08.396 "seek_hole": false, 00:23:08.396 "seek_data": false, 00:23:08.397 "copy": true, 00:23:08.397 "nvme_iov_md": false 00:23:08.397 }, 00:23:08.397 "memory_domains": [ 00:23:08.397 { 00:23:08.397 "dma_device_id": "system", 00:23:08.397 "dma_device_type": 1 00:23:08.397 }, 00:23:08.397 { 00:23:08.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.397 "dma_device_type": 2 00:23:08.397 } 00:23:08.397 ], 00:23:08.397 "driver_specific": {} 00:23:08.397 } 00:23:08.397 ] 00:23:08.397 00:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:08.397 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:08.397 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:08.397 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:08.654 BaseBdev4 00:23:08.654 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:08.654 00:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:08.654 00:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:08.654 00:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:08.654 00:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:08.654 00:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:08.654 00:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:08.654 00:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:08.912 [ 00:23:08.912 { 00:23:08.912 "name": "BaseBdev4", 00:23:08.912 "aliases": [ 00:23:08.912 "974e268d-a69b-4805-9e4a-7a3c0e988a7c" 00:23:08.912 ], 00:23:08.912 "product_name": "Malloc disk", 00:23:08.912 "block_size": 512, 00:23:08.912 "num_blocks": 65536, 00:23:08.912 "uuid": "974e268d-a69b-4805-9e4a-7a3c0e988a7c", 00:23:08.912 "assigned_rate_limits": { 00:23:08.913 "rw_ios_per_sec": 0, 00:23:08.913 "rw_mbytes_per_sec": 0, 00:23:08.913 "r_mbytes_per_sec": 0, 00:23:08.913 "w_mbytes_per_sec": 0 00:23:08.913 }, 00:23:08.913 "claimed": false, 00:23:08.913 "zoned": false, 00:23:08.913 "supported_io_types": { 00:23:08.913 "read": true, 00:23:08.913 "write": true, 00:23:08.913 "unmap": true, 00:23:08.913 "flush": true, 00:23:08.913 "reset": true, 00:23:08.913 "nvme_admin": false, 00:23:08.913 "nvme_io": false, 00:23:08.913 "nvme_io_md": false, 00:23:08.913 "write_zeroes": true, 00:23:08.913 "zcopy": true, 00:23:08.913 "get_zone_info": false, 00:23:08.913 "zone_management": false, 00:23:08.913 "zone_append": false, 00:23:08.913 "compare": false, 00:23:08.913 "compare_and_write": false, 00:23:08.913 "abort": true, 00:23:08.913 "seek_hole": false, 00:23:08.913 "seek_data": false, 00:23:08.913 "copy": true, 00:23:08.913 "nvme_iov_md": false 00:23:08.913 }, 00:23:08.913 "memory_domains": [ 00:23:08.913 { 00:23:08.913 "dma_device_id": "system", 00:23:08.913 
"dma_device_type": 1 00:23:08.913 }, 00:23:08.913 { 00:23:08.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.913 "dma_device_type": 2 00:23:08.913 } 00:23:08.913 ], 00:23:08.913 "driver_specific": {} 00:23:08.913 } 00:23:08.913 ] 00:23:08.913 00:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:08.913 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:08.913 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:08.913 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:09.171 [2024-07-25 00:07:04.903578] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:09.171 [2024-07-25 00:07:04.903650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:09.171 [2024-07-25 00:07:04.903675] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:09.171 [2024-07-25 00:07:04.905661] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:09.171 [2024-07-25 00:07:04.905715] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.171 00:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.429 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.429 "name": "Existed_Raid", 00:23:09.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.429 "strip_size_kb": 0, 00:23:09.429 "state": "configuring", 00:23:09.429 "raid_level": "raid1", 00:23:09.429 "superblock": false, 00:23:09.429 "num_base_bdevs": 4, 00:23:09.429 "num_base_bdevs_discovered": 3, 00:23:09.429 "num_base_bdevs_operational": 4, 00:23:09.429 "base_bdevs_list": [ 00:23:09.429 { 00:23:09.429 "name": "BaseBdev1", 00:23:09.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.430 "is_configured": false, 
00:23:09.430 "data_offset": 0, 00:23:09.430 "data_size": 0 00:23:09.430 }, 00:23:09.430 { 00:23:09.430 "name": "BaseBdev2", 00:23:09.430 "uuid": "6d33cdd5-593a-4144-a954-7f30d5983d6b", 00:23:09.430 "is_configured": true, 00:23:09.430 "data_offset": 0, 00:23:09.430 "data_size": 65536 00:23:09.430 }, 00:23:09.430 { 00:23:09.430 "name": "BaseBdev3", 00:23:09.430 "uuid": "7fa36093-2fda-4f88-8817-6473c8b6ed7a", 00:23:09.430 "is_configured": true, 00:23:09.430 "data_offset": 0, 00:23:09.430 "data_size": 65536 00:23:09.430 }, 00:23:09.430 { 00:23:09.430 "name": "BaseBdev4", 00:23:09.430 "uuid": "974e268d-a69b-4805-9e4a-7a3c0e988a7c", 00:23:09.430 "is_configured": true, 00:23:09.430 "data_offset": 0, 00:23:09.430 "data_size": 65536 00:23:09.430 } 00:23:09.430 ] 00:23:09.430 }' 00:23:09.430 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.430 00:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.688 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:09.945 [2024-07-25 00:07:05.723754] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.945 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.204 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:10.204 "name": "Existed_Raid", 00:23:10.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.204 "strip_size_kb": 0, 00:23:10.204 "state": "configuring", 00:23:10.204 "raid_level": "raid1", 00:23:10.204 "superblock": false, 00:23:10.204 "num_base_bdevs": 4, 00:23:10.204 "num_base_bdevs_discovered": 2, 00:23:10.204 "num_base_bdevs_operational": 4, 00:23:10.204 "base_bdevs_list": [ 00:23:10.204 { 00:23:10.204 "name": "BaseBdev1", 00:23:10.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.204 "is_configured": false, 00:23:10.204 "data_offset": 0, 00:23:10.204 "data_size": 0 00:23:10.204 }, 00:23:10.204 { 00:23:10.204 "name": null, 00:23:10.204 "uuid": 
"6d33cdd5-593a-4144-a954-7f30d5983d6b", 00:23:10.204 "is_configured": false, 00:23:10.204 "data_offset": 0, 00:23:10.204 "data_size": 65536 00:23:10.204 }, 00:23:10.204 { 00:23:10.204 "name": "BaseBdev3", 00:23:10.204 "uuid": "7fa36093-2fda-4f88-8817-6473c8b6ed7a", 00:23:10.204 "is_configured": true, 00:23:10.204 "data_offset": 0, 00:23:10.204 "data_size": 65536 00:23:10.204 }, 00:23:10.204 { 00:23:10.204 "name": "BaseBdev4", 00:23:10.204 "uuid": "974e268d-a69b-4805-9e4a-7a3c0e988a7c", 00:23:10.204 "is_configured": true, 00:23:10.204 "data_offset": 0, 00:23:10.204 "data_size": 65536 00:23:10.204 } 00:23:10.204 ] 00:23:10.204 }' 00:23:10.204 00:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:10.204 00:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.462 00:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:10.462 00:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.720 00:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:10.720 00:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:10.978 [2024-07-25 00:07:06.794041] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:10.978 BaseBdev1 00:23:10.978 00:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:10.978 00:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:10.978 00:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:10.978 00:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:10.978 00:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:10.978 00:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:10.978 00:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:11.235 00:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:11.493 [ 00:23:11.493 { 00:23:11.493 "name": "BaseBdev1", 00:23:11.493 "aliases": [ 00:23:11.493 "b1e6c3fb-0d48-499e-b923-20d1af212b4e" 00:23:11.493 ], 00:23:11.493 "product_name": "Malloc disk", 00:23:11.493 "block_size": 512, 00:23:11.493 "num_blocks": 65536, 00:23:11.493 "uuid": "b1e6c3fb-0d48-499e-b923-20d1af212b4e", 00:23:11.493 "assigned_rate_limits": { 00:23:11.493 "rw_ios_per_sec": 0, 00:23:11.493 "rw_mbytes_per_sec": 0, 00:23:11.493 "r_mbytes_per_sec": 0, 00:23:11.493 "w_mbytes_per_sec": 0 00:23:11.493 }, 00:23:11.493 "claimed": true, 00:23:11.493 "claim_type": "exclusive_write", 00:23:11.493 "zoned": false, 00:23:11.493 "supported_io_types": { 00:23:11.493 "read": true, 00:23:11.493 "write": true, 00:23:11.493 "unmap": true, 00:23:11.493 "flush": true, 00:23:11.493 "reset": true, 00:23:11.493 "nvme_admin": false, 00:23:11.493 "nvme_io": false, 00:23:11.493 
"nvme_io_md": false, 00:23:11.493 "write_zeroes": true, 00:23:11.493 "zcopy": true, 00:23:11.493 "get_zone_info": false, 00:23:11.493 "zone_management": false, 00:23:11.493 "zone_append": false, 00:23:11.493 "compare": false, 00:23:11.493 "compare_and_write": false, 00:23:11.493 "abort": true, 00:23:11.493 "seek_hole": false, 00:23:11.493 "seek_data": false, 00:23:11.493 "copy": true, 00:23:11.493 "nvme_iov_md": false 00:23:11.493 }, 00:23:11.493 "memory_domains": [ 00:23:11.493 { 00:23:11.493 "dma_device_id": "system", 00:23:11.493 "dma_device_type": 1 00:23:11.493 }, 00:23:11.493 { 00:23:11.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.493 "dma_device_type": 2 00:23:11.493 } 00:23:11.493 ], 00:23:11.493 "driver_specific": {} 00:23:11.493 } 00:23:11.493 ] 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.493 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.751 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.751 "name": "Existed_Raid", 00:23:11.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.751 "strip_size_kb": 0, 00:23:11.751 "state": "configuring", 00:23:11.751 "raid_level": "raid1", 00:23:11.751 "superblock": false, 00:23:11.751 "num_base_bdevs": 4, 00:23:11.751 "num_base_bdevs_discovered": 3, 00:23:11.751 "num_base_bdevs_operational": 4, 00:23:11.751 "base_bdevs_list": [ 00:23:11.751 { 00:23:11.751 "name": "BaseBdev1", 00:23:11.751 "uuid": "b1e6c3fb-0d48-499e-b923-20d1af212b4e", 00:23:11.751 "is_configured": true, 00:23:11.751 "data_offset": 0, 00:23:11.751 "data_size": 65536 00:23:11.751 }, 00:23:11.751 { 00:23:11.751 "name": null, 00:23:11.751 "uuid": "6d33cdd5-593a-4144-a954-7f30d5983d6b", 00:23:11.751 "is_configured": false, 00:23:11.751 "data_offset": 0, 00:23:11.751 "data_size": 65536 00:23:11.751 }, 00:23:11.751 { 00:23:11.751 "name": "BaseBdev3", 00:23:11.751 "uuid": "7fa36093-2fda-4f88-8817-6473c8b6ed7a", 00:23:11.751 "is_configured": true, 00:23:11.751 "data_offset": 0, 00:23:11.751 "data_size": 65536 00:23:11.751 }, 00:23:11.751 { 00:23:11.751 
"name": "BaseBdev4", 00:23:11.751 "uuid": "974e268d-a69b-4805-9e4a-7a3c0e988a7c", 00:23:11.751 "is_configured": true, 00:23:11.751 "data_offset": 0, 00:23:11.751 "data_size": 65536 00:23:11.751 } 00:23:11.751 ] 00:23:11.751 }' 00:23:11.751 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.751 00:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.010 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.010 00:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:12.268 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:12.268 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:12.526 [2024-07-25 00:07:08.282635] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.527 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:12.786 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.786 "name": "Existed_Raid", 00:23:12.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.786 "strip_size_kb": 0, 00:23:12.786 "state": "configuring", 00:23:12.786 "raid_level": "raid1", 00:23:12.786 "superblock": false, 00:23:12.786 "num_base_bdevs": 4, 00:23:12.786 "num_base_bdevs_discovered": 2, 00:23:12.786 "num_base_bdevs_operational": 4, 00:23:12.786 "base_bdevs_list": [ 00:23:12.786 { 00:23:12.786 "name": "BaseBdev1", 00:23:12.786 "uuid": "b1e6c3fb-0d48-499e-b923-20d1af212b4e", 00:23:12.786 "is_configured": true, 00:23:12.786 "data_offset": 0, 00:23:12.786 "data_size": 65536 00:23:12.786 }, 00:23:12.786 { 00:23:12.786 "name": null, 00:23:12.786 "uuid": "6d33cdd5-593a-4144-a954-7f30d5983d6b", 00:23:12.786 "is_configured": false, 00:23:12.786 "data_offset": 0, 00:23:12.786 "data_size": 65536 
00:23:12.786 }, 00:23:12.786 { 00:23:12.786 "name": null, 00:23:12.786 "uuid": "7fa36093-2fda-4f88-8817-6473c8b6ed7a", 00:23:12.786 "is_configured": false, 00:23:12.786 "data_offset": 0, 00:23:12.786 "data_size": 65536 00:23:12.786 }, 00:23:12.786 { 00:23:12.786 "name": "BaseBdev4", 00:23:12.786 "uuid": "974e268d-a69b-4805-9e4a-7a3c0e988a7c", 00:23:12.786 "is_configured": true, 00:23:12.786 "data_offset": 0, 00:23:12.786 "data_size": 65536 00:23:12.786 } 00:23:12.786 ] 00:23:12.786 }' 00:23:12.786 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.786 00:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.044 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.044 00:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:13.316 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:13.316 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:13.575 [2024-07-25 00:07:09.331107] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.575 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:13.834 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:13.834 "name": "Existed_Raid", 00:23:13.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.834 "strip_size_kb": 0, 00:23:13.834 "state": "configuring", 00:23:13.834 "raid_level": "raid1", 00:23:13.834 "superblock": false, 00:23:13.834 "num_base_bdevs": 4, 00:23:13.834 "num_base_bdevs_discovered": 3, 00:23:13.834 "num_base_bdevs_operational": 4, 00:23:13.834 "base_bdevs_list": [ 00:23:13.834 { 00:23:13.834 "name": "BaseBdev1", 00:23:13.834 "uuid": "b1e6c3fb-0d48-499e-b923-20d1af212b4e", 00:23:13.834 
"is_configured": true, 00:23:13.834 "data_offset": 0, 00:23:13.834 "data_size": 65536 00:23:13.834 }, 00:23:13.834 { 00:23:13.834 "name": null, 00:23:13.834 "uuid": "6d33cdd5-593a-4144-a954-7f30d5983d6b", 00:23:13.834 "is_configured": false, 00:23:13.834 "data_offset": 0, 00:23:13.834 "data_size": 65536 00:23:13.834 }, 00:23:13.834 { 00:23:13.834 "name": "BaseBdev3", 00:23:13.834 "uuid": "7fa36093-2fda-4f88-8817-6473c8b6ed7a", 00:23:13.834 "is_configured": true, 00:23:13.834 "data_offset": 0, 00:23:13.834 "data_size": 65536 00:23:13.834 }, 00:23:13.834 { 00:23:13.834 "name": "BaseBdev4", 00:23:13.834 "uuid": "974e268d-a69b-4805-9e4a-7a3c0e988a7c", 00:23:13.834 "is_configured": true, 00:23:13.834 "data_offset": 0, 00:23:13.834 "data_size": 65536 00:23:13.834 } 00:23:13.834 ] 00:23:13.834 }' 00:23:13.834 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:13.834 00:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.092 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.092 00:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:14.350 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:14.351 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:14.609 [2024-07-25 00:07:10.391541] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.868 "name": "Existed_Raid", 00:23:14.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.868 "strip_size_kb": 0, 00:23:14.868 "state": "configuring", 00:23:14.868 "raid_level": "raid1", 00:23:14.868 "superblock": false, 00:23:14.868 
"num_base_bdevs": 4, 00:23:14.868 "num_base_bdevs_discovered": 2, 00:23:14.868 "num_base_bdevs_operational": 4, 00:23:14.868 "base_bdevs_list": [ 00:23:14.868 { 00:23:14.868 "name": null, 00:23:14.868 "uuid": "b1e6c3fb-0d48-499e-b923-20d1af212b4e", 00:23:14.868 "is_configured": false, 00:23:14.868 "data_offset": 0, 00:23:14.868 "data_size": 65536 00:23:14.868 }, 00:23:14.868 { 00:23:14.868 "name": null, 00:23:14.868 "uuid": "6d33cdd5-593a-4144-a954-7f30d5983d6b", 00:23:14.868 "is_configured": false, 00:23:14.868 "data_offset": 0, 00:23:14.868 "data_size": 65536 00:23:14.868 }, 00:23:14.868 { 00:23:14.868 "name": "BaseBdev3", 00:23:14.868 "uuid": "7fa36093-2fda-4f88-8817-6473c8b6ed7a", 00:23:14.868 "is_configured": true, 00:23:14.868 "data_offset": 0, 00:23:14.868 "data_size": 65536 00:23:14.868 }, 00:23:14.868 { 00:23:14.868 "name": "BaseBdev4", 00:23:14.868 "uuid": "974e268d-a69b-4805-9e4a-7a3c0e988a7c", 00:23:14.868 "is_configured": true, 00:23:14.868 "data_offset": 0, 00:23:14.868 "data_size": 65536 00:23:14.868 } 00:23:14.868 ] 00:23:14.868 }' 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.868 00:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.436 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.436 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:15.436 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:15.436 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:15.695 [2024-07-25 00:07:11.442267] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.695 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.954 00:07:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:15.954 "name": "Existed_Raid", 00:23:15.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.954 "strip_size_kb": 0, 00:23:15.954 "state": "configuring", 00:23:15.954 "raid_level": "raid1", 00:23:15.954 "superblock": false, 00:23:15.954 "num_base_bdevs": 4, 00:23:15.954 "num_base_bdevs_discovered": 3, 00:23:15.954 "num_base_bdevs_operational": 4, 00:23:15.954 "base_bdevs_list": [ 00:23:15.954 { 00:23:15.954 "name": null, 00:23:15.954 "uuid": "b1e6c3fb-0d48-499e-b923-20d1af212b4e", 00:23:15.954 "is_configured": false, 00:23:15.954 "data_offset": 0, 00:23:15.954 "data_size": 65536 00:23:15.954 }, 00:23:15.954 { 00:23:15.954 "name": "BaseBdev2", 00:23:15.954 "uuid": "6d33cdd5-593a-4144-a954-7f30d5983d6b", 00:23:15.954 "is_configured": true, 00:23:15.954 "data_offset": 0, 00:23:15.954 "data_size": 65536 00:23:15.954 }, 00:23:15.954 { 00:23:15.954 "name": "BaseBdev3", 00:23:15.954 "uuid": "7fa36093-2fda-4f88-8817-6473c8b6ed7a", 00:23:15.954 "is_configured": true, 00:23:15.954 "data_offset": 0, 00:23:15.954 "data_size": 65536 00:23:15.954 }, 00:23:15.954 { 00:23:15.954 "name": "BaseBdev4", 00:23:15.954 "uuid": "974e268d-a69b-4805-9e4a-7a3c0e988a7c", 00:23:15.954 "is_configured": true, 00:23:15.954 "data_offset": 0, 00:23:15.954 "data_size": 65536 00:23:15.954 } 00:23:15.954 ] 00:23:15.954 }' 00:23:15.954 00:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:15.954 00:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.212 00:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.212 00:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:16.472 00:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:16.472 00:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.472 00:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:16.732 00:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b1e6c3fb-0d48-499e-b923-20d1af212b4e 00:23:16.991 [2024-07-25 00:07:12.812253] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:16.991 [2024-07-25 00:07:12.812302] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:23:16.991 [2024-07-25 00:07:12.812319] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:16.991 [2024-07-25 00:07:12.812469] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ee0 00:23:16.991 [2024-07-25 00:07:12.812854] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:23:16.991 [2024-07-25 00:07:12.812909] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000009380 00:23:16.991 [2024-07-25 00:07:12.813204] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.991 NewBaseBdev 00:23:16.991 00:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:23:16.991 00:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:23:16.991 00:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:16.991 00:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:16.991 00:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:16.991 00:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:16.991 00:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:17.250 00:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:17.509 [ 00:23:17.509 { 00:23:17.509 "name": "NewBaseBdev", 00:23:17.509 "aliases": [ 00:23:17.509 "b1e6c3fb-0d48-499e-b923-20d1af212b4e" 00:23:17.509 ], 00:23:17.509 "product_name": "Malloc disk", 00:23:17.509 "block_size": 512, 00:23:17.509 "num_blocks": 65536, 00:23:17.509 "uuid": "b1e6c3fb-0d48-499e-b923-20d1af212b4e", 00:23:17.509 "assigned_rate_limits": { 00:23:17.509 "rw_ios_per_sec": 0, 00:23:17.509 "rw_mbytes_per_sec": 0, 00:23:17.509 "r_mbytes_per_sec": 0, 00:23:17.509 "w_mbytes_per_sec": 0 00:23:17.509 }, 00:23:17.509 "claimed": true, 00:23:17.509 "claim_type": "exclusive_write", 00:23:17.509 "zoned": false, 00:23:17.509 "supported_io_types": { 00:23:17.509 "read": true, 00:23:17.509 "write": true, 00:23:17.509 "unmap": true, 00:23:17.509 "flush": true, 00:23:17.509 "reset": true, 00:23:17.509 "nvme_admin": false, 00:23:17.509 "nvme_io": false, 00:23:17.509 "nvme_io_md": false, 00:23:17.509 "write_zeroes": true, 00:23:17.509 "zcopy": true, 00:23:17.509 "get_zone_info": false, 00:23:17.509 "zone_management": false, 00:23:17.509 "zone_append": false, 00:23:17.509 "compare": false, 00:23:17.509 "compare_and_write": false, 00:23:17.509 "abort": true, 00:23:17.509 "seek_hole": false, 00:23:17.509 "seek_data": false, 00:23:17.509 "copy": true, 00:23:17.509 "nvme_iov_md": false 00:23:17.509 }, 00:23:17.509 "memory_domains": [ 00:23:17.509 { 00:23:17.509 "dma_device_id": "system", 00:23:17.509 "dma_device_type": 1 00:23:17.509 }, 00:23:17.509 { 00:23:17.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.509 "dma_device_type": 2 00:23:17.509 } 00:23:17.509 ], 00:23:17.509 "driver_specific": {} 00:23:17.509 } 00:23:17.509 ] 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.509 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.767 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:17.767 "name": "Existed_Raid", 00:23:17.767 "uuid": "b93a3f16-641c-4aa7-973a-74dce08ff8de", 00:23:17.767 "strip_size_kb": 0, 00:23:17.767 "state": "online", 00:23:17.767 "raid_level": "raid1", 00:23:17.767 "superblock": false, 00:23:17.767 "num_base_bdevs": 4, 00:23:17.767 "num_base_bdevs_discovered": 4, 00:23:17.767 "num_base_bdevs_operational": 4, 00:23:17.767 "base_bdevs_list": [ 00:23:17.767 { 00:23:17.767 "name": "NewBaseBdev", 00:23:17.767 "uuid": "b1e6c3fb-0d48-499e-b923-20d1af212b4e", 00:23:17.767 "is_configured": true, 00:23:17.767 "data_offset": 0, 00:23:17.767 "data_size": 65536 00:23:17.767 }, 00:23:17.767 { 00:23:17.767 "name": "BaseBdev2", 00:23:17.767 "uuid": "6d33cdd5-593a-4144-a954-7f30d5983d6b", 00:23:17.767 "is_configured": true, 00:23:17.767 "data_offset": 0, 00:23:17.767 "data_size": 65536 00:23:17.767 }, 00:23:17.767 { 00:23:17.767 "name": "BaseBdev3", 00:23:17.767 "uuid": "7fa36093-2fda-4f88-8817-6473c8b6ed7a", 00:23:17.767 "is_configured": true, 00:23:17.767 "data_offset": 0, 00:23:17.767 "data_size": 65536 00:23:17.767 }, 00:23:17.767 { 00:23:17.767 "name": "BaseBdev4", 00:23:17.767 "uuid": "974e268d-a69b-4805-9e4a-7a3c0e988a7c", 00:23:17.767 "is_configured": true, 00:23:17.767 "data_offset": 0, 00:23:17.767 "data_size": 65536 00:23:17.767 } 00:23:17.767 ] 00:23:17.767 }' 00:23:17.767 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:17.767 00:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.026 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:18.026 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:18.026 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:18.026 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:18.026 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:18.026 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:18.026 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:18.026 00:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:18.284 [2024-07-25 00:07:14.076982] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:18.284 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:18.284 "name": "Existed_Raid", 00:23:18.284 "aliases": [ 00:23:18.284 
"b93a3f16-641c-4aa7-973a-74dce08ff8de" 00:23:18.284 ], 00:23:18.284 "product_name": "Raid Volume", 00:23:18.284 "block_size": 512, 00:23:18.284 "num_blocks": 65536, 00:23:18.284 "uuid": "b93a3f16-641c-4aa7-973a-74dce08ff8de", 00:23:18.284 "assigned_rate_limits": { 00:23:18.284 "rw_ios_per_sec": 0, 00:23:18.284 "rw_mbytes_per_sec": 0, 00:23:18.284 "r_mbytes_per_sec": 0, 00:23:18.284 "w_mbytes_per_sec": 0 00:23:18.284 }, 00:23:18.284 "claimed": false, 00:23:18.284 "zoned": false, 00:23:18.284 "supported_io_types": { 00:23:18.284 "read": true, 00:23:18.284 "write": true, 00:23:18.284 "unmap": false, 00:23:18.284 "flush": false, 00:23:18.284 "reset": true, 00:23:18.284 "nvme_admin": false, 00:23:18.284 "nvme_io": false, 00:23:18.284 "nvme_io_md": false, 00:23:18.284 "write_zeroes": true, 00:23:18.284 "zcopy": false, 00:23:18.284 "get_zone_info": false, 00:23:18.284 "zone_management": false, 00:23:18.284 "zone_append": false, 00:23:18.285 "compare": false, 00:23:18.285 "compare_and_write": false, 00:23:18.285 "abort": false, 00:23:18.285 "seek_hole": false, 00:23:18.285 "seek_data": false, 00:23:18.285 "copy": false, 00:23:18.285 "nvme_iov_md": false 00:23:18.285 }, 00:23:18.285 "memory_domains": [ 00:23:18.285 { 00:23:18.285 "dma_device_id": "system", 00:23:18.285 "dma_device_type": 1 00:23:18.285 }, 00:23:18.285 { 00:23:18.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.285 "dma_device_type": 2 00:23:18.285 }, 00:23:18.285 { 00:23:18.285 "dma_device_id": "system", 00:23:18.285 "dma_device_type": 1 00:23:18.285 }, 00:23:18.285 { 00:23:18.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.285 "dma_device_type": 2 00:23:18.285 }, 00:23:18.285 { 00:23:18.285 "dma_device_id": "system", 00:23:18.285 "dma_device_type": 1 00:23:18.285 }, 00:23:18.285 { 00:23:18.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.285 "dma_device_type": 2 00:23:18.285 }, 00:23:18.285 { 00:23:18.285 "dma_device_id": "system", 00:23:18.285 "dma_device_type": 1 00:23:18.285 }, 00:23:18.285 { 00:23:18.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.285 "dma_device_type": 2 00:23:18.285 } 00:23:18.285 ], 00:23:18.285 "driver_specific": { 00:23:18.285 "raid": { 00:23:18.285 "uuid": "b93a3f16-641c-4aa7-973a-74dce08ff8de", 00:23:18.285 "strip_size_kb": 0, 00:23:18.285 "state": "online", 00:23:18.285 "raid_level": "raid1", 00:23:18.285 "superblock": false, 00:23:18.285 "num_base_bdevs": 4, 00:23:18.285 "num_base_bdevs_discovered": 4, 00:23:18.285 "num_base_bdevs_operational": 4, 00:23:18.285 "base_bdevs_list": [ 00:23:18.285 { 00:23:18.285 "name": "NewBaseBdev", 00:23:18.285 "uuid": "b1e6c3fb-0d48-499e-b923-20d1af212b4e", 00:23:18.285 "is_configured": true, 00:23:18.285 "data_offset": 0, 00:23:18.285 "data_size": 65536 00:23:18.285 }, 00:23:18.285 { 00:23:18.285 "name": "BaseBdev2", 00:23:18.285 "uuid": "6d33cdd5-593a-4144-a954-7f30d5983d6b", 00:23:18.285 "is_configured": true, 00:23:18.285 "data_offset": 0, 00:23:18.285 "data_size": 65536 00:23:18.285 }, 00:23:18.285 { 00:23:18.285 "name": "BaseBdev3", 00:23:18.285 "uuid": "7fa36093-2fda-4f88-8817-6473c8b6ed7a", 00:23:18.285 "is_configured": true, 00:23:18.285 "data_offset": 0, 00:23:18.285 "data_size": 65536 00:23:18.285 }, 00:23:18.285 { 00:23:18.285 "name": "BaseBdev4", 00:23:18.285 "uuid": "974e268d-a69b-4805-9e4a-7a3c0e988a7c", 00:23:18.285 "is_configured": true, 00:23:18.285 "data_offset": 0, 00:23:18.285 "data_size": 65536 00:23:18.285 } 00:23:18.285 ] 00:23:18.285 } 00:23:18.285 } 00:23:18.285 }' 00:23:18.285 00:07:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:18.285 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:18.285 BaseBdev2 00:23:18.285 BaseBdev3 00:23:18.285 BaseBdev4' 00:23:18.285 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:18.285 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:18.285 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:18.544 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:18.544 "name": "NewBaseBdev", 00:23:18.544 "aliases": [ 00:23:18.544 "b1e6c3fb-0d48-499e-b923-20d1af212b4e" 00:23:18.544 ], 00:23:18.544 "product_name": "Malloc disk", 00:23:18.544 "block_size": 512, 00:23:18.544 "num_blocks": 65536, 00:23:18.544 "uuid": "b1e6c3fb-0d48-499e-b923-20d1af212b4e", 00:23:18.544 "assigned_rate_limits": { 00:23:18.544 "rw_ios_per_sec": 0, 00:23:18.544 "rw_mbytes_per_sec": 0, 00:23:18.544 "r_mbytes_per_sec": 0, 00:23:18.544 "w_mbytes_per_sec": 0 00:23:18.544 }, 00:23:18.544 "claimed": true, 00:23:18.544 "claim_type": "exclusive_write", 00:23:18.544 "zoned": false, 00:23:18.544 "supported_io_types": { 00:23:18.544 "read": true, 00:23:18.544 "write": true, 00:23:18.544 "unmap": true, 00:23:18.544 "flush": true, 00:23:18.544 "reset": true, 00:23:18.544 "nvme_admin": false, 00:23:18.544 "nvme_io": false, 00:23:18.544 "nvme_io_md": false, 00:23:18.544 "write_zeroes": true, 00:23:18.544 "zcopy": true, 00:23:18.544 "get_zone_info": false, 00:23:18.544 "zone_management": false, 00:23:18.544 "zone_append": false, 00:23:18.544 "compare": false, 00:23:18.544 "compare_and_write": false, 00:23:18.544 "abort": true, 00:23:18.544 "seek_hole": false, 00:23:18.544 "seek_data": false, 00:23:18.544 "copy": true, 00:23:18.544 "nvme_iov_md": false 00:23:18.544 }, 00:23:18.544 "memory_domains": [ 00:23:18.544 { 00:23:18.544 "dma_device_id": "system", 00:23:18.544 "dma_device_type": 1 00:23:18.544 }, 00:23:18.544 { 00:23:18.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.544 "dma_device_type": 2 00:23:18.544 } 00:23:18.544 ], 00:23:18.544 "driver_specific": {} 00:23:18.544 }' 00:23:18.544 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:18.544 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:18.544 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:18.544 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:18.544 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:18.544 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:18.544 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:18.544 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:18.544 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:18.544 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:18.803 00:07:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:18.803 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:18.803 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:18.803 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:18.803 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:18.803 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:18.803 "name": "BaseBdev2", 00:23:18.803 "aliases": [ 00:23:18.803 "6d33cdd5-593a-4144-a954-7f30d5983d6b" 00:23:18.803 ], 00:23:18.803 "product_name": "Malloc disk", 00:23:18.803 "block_size": 512, 00:23:18.803 "num_blocks": 65536, 00:23:18.803 "uuid": "6d33cdd5-593a-4144-a954-7f30d5983d6b", 00:23:18.803 "assigned_rate_limits": { 00:23:18.803 "rw_ios_per_sec": 0, 00:23:18.803 "rw_mbytes_per_sec": 0, 00:23:18.803 "r_mbytes_per_sec": 0, 00:23:18.803 "w_mbytes_per_sec": 0 00:23:18.803 }, 00:23:18.803 "claimed": true, 00:23:18.803 "claim_type": "exclusive_write", 00:23:18.803 "zoned": false, 00:23:18.803 "supported_io_types": { 00:23:18.803 "read": true, 00:23:18.803 "write": true, 00:23:18.803 "unmap": true, 00:23:18.803 "flush": true, 00:23:18.803 "reset": true, 00:23:18.803 "nvme_admin": false, 00:23:18.803 "nvme_io": false, 00:23:18.803 "nvme_io_md": false, 00:23:18.803 "write_zeroes": true, 00:23:18.803 "zcopy": true, 00:23:18.803 "get_zone_info": false, 00:23:18.803 "zone_management": false, 00:23:18.803 "zone_append": false, 00:23:18.803 "compare": false, 00:23:18.803 "compare_and_write": false, 00:23:18.803 "abort": true, 00:23:18.803 "seek_hole": false, 00:23:18.803 "seek_data": false, 00:23:18.803 "copy": true, 00:23:18.803 "nvme_iov_md": false 00:23:18.803 }, 00:23:18.803 "memory_domains": [ 00:23:18.803 { 00:23:18.803 "dma_device_id": "system", 00:23:18.803 "dma_device_type": 1 00:23:18.803 }, 00:23:18.803 { 00:23:18.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.803 "dma_device_type": 2 00:23:18.803 } 00:23:18.803 ], 00:23:18.803 "driver_specific": {} 00:23:18.803 }' 00:23:18.803 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:19.061 00:07:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:19.061 00:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:19.319 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:19.319 "name": "BaseBdev3", 00:23:19.319 "aliases": [ 00:23:19.319 "7fa36093-2fda-4f88-8817-6473c8b6ed7a" 00:23:19.319 ], 00:23:19.319 "product_name": "Malloc disk", 00:23:19.319 "block_size": 512, 00:23:19.319 "num_blocks": 65536, 00:23:19.319 "uuid": "7fa36093-2fda-4f88-8817-6473c8b6ed7a", 00:23:19.319 "assigned_rate_limits": { 00:23:19.319 "rw_ios_per_sec": 0, 00:23:19.319 "rw_mbytes_per_sec": 0, 00:23:19.319 "r_mbytes_per_sec": 0, 00:23:19.319 "w_mbytes_per_sec": 0 00:23:19.319 }, 00:23:19.319 "claimed": true, 00:23:19.319 "claim_type": "exclusive_write", 00:23:19.319 "zoned": false, 00:23:19.319 "supported_io_types": { 00:23:19.319 "read": true, 00:23:19.319 "write": true, 00:23:19.319 "unmap": true, 00:23:19.319 "flush": true, 00:23:19.319 "reset": true, 00:23:19.319 "nvme_admin": false, 00:23:19.319 "nvme_io": false, 00:23:19.319 "nvme_io_md": false, 00:23:19.319 "write_zeroes": true, 00:23:19.319 "zcopy": true, 00:23:19.319 "get_zone_info": false, 00:23:19.319 "zone_management": false, 00:23:19.319 "zone_append": false, 00:23:19.319 "compare": false, 00:23:19.319 "compare_and_write": false, 00:23:19.319 "abort": true, 00:23:19.319 "seek_hole": false, 00:23:19.319 "seek_data": false, 00:23:19.319 "copy": true, 00:23:19.319 "nvme_iov_md": false 00:23:19.319 }, 00:23:19.319 "memory_domains": [ 00:23:19.319 { 00:23:19.319 "dma_device_id": "system", 00:23:19.319 "dma_device_type": 1 00:23:19.319 }, 00:23:19.319 { 00:23:19.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.319 "dma_device_type": 2 00:23:19.319 } 00:23:19.319 ], 00:23:19.319 "driver_specific": {} 00:23:19.319 }' 00:23:19.319 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.319 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:19.320 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:19.579 "name": "BaseBdev4", 00:23:19.579 "aliases": [ 00:23:19.579 "974e268d-a69b-4805-9e4a-7a3c0e988a7c" 00:23:19.579 ], 00:23:19.579 "product_name": "Malloc disk", 00:23:19.579 "block_size": 512, 00:23:19.579 "num_blocks": 65536, 00:23:19.579 "uuid": "974e268d-a69b-4805-9e4a-7a3c0e988a7c", 00:23:19.579 "assigned_rate_limits": { 00:23:19.579 "rw_ios_per_sec": 0, 00:23:19.579 "rw_mbytes_per_sec": 0, 00:23:19.579 "r_mbytes_per_sec": 0, 00:23:19.579 "w_mbytes_per_sec": 0 00:23:19.579 }, 00:23:19.579 "claimed": true, 00:23:19.579 "claim_type": "exclusive_write", 00:23:19.579 "zoned": false, 00:23:19.579 "supported_io_types": { 00:23:19.579 "read": true, 00:23:19.579 "write": true, 00:23:19.579 "unmap": true, 00:23:19.579 "flush": true, 00:23:19.579 "reset": true, 00:23:19.579 "nvme_admin": false, 00:23:19.579 "nvme_io": false, 00:23:19.579 "nvme_io_md": false, 00:23:19.579 "write_zeroes": true, 00:23:19.579 "zcopy": true, 00:23:19.579 "get_zone_info": false, 00:23:19.579 "zone_management": false, 00:23:19.579 "zone_append": false, 00:23:19.579 "compare": false, 00:23:19.579 "compare_and_write": false, 00:23:19.579 "abort": true, 00:23:19.579 "seek_hole": false, 00:23:19.579 "seek_data": false, 00:23:19.579 "copy": true, 00:23:19.579 "nvme_iov_md": false 00:23:19.579 }, 00:23:19.579 "memory_domains": [ 00:23:19.579 { 00:23:19.579 "dma_device_id": "system", 00:23:19.579 "dma_device_type": 1 00:23:19.579 }, 00:23:19.579 { 00:23:19.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.579 "dma_device_type": 2 00:23:19.579 } 00:23:19.579 ], 00:23:19.579 "driver_specific": {} 00:23:19.579 }' 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:19.579 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:19.838 [2024-07-25 00:07:15.593219] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:19.838 [2024-07-25 00:07:15.593290] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:23:19.838 [2024-07-25 00:07:15.593388] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:19.838 [2024-07-25 00:07:15.593711] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:19.838 [2024-07-25 00:07:15.593731] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name Existed_Raid, state offline 00:23:19.838 00:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 93722 00:23:19.838 00:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93722 ']' 00:23:19.838 00:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93722 00:23:19.838 00:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:23:19.838 00:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.838 00:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93722 00:23:19.838 killing process with pid 93722 00:23:19.838 00:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:19.838 00:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:19.838 00:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93722' 00:23:19.839 00:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 93722 00:23:19.839 [2024-07-25 00:07:15.644939] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:19.839 00:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 93722 00:23:20.098 [2024-07-25 00:07:15.916130] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:21.476 00:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:23:21.476 00:23:21.476 real 0m27.099s 00:23:21.476 user 0m47.478s 00:23:21.476 sys 0m4.223s 00:23:21.476 00:07:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:21.476 00:07:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.476 ************************************ 00:23:21.476 END TEST raid_state_function_test 00:23:21.476 ************************************ 00:23:21.476 00:07:17 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:23:21.476 00:07:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:21.476 00:07:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:21.476 00:07:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:21.476 ************************************ 00:23:21.476 START TEST raid_state_function_test_sb 00:23:21.476 ************************************ 00:23:21.476 00:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:23:21.477 
00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=94713 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:21.477 Process raid pid: 94713 00:23:21.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 94713' 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 94713 /var/tmp/spdk-raid.sock 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94713 ']' 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:21.477 00:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.477 [2024-07-25 00:07:17.107109] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:23:21.477 [2024-07-25 00:07:17.107249] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.477 [2024-07-25 00:07:17.275122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.736 [2024-07-25 00:07:17.499593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.995 [2024-07-25 00:07:17.677549] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:22.563 [2024-07-25 00:07:18.355463] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:22.563 [2024-07-25 00:07:18.355707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:22.563 [2024-07-25 00:07:18.355732] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:22.563 [2024-07-25 00:07:18.355749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:22.563 [2024-07-25 00:07:18.355758] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:22.563 [2024-07-25 00:07:18.355770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:22.563 [2024-07-25 00:07:18.355779] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:22.563 [2024-07-25 00:07:18.355792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.563 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.822 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.822 "name": "Existed_Raid", 00:23:22.822 "uuid": "bb77df5e-da5b-495d-bfbd-1e93d30c9b31", 00:23:22.822 "strip_size_kb": 0, 00:23:22.822 "state": "configuring", 00:23:22.822 "raid_level": "raid1", 00:23:22.822 "superblock": true, 00:23:22.822 "num_base_bdevs": 4, 00:23:22.822 "num_base_bdevs_discovered": 0, 00:23:22.822 "num_base_bdevs_operational": 4, 00:23:22.822 "base_bdevs_list": [ 00:23:22.822 { 00:23:22.822 "name": "BaseBdev1", 00:23:22.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.822 "is_configured": false, 00:23:22.822 "data_offset": 0, 00:23:22.822 "data_size": 0 00:23:22.822 }, 00:23:22.822 { 00:23:22.822 "name": "BaseBdev2", 00:23:22.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.822 "is_configured": false, 00:23:22.822 "data_offset": 0, 00:23:22.822 "data_size": 0 00:23:22.822 }, 00:23:22.822 { 00:23:22.822 "name": "BaseBdev3", 00:23:22.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.822 "is_configured": false, 00:23:22.822 "data_offset": 0, 00:23:22.822 "data_size": 0 00:23:22.822 }, 00:23:22.822 { 00:23:22.822 "name": "BaseBdev4", 00:23:22.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.822 "is_configured": false, 00:23:22.822 "data_offset": 0, 00:23:22.822 "data_size": 0 00:23:22.822 } 00:23:22.822 ] 00:23:22.822 }' 00:23:22.822 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.823 00:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.081 00:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:23.340 [2024-07-25 00:07:19.099544] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:23.340 [2024-07-25 00:07:19.099831] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:23:23.340 00:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:23.600 [2024-07-25 00:07:19.327736] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:23.600 [2024-07-25 00:07:19.328155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:23.600 [2024-07-25 00:07:19.328209] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:23.600 [2024-07-25 00:07:19.328239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:23.600 [2024-07-25 00:07:19.328256] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:23.600 [2024-07-25 00:07:19.328280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:23.600 [2024-07-25 00:07:19.328297] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:23.600 [2024-07-25 00:07:19.328320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:23.600 00:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:23.859 [2024-07-25 00:07:19.607621] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:23.859 BaseBdev1 00:23:23.859 00:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:23.859 00:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:23.859 00:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:23.859 00:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:23.859 00:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:23.859 00:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:23.859 00:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:24.119 00:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:24.378 [ 00:23:24.378 { 00:23:24.378 "name": "BaseBdev1", 00:23:24.378 "aliases": [ 00:23:24.378 "b300f5aa-3923-44f1-b5f6-0fb441336aa3" 00:23:24.378 ], 00:23:24.378 "product_name": "Malloc disk", 00:23:24.378 "block_size": 512, 00:23:24.378 "num_blocks": 65536, 00:23:24.378 "uuid": "b300f5aa-3923-44f1-b5f6-0fb441336aa3", 00:23:24.378 "assigned_rate_limits": { 00:23:24.378 "rw_ios_per_sec": 0, 00:23:24.378 "rw_mbytes_per_sec": 0, 00:23:24.378 "r_mbytes_per_sec": 0, 00:23:24.378 "w_mbytes_per_sec": 0 00:23:24.378 }, 00:23:24.378 "claimed": true, 00:23:24.378 "claim_type": "exclusive_write", 00:23:24.378 "zoned": false, 00:23:24.378 "supported_io_types": { 00:23:24.378 "read": true, 00:23:24.378 "write": true, 00:23:24.378 "unmap": true, 00:23:24.378 "flush": true, 00:23:24.378 "reset": true, 00:23:24.378 "nvme_admin": false, 00:23:24.378 "nvme_io": false, 00:23:24.378 "nvme_io_md": false, 00:23:24.378 "write_zeroes": true, 00:23:24.378 "zcopy": true, 00:23:24.378 "get_zone_info": false, 00:23:24.378 
"zone_management": false, 00:23:24.378 "zone_append": false, 00:23:24.378 "compare": false, 00:23:24.378 "compare_and_write": false, 00:23:24.378 "abort": true, 00:23:24.378 "seek_hole": false, 00:23:24.378 "seek_data": false, 00:23:24.378 "copy": true, 00:23:24.378 "nvme_iov_md": false 00:23:24.378 }, 00:23:24.378 "memory_domains": [ 00:23:24.378 { 00:23:24.378 "dma_device_id": "system", 00:23:24.378 "dma_device_type": 1 00:23:24.378 }, 00:23:24.378 { 00:23:24.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.378 "dma_device_type": 2 00:23:24.378 } 00:23:24.378 ], 00:23:24.378 "driver_specific": {} 00:23:24.378 } 00:23:24.378 ] 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.378 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.637 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:24.637 "name": "Existed_Raid", 00:23:24.637 "uuid": "5fa7b280-a047-47c5-b436-b75768e5da51", 00:23:24.637 "strip_size_kb": 0, 00:23:24.637 "state": "configuring", 00:23:24.637 "raid_level": "raid1", 00:23:24.637 "superblock": true, 00:23:24.637 "num_base_bdevs": 4, 00:23:24.637 "num_base_bdevs_discovered": 1, 00:23:24.637 "num_base_bdevs_operational": 4, 00:23:24.637 "base_bdevs_list": [ 00:23:24.637 { 00:23:24.637 "name": "BaseBdev1", 00:23:24.637 "uuid": "b300f5aa-3923-44f1-b5f6-0fb441336aa3", 00:23:24.637 "is_configured": true, 00:23:24.637 "data_offset": 2048, 00:23:24.637 "data_size": 63488 00:23:24.637 }, 00:23:24.637 { 00:23:24.637 "name": "BaseBdev2", 00:23:24.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.637 "is_configured": false, 00:23:24.637 "data_offset": 0, 00:23:24.637 "data_size": 0 00:23:24.637 }, 00:23:24.637 { 00:23:24.637 "name": "BaseBdev3", 00:23:24.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.637 "is_configured": false, 00:23:24.637 "data_offset": 0, 00:23:24.637 "data_size": 0 00:23:24.637 }, 00:23:24.637 { 00:23:24.637 "name": "BaseBdev4", 00:23:24.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.637 
"is_configured": false, 00:23:24.637 "data_offset": 0, 00:23:24.637 "data_size": 0 00:23:24.637 } 00:23:24.637 ] 00:23:24.637 }' 00:23:24.637 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:24.637 00:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.896 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:25.154 [2024-07-25 00:07:20.816098] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:25.154 [2024-07-25 00:07:20.816160] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:23:25.154 00:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:25.413 [2024-07-25 00:07:21.088208] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:25.413 [2024-07-25 00:07:21.090213] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:25.413 [2024-07-25 00:07:21.090282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:25.413 [2024-07-25 00:07:21.090297] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:25.413 [2024-07-25 00:07:21.090312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:25.413 [2024-07-25 00:07:21.090321] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:25.413 [2024-07-25 00:07:21.090345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:23:25.413 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.671 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.671 "name": "Existed_Raid", 00:23:25.671 "uuid": "8678e10f-5fe8-45a3-863f-b94ebcf51ca9", 00:23:25.671 "strip_size_kb": 0, 00:23:25.671 "state": "configuring", 00:23:25.671 "raid_level": "raid1", 00:23:25.671 "superblock": true, 00:23:25.671 "num_base_bdevs": 4, 00:23:25.671 "num_base_bdevs_discovered": 1, 00:23:25.671 "num_base_bdevs_operational": 4, 00:23:25.671 "base_bdevs_list": [ 00:23:25.671 { 00:23:25.671 "name": "BaseBdev1", 00:23:25.671 "uuid": "b300f5aa-3923-44f1-b5f6-0fb441336aa3", 00:23:25.671 "is_configured": true, 00:23:25.671 "data_offset": 2048, 00:23:25.671 "data_size": 63488 00:23:25.671 }, 00:23:25.671 { 00:23:25.671 "name": "BaseBdev2", 00:23:25.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.671 "is_configured": false, 00:23:25.671 "data_offset": 0, 00:23:25.671 "data_size": 0 00:23:25.671 }, 00:23:25.671 { 00:23:25.671 "name": "BaseBdev3", 00:23:25.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.671 "is_configured": false, 00:23:25.672 "data_offset": 0, 00:23:25.672 "data_size": 0 00:23:25.672 }, 00:23:25.672 { 00:23:25.672 "name": "BaseBdev4", 00:23:25.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.672 "is_configured": false, 00:23:25.672 "data_offset": 0, 00:23:25.672 "data_size": 0 00:23:25.672 } 00:23:25.672 ] 00:23:25.672 }' 00:23:25.672 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.672 00:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.929 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:26.188 [2024-07-25 00:07:21.929960] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:26.188 BaseBdev2 00:23:26.188 00:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:26.188 00:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:26.188 00:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:26.188 00:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:26.188 00:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:26.188 00:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:26.188 00:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:26.446 00:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:26.704 [ 00:23:26.704 { 00:23:26.704 "name": "BaseBdev2", 00:23:26.704 "aliases": [ 00:23:26.704 "91f1be0d-3625-4ff3-b479-4699503da3d4" 00:23:26.704 ], 00:23:26.704 "product_name": "Malloc disk", 00:23:26.704 "block_size": 512, 00:23:26.704 "num_blocks": 65536, 00:23:26.704 "uuid": "91f1be0d-3625-4ff3-b479-4699503da3d4", 00:23:26.704 
"assigned_rate_limits": { 00:23:26.704 "rw_ios_per_sec": 0, 00:23:26.704 "rw_mbytes_per_sec": 0, 00:23:26.704 "r_mbytes_per_sec": 0, 00:23:26.704 "w_mbytes_per_sec": 0 00:23:26.704 }, 00:23:26.704 "claimed": true, 00:23:26.704 "claim_type": "exclusive_write", 00:23:26.704 "zoned": false, 00:23:26.704 "supported_io_types": { 00:23:26.704 "read": true, 00:23:26.704 "write": true, 00:23:26.704 "unmap": true, 00:23:26.704 "flush": true, 00:23:26.704 "reset": true, 00:23:26.704 "nvme_admin": false, 00:23:26.704 "nvme_io": false, 00:23:26.704 "nvme_io_md": false, 00:23:26.704 "write_zeroes": true, 00:23:26.704 "zcopy": true, 00:23:26.704 "get_zone_info": false, 00:23:26.704 "zone_management": false, 00:23:26.704 "zone_append": false, 00:23:26.704 "compare": false, 00:23:26.704 "compare_and_write": false, 00:23:26.704 "abort": true, 00:23:26.705 "seek_hole": false, 00:23:26.705 "seek_data": false, 00:23:26.705 "copy": true, 00:23:26.705 "nvme_iov_md": false 00:23:26.705 }, 00:23:26.705 "memory_domains": [ 00:23:26.705 { 00:23:26.705 "dma_device_id": "system", 00:23:26.705 "dma_device_type": 1 00:23:26.705 }, 00:23:26.705 { 00:23:26.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.705 "dma_device_type": 2 00:23:26.705 } 00:23:26.705 ], 00:23:26.705 "driver_specific": {} 00:23:26.705 } 00:23:26.705 ] 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.705 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.963 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.963 "name": "Existed_Raid", 00:23:26.963 "uuid": "8678e10f-5fe8-45a3-863f-b94ebcf51ca9", 00:23:26.963 "strip_size_kb": 0, 00:23:26.963 "state": "configuring", 00:23:26.963 "raid_level": "raid1", 00:23:26.963 "superblock": true, 00:23:26.963 "num_base_bdevs": 4, 00:23:26.963 
"num_base_bdevs_discovered": 2, 00:23:26.963 "num_base_bdevs_operational": 4, 00:23:26.963 "base_bdevs_list": [ 00:23:26.963 { 00:23:26.963 "name": "BaseBdev1", 00:23:26.963 "uuid": "b300f5aa-3923-44f1-b5f6-0fb441336aa3", 00:23:26.963 "is_configured": true, 00:23:26.963 "data_offset": 2048, 00:23:26.963 "data_size": 63488 00:23:26.963 }, 00:23:26.963 { 00:23:26.963 "name": "BaseBdev2", 00:23:26.963 "uuid": "91f1be0d-3625-4ff3-b479-4699503da3d4", 00:23:26.963 "is_configured": true, 00:23:26.963 "data_offset": 2048, 00:23:26.963 "data_size": 63488 00:23:26.963 }, 00:23:26.963 { 00:23:26.963 "name": "BaseBdev3", 00:23:26.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.963 "is_configured": false, 00:23:26.963 "data_offset": 0, 00:23:26.963 "data_size": 0 00:23:26.963 }, 00:23:26.963 { 00:23:26.963 "name": "BaseBdev4", 00:23:26.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.963 "is_configured": false, 00:23:26.963 "data_offset": 0, 00:23:26.963 "data_size": 0 00:23:26.963 } 00:23:26.963 ] 00:23:26.963 }' 00:23:26.963 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.963 00:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.222 00:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:27.480 [2024-07-25 00:07:23.140536] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:27.480 BaseBdev3 00:23:27.480 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:27.480 00:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:27.480 00:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:27.480 00:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:27.480 00:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:27.480 00:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:27.480 00:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:27.739 [ 00:23:27.739 { 00:23:27.739 "name": "BaseBdev3", 00:23:27.739 "aliases": [ 00:23:27.739 "756687c1-86cb-45f4-a5d0-33ac8e00cd16" 00:23:27.739 ], 00:23:27.739 "product_name": "Malloc disk", 00:23:27.739 "block_size": 512, 00:23:27.739 "num_blocks": 65536, 00:23:27.739 "uuid": "756687c1-86cb-45f4-a5d0-33ac8e00cd16", 00:23:27.739 "assigned_rate_limits": { 00:23:27.739 "rw_ios_per_sec": 0, 00:23:27.739 "rw_mbytes_per_sec": 0, 00:23:27.739 "r_mbytes_per_sec": 0, 00:23:27.739 "w_mbytes_per_sec": 0 00:23:27.739 }, 00:23:27.739 "claimed": true, 00:23:27.739 "claim_type": "exclusive_write", 00:23:27.739 "zoned": false, 00:23:27.739 "supported_io_types": { 00:23:27.739 "read": true, 00:23:27.739 "write": true, 00:23:27.739 "unmap": true, 00:23:27.739 "flush": true, 00:23:27.739 "reset": true, 00:23:27.739 "nvme_admin": false, 00:23:27.739 "nvme_io": false, 
00:23:27.739 "nvme_io_md": false, 00:23:27.739 "write_zeroes": true, 00:23:27.739 "zcopy": true, 00:23:27.739 "get_zone_info": false, 00:23:27.739 "zone_management": false, 00:23:27.739 "zone_append": false, 00:23:27.739 "compare": false, 00:23:27.739 "compare_and_write": false, 00:23:27.739 "abort": true, 00:23:27.739 "seek_hole": false, 00:23:27.739 "seek_data": false, 00:23:27.739 "copy": true, 00:23:27.739 "nvme_iov_md": false 00:23:27.739 }, 00:23:27.739 "memory_domains": [ 00:23:27.739 { 00:23:27.739 "dma_device_id": "system", 00:23:27.739 "dma_device_type": 1 00:23:27.739 }, 00:23:27.739 { 00:23:27.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.739 "dma_device_type": 2 00:23:27.739 } 00:23:27.739 ], 00:23:27.739 "driver_specific": {} 00:23:27.739 } 00:23:27.739 ] 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.739 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.740 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.740 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.740 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.740 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.998 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.998 "name": "Existed_Raid", 00:23:27.998 "uuid": "8678e10f-5fe8-45a3-863f-b94ebcf51ca9", 00:23:27.998 "strip_size_kb": 0, 00:23:27.998 "state": "configuring", 00:23:27.998 "raid_level": "raid1", 00:23:27.998 "superblock": true, 00:23:27.998 "num_base_bdevs": 4, 00:23:27.998 "num_base_bdevs_discovered": 3, 00:23:27.998 "num_base_bdevs_operational": 4, 00:23:27.998 "base_bdevs_list": [ 00:23:27.998 { 00:23:27.998 "name": "BaseBdev1", 00:23:27.998 "uuid": "b300f5aa-3923-44f1-b5f6-0fb441336aa3", 00:23:27.998 "is_configured": true, 00:23:27.998 "data_offset": 2048, 00:23:27.998 "data_size": 63488 00:23:27.998 }, 00:23:27.998 { 00:23:27.998 "name": "BaseBdev2", 00:23:27.998 "uuid": "91f1be0d-3625-4ff3-b479-4699503da3d4", 00:23:27.998 "is_configured": true, 00:23:27.998 "data_offset": 2048, 00:23:27.998 
"data_size": 63488 00:23:27.998 }, 00:23:27.998 { 00:23:27.998 "name": "BaseBdev3", 00:23:27.998 "uuid": "756687c1-86cb-45f4-a5d0-33ac8e00cd16", 00:23:27.998 "is_configured": true, 00:23:27.998 "data_offset": 2048, 00:23:27.998 "data_size": 63488 00:23:27.998 }, 00:23:27.998 { 00:23:27.998 "name": "BaseBdev4", 00:23:27.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.998 "is_configured": false, 00:23:27.998 "data_offset": 0, 00:23:27.998 "data_size": 0 00:23:27.998 } 00:23:27.998 ] 00:23:27.998 }' 00:23:27.998 00:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.998 00:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.566 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:28.566 [2024-07-25 00:07:24.395952] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:28.566 [2024-07-25 00:07:24.396233] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:23:28.566 [2024-07-25 00:07:24.396251] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:28.566 [2024-07-25 00:07:24.396366] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:23:28.566 [2024-07-25 00:07:24.396699] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:23:28.566 [2024-07-25 00:07:24.396719] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:23:28.566 [2024-07-25 00:07:24.396920] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.566 BaseBdev4 00:23:28.566 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:28.566 00:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:28.566 00:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:28.566 00:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:28.566 00:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:28.566 00:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:28.566 00:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:28.825 00:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:29.083 [ 00:23:29.083 { 00:23:29.083 "name": "BaseBdev4", 00:23:29.083 "aliases": [ 00:23:29.083 "d4da67d1-9b81-4835-8c5a-9ea5b8dc471e" 00:23:29.083 ], 00:23:29.083 "product_name": "Malloc disk", 00:23:29.083 "block_size": 512, 00:23:29.083 "num_blocks": 65536, 00:23:29.083 "uuid": "d4da67d1-9b81-4835-8c5a-9ea5b8dc471e", 00:23:29.083 "assigned_rate_limits": { 00:23:29.083 "rw_ios_per_sec": 0, 00:23:29.083 "rw_mbytes_per_sec": 0, 00:23:29.083 "r_mbytes_per_sec": 0, 00:23:29.083 "w_mbytes_per_sec": 0 00:23:29.083 }, 00:23:29.083 "claimed": true, 00:23:29.083 "claim_type": "exclusive_write", 00:23:29.083 
"zoned": false, 00:23:29.083 "supported_io_types": { 00:23:29.083 "read": true, 00:23:29.083 "write": true, 00:23:29.083 "unmap": true, 00:23:29.083 "flush": true, 00:23:29.083 "reset": true, 00:23:29.083 "nvme_admin": false, 00:23:29.083 "nvme_io": false, 00:23:29.083 "nvme_io_md": false, 00:23:29.083 "write_zeroes": true, 00:23:29.083 "zcopy": true, 00:23:29.084 "get_zone_info": false, 00:23:29.084 "zone_management": false, 00:23:29.084 "zone_append": false, 00:23:29.084 "compare": false, 00:23:29.084 "compare_and_write": false, 00:23:29.084 "abort": true, 00:23:29.084 "seek_hole": false, 00:23:29.084 "seek_data": false, 00:23:29.084 "copy": true, 00:23:29.084 "nvme_iov_md": false 00:23:29.084 }, 00:23:29.084 "memory_domains": [ 00:23:29.084 { 00:23:29.084 "dma_device_id": "system", 00:23:29.084 "dma_device_type": 1 00:23:29.084 }, 00:23:29.084 { 00:23:29.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.084 "dma_device_type": 2 00:23:29.084 } 00:23:29.084 ], 00:23:29.084 "driver_specific": {} 00:23:29.084 } 00:23:29.084 ] 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.084 00:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.342 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.342 "name": "Existed_Raid", 00:23:29.342 "uuid": "8678e10f-5fe8-45a3-863f-b94ebcf51ca9", 00:23:29.342 "strip_size_kb": 0, 00:23:29.342 "state": "online", 00:23:29.342 "raid_level": "raid1", 00:23:29.342 "superblock": true, 00:23:29.342 "num_base_bdevs": 4, 00:23:29.342 "num_base_bdevs_discovered": 4, 00:23:29.342 "num_base_bdevs_operational": 4, 00:23:29.342 "base_bdevs_list": [ 00:23:29.342 { 00:23:29.342 "name": "BaseBdev1", 00:23:29.342 "uuid": "b300f5aa-3923-44f1-b5f6-0fb441336aa3", 00:23:29.342 "is_configured": true, 00:23:29.342 "data_offset": 2048, 
00:23:29.342 "data_size": 63488 00:23:29.342 }, 00:23:29.342 { 00:23:29.342 "name": "BaseBdev2", 00:23:29.342 "uuid": "91f1be0d-3625-4ff3-b479-4699503da3d4", 00:23:29.342 "is_configured": true, 00:23:29.342 "data_offset": 2048, 00:23:29.342 "data_size": 63488 00:23:29.342 }, 00:23:29.342 { 00:23:29.342 "name": "BaseBdev3", 00:23:29.342 "uuid": "756687c1-86cb-45f4-a5d0-33ac8e00cd16", 00:23:29.342 "is_configured": true, 00:23:29.342 "data_offset": 2048, 00:23:29.342 "data_size": 63488 00:23:29.342 }, 00:23:29.342 { 00:23:29.342 "name": "BaseBdev4", 00:23:29.342 "uuid": "d4da67d1-9b81-4835-8c5a-9ea5b8dc471e", 00:23:29.342 "is_configured": true, 00:23:29.342 "data_offset": 2048, 00:23:29.342 "data_size": 63488 00:23:29.342 } 00:23:29.342 ] 00:23:29.342 }' 00:23:29.342 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.342 00:07:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.600 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:29.600 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:29.600 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:29.600 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:29.600 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:29.600 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:29.600 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:29.600 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:29.893 [2024-07-25 00:07:25.592660] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.893 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:29.893 "name": "Existed_Raid", 00:23:29.893 "aliases": [ 00:23:29.893 "8678e10f-5fe8-45a3-863f-b94ebcf51ca9" 00:23:29.893 ], 00:23:29.893 "product_name": "Raid Volume", 00:23:29.893 "block_size": 512, 00:23:29.893 "num_blocks": 63488, 00:23:29.893 "uuid": "8678e10f-5fe8-45a3-863f-b94ebcf51ca9", 00:23:29.893 "assigned_rate_limits": { 00:23:29.893 "rw_ios_per_sec": 0, 00:23:29.893 "rw_mbytes_per_sec": 0, 00:23:29.893 "r_mbytes_per_sec": 0, 00:23:29.893 "w_mbytes_per_sec": 0 00:23:29.893 }, 00:23:29.893 "claimed": false, 00:23:29.893 "zoned": false, 00:23:29.893 "supported_io_types": { 00:23:29.893 "read": true, 00:23:29.893 "write": true, 00:23:29.893 "unmap": false, 00:23:29.893 "flush": false, 00:23:29.893 "reset": true, 00:23:29.893 "nvme_admin": false, 00:23:29.893 "nvme_io": false, 00:23:29.893 "nvme_io_md": false, 00:23:29.893 "write_zeroes": true, 00:23:29.893 "zcopy": false, 00:23:29.893 "get_zone_info": false, 00:23:29.893 "zone_management": false, 00:23:29.893 "zone_append": false, 00:23:29.893 "compare": false, 00:23:29.893 "compare_and_write": false, 00:23:29.893 "abort": false, 00:23:29.893 "seek_hole": false, 00:23:29.893 "seek_data": false, 00:23:29.893 "copy": false, 00:23:29.893 "nvme_iov_md": false 00:23:29.893 }, 00:23:29.893 "memory_domains": [ 00:23:29.893 { 00:23:29.893 "dma_device_id": "system", 00:23:29.893 
"dma_device_type": 1 00:23:29.893 }, 00:23:29.893 { 00:23:29.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.893 "dma_device_type": 2 00:23:29.893 }, 00:23:29.893 { 00:23:29.893 "dma_device_id": "system", 00:23:29.893 "dma_device_type": 1 00:23:29.893 }, 00:23:29.893 { 00:23:29.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.893 "dma_device_type": 2 00:23:29.893 }, 00:23:29.893 { 00:23:29.893 "dma_device_id": "system", 00:23:29.893 "dma_device_type": 1 00:23:29.893 }, 00:23:29.893 { 00:23:29.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.893 "dma_device_type": 2 00:23:29.893 }, 00:23:29.893 { 00:23:29.893 "dma_device_id": "system", 00:23:29.893 "dma_device_type": 1 00:23:29.893 }, 00:23:29.893 { 00:23:29.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.893 "dma_device_type": 2 00:23:29.893 } 00:23:29.893 ], 00:23:29.893 "driver_specific": { 00:23:29.893 "raid": { 00:23:29.893 "uuid": "8678e10f-5fe8-45a3-863f-b94ebcf51ca9", 00:23:29.893 "strip_size_kb": 0, 00:23:29.893 "state": "online", 00:23:29.893 "raid_level": "raid1", 00:23:29.893 "superblock": true, 00:23:29.893 "num_base_bdevs": 4, 00:23:29.893 "num_base_bdevs_discovered": 4, 00:23:29.893 "num_base_bdevs_operational": 4, 00:23:29.893 "base_bdevs_list": [ 00:23:29.893 { 00:23:29.893 "name": "BaseBdev1", 00:23:29.893 "uuid": "b300f5aa-3923-44f1-b5f6-0fb441336aa3", 00:23:29.893 "is_configured": true, 00:23:29.893 "data_offset": 2048, 00:23:29.893 "data_size": 63488 00:23:29.893 }, 00:23:29.893 { 00:23:29.893 "name": "BaseBdev2", 00:23:29.893 "uuid": "91f1be0d-3625-4ff3-b479-4699503da3d4", 00:23:29.893 "is_configured": true, 00:23:29.893 "data_offset": 2048, 00:23:29.893 "data_size": 63488 00:23:29.893 }, 00:23:29.893 { 00:23:29.893 "name": "BaseBdev3", 00:23:29.893 "uuid": "756687c1-86cb-45f4-a5d0-33ac8e00cd16", 00:23:29.893 "is_configured": true, 00:23:29.893 "data_offset": 2048, 00:23:29.893 "data_size": 63488 00:23:29.893 }, 00:23:29.893 { 00:23:29.893 "name": "BaseBdev4", 00:23:29.893 "uuid": "d4da67d1-9b81-4835-8c5a-9ea5b8dc471e", 00:23:29.893 "is_configured": true, 00:23:29.893 "data_offset": 2048, 00:23:29.893 "data_size": 63488 00:23:29.893 } 00:23:29.893 ] 00:23:29.893 } 00:23:29.893 } 00:23:29.893 }' 00:23:29.893 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:29.893 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:29.893 BaseBdev2 00:23:29.893 BaseBdev3 00:23:29.893 BaseBdev4' 00:23:29.893 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:29.893 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:29.893 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:30.158 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:30.158 "name": "BaseBdev1", 00:23:30.158 "aliases": [ 00:23:30.158 "b300f5aa-3923-44f1-b5f6-0fb441336aa3" 00:23:30.158 ], 00:23:30.158 "product_name": "Malloc disk", 00:23:30.158 "block_size": 512, 00:23:30.158 "num_blocks": 65536, 00:23:30.158 "uuid": "b300f5aa-3923-44f1-b5f6-0fb441336aa3", 00:23:30.158 "assigned_rate_limits": { 00:23:30.158 "rw_ios_per_sec": 0, 00:23:30.158 "rw_mbytes_per_sec": 0, 00:23:30.158 "r_mbytes_per_sec": 0, 
00:23:30.158 "w_mbytes_per_sec": 0 00:23:30.158 }, 00:23:30.158 "claimed": true, 00:23:30.158 "claim_type": "exclusive_write", 00:23:30.158 "zoned": false, 00:23:30.158 "supported_io_types": { 00:23:30.158 "read": true, 00:23:30.158 "write": true, 00:23:30.158 "unmap": true, 00:23:30.158 "flush": true, 00:23:30.158 "reset": true, 00:23:30.158 "nvme_admin": false, 00:23:30.158 "nvme_io": false, 00:23:30.158 "nvme_io_md": false, 00:23:30.158 "write_zeroes": true, 00:23:30.158 "zcopy": true, 00:23:30.158 "get_zone_info": false, 00:23:30.158 "zone_management": false, 00:23:30.158 "zone_append": false, 00:23:30.158 "compare": false, 00:23:30.158 "compare_and_write": false, 00:23:30.158 "abort": true, 00:23:30.158 "seek_hole": false, 00:23:30.158 "seek_data": false, 00:23:30.158 "copy": true, 00:23:30.158 "nvme_iov_md": false 00:23:30.158 }, 00:23:30.158 "memory_domains": [ 00:23:30.158 { 00:23:30.158 "dma_device_id": "system", 00:23:30.158 "dma_device_type": 1 00:23:30.158 }, 00:23:30.158 { 00:23:30.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.158 "dma_device_type": 2 00:23:30.158 } 00:23:30.158 ], 00:23:30.159 "driver_specific": {} 00:23:30.159 }' 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:30.159 00:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:30.417 "name": "BaseBdev2", 00:23:30.417 "aliases": [ 00:23:30.417 "91f1be0d-3625-4ff3-b479-4699503da3d4" 00:23:30.417 ], 00:23:30.417 "product_name": "Malloc disk", 00:23:30.417 "block_size": 512, 00:23:30.417 "num_blocks": 65536, 00:23:30.417 "uuid": "91f1be0d-3625-4ff3-b479-4699503da3d4", 00:23:30.417 "assigned_rate_limits": { 00:23:30.417 "rw_ios_per_sec": 0, 00:23:30.417 "rw_mbytes_per_sec": 0, 00:23:30.417 "r_mbytes_per_sec": 0, 00:23:30.417 "w_mbytes_per_sec": 0 00:23:30.417 }, 00:23:30.417 "claimed": true, 00:23:30.417 "claim_type": "exclusive_write", 00:23:30.417 "zoned": 
false, 00:23:30.417 "supported_io_types": { 00:23:30.417 "read": true, 00:23:30.417 "write": true, 00:23:30.417 "unmap": true, 00:23:30.417 "flush": true, 00:23:30.417 "reset": true, 00:23:30.417 "nvme_admin": false, 00:23:30.417 "nvme_io": false, 00:23:30.417 "nvme_io_md": false, 00:23:30.417 "write_zeroes": true, 00:23:30.417 "zcopy": true, 00:23:30.417 "get_zone_info": false, 00:23:30.417 "zone_management": false, 00:23:30.417 "zone_append": false, 00:23:30.417 "compare": false, 00:23:30.417 "compare_and_write": false, 00:23:30.417 "abort": true, 00:23:30.417 "seek_hole": false, 00:23:30.417 "seek_data": false, 00:23:30.417 "copy": true, 00:23:30.417 "nvme_iov_md": false 00:23:30.417 }, 00:23:30.417 "memory_domains": [ 00:23:30.417 { 00:23:30.417 "dma_device_id": "system", 00:23:30.417 "dma_device_type": 1 00:23:30.417 }, 00:23:30.417 { 00:23:30.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.417 "dma_device_type": 2 00:23:30.417 } 00:23:30.417 ], 00:23:30.417 "driver_specific": {} 00:23:30.417 }' 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:30.417 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:30.418 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:30.676 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:30.676 "name": "BaseBdev3", 00:23:30.676 "aliases": [ 00:23:30.676 "756687c1-86cb-45f4-a5d0-33ac8e00cd16" 00:23:30.676 ], 00:23:30.676 "product_name": "Malloc disk", 00:23:30.676 "block_size": 512, 00:23:30.676 "num_blocks": 65536, 00:23:30.676 "uuid": "756687c1-86cb-45f4-a5d0-33ac8e00cd16", 00:23:30.676 "assigned_rate_limits": { 00:23:30.676 "rw_ios_per_sec": 0, 00:23:30.676 "rw_mbytes_per_sec": 0, 00:23:30.676 "r_mbytes_per_sec": 0, 00:23:30.676 "w_mbytes_per_sec": 0 00:23:30.676 }, 00:23:30.676 "claimed": true, 00:23:30.676 "claim_type": "exclusive_write", 00:23:30.676 "zoned": false, 00:23:30.676 "supported_io_types": { 00:23:30.676 "read": true, 00:23:30.676 "write": true, 00:23:30.676 "unmap": true, 00:23:30.676 "flush": 
true, 00:23:30.676 "reset": true, 00:23:30.676 "nvme_admin": false, 00:23:30.676 "nvme_io": false, 00:23:30.676 "nvme_io_md": false, 00:23:30.676 "write_zeroes": true, 00:23:30.676 "zcopy": true, 00:23:30.676 "get_zone_info": false, 00:23:30.676 "zone_management": false, 00:23:30.676 "zone_append": false, 00:23:30.676 "compare": false, 00:23:30.676 "compare_and_write": false, 00:23:30.676 "abort": true, 00:23:30.676 "seek_hole": false, 00:23:30.676 "seek_data": false, 00:23:30.676 "copy": true, 00:23:30.676 "nvme_iov_md": false 00:23:30.676 }, 00:23:30.676 "memory_domains": [ 00:23:30.676 { 00:23:30.676 "dma_device_id": "system", 00:23:30.676 "dma_device_type": 1 00:23:30.676 }, 00:23:30.676 { 00:23:30.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.677 "dma_device_type": 2 00:23:30.677 } 00:23:30.677 ], 00:23:30.677 "driver_specific": {} 00:23:30.677 }' 00:23:30.677 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:30.935 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:31.193 "name": "BaseBdev4", 00:23:31.193 "aliases": [ 00:23:31.193 "d4da67d1-9b81-4835-8c5a-9ea5b8dc471e" 00:23:31.193 ], 00:23:31.193 "product_name": "Malloc disk", 00:23:31.193 "block_size": 512, 00:23:31.193 "num_blocks": 65536, 00:23:31.193 "uuid": "d4da67d1-9b81-4835-8c5a-9ea5b8dc471e", 00:23:31.193 "assigned_rate_limits": { 00:23:31.193 "rw_ios_per_sec": 0, 00:23:31.193 "rw_mbytes_per_sec": 0, 00:23:31.193 "r_mbytes_per_sec": 0, 00:23:31.193 "w_mbytes_per_sec": 0 00:23:31.193 }, 00:23:31.193 "claimed": true, 00:23:31.193 "claim_type": "exclusive_write", 00:23:31.193 "zoned": false, 00:23:31.193 "supported_io_types": { 00:23:31.193 "read": true, 00:23:31.193 "write": true, 00:23:31.193 "unmap": true, 00:23:31.193 "flush": true, 00:23:31.193 "reset": true, 00:23:31.193 "nvme_admin": false, 00:23:31.193 "nvme_io": false, 00:23:31.193 "nvme_io_md": false, 00:23:31.193 
"write_zeroes": true, 00:23:31.193 "zcopy": true, 00:23:31.193 "get_zone_info": false, 00:23:31.193 "zone_management": false, 00:23:31.193 "zone_append": false, 00:23:31.193 "compare": false, 00:23:31.193 "compare_and_write": false, 00:23:31.193 "abort": true, 00:23:31.193 "seek_hole": false, 00:23:31.193 "seek_data": false, 00:23:31.193 "copy": true, 00:23:31.193 "nvme_iov_md": false 00:23:31.193 }, 00:23:31.193 "memory_domains": [ 00:23:31.193 { 00:23:31.193 "dma_device_id": "system", 00:23:31.193 "dma_device_type": 1 00:23:31.193 }, 00:23:31.193 { 00:23:31.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.193 "dma_device_type": 2 00:23:31.193 } 00:23:31.193 ], 00:23:31.193 "driver_specific": {} 00:23:31.193 }' 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:31.193 00:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:31.451 [2024-07-25 00:07:27.188891] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.451 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.709 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.709 "name": "Existed_Raid", 00:23:31.709 "uuid": "8678e10f-5fe8-45a3-863f-b94ebcf51ca9", 00:23:31.709 "strip_size_kb": 0, 00:23:31.709 "state": "online", 00:23:31.709 "raid_level": "raid1", 00:23:31.709 "superblock": true, 00:23:31.709 "num_base_bdevs": 4, 00:23:31.709 "num_base_bdevs_discovered": 3, 00:23:31.709 "num_base_bdevs_operational": 3, 00:23:31.709 "base_bdevs_list": [ 00:23:31.709 { 00:23:31.709 "name": null, 00:23:31.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.709 "is_configured": false, 00:23:31.709 "data_offset": 2048, 00:23:31.709 "data_size": 63488 00:23:31.709 }, 00:23:31.709 { 00:23:31.709 "name": "BaseBdev2", 00:23:31.709 "uuid": "91f1be0d-3625-4ff3-b479-4699503da3d4", 00:23:31.709 "is_configured": true, 00:23:31.709 "data_offset": 2048, 00:23:31.709 "data_size": 63488 00:23:31.709 }, 00:23:31.709 { 00:23:31.709 "name": "BaseBdev3", 00:23:31.709 "uuid": "756687c1-86cb-45f4-a5d0-33ac8e00cd16", 00:23:31.709 "is_configured": true, 00:23:31.709 "data_offset": 2048, 00:23:31.709 "data_size": 63488 00:23:31.709 }, 00:23:31.709 { 00:23:31.709 "name": "BaseBdev4", 00:23:31.709 "uuid": "d4da67d1-9b81-4835-8c5a-9ea5b8dc471e", 00:23:31.709 "is_configured": true, 00:23:31.709 "data_offset": 2048, 00:23:31.709 "data_size": 63488 00:23:31.709 } 00:23:31.709 ] 00:23:31.709 }' 00:23:31.709 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.710 00:07:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.967 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:31.967 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:31.967 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.967 00:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:32.224 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:32.224 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:32.224 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:32.481 [2024-07-25 00:07:28.225970] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:32.481 00:07:28 bdev_raid.raid_state_function_test_sb -- 
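[Deleting BaseBdev1 out from under the online array is the redundancy test: has_redundancy (bdev_raid.sh@213-214) returns success for raid1, so the expected state stays online while num_base_bdevs_operational drops to 3 and the vacated slot keeps a null name, as the JSON above shows. A condensed sketch of that assertion:

  # Sketch: drop one raid1 member and confirm the array survives degraded.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  $rpc -s $sock bdev_malloc_delete BaseBdev1
  tmp=$($rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r .state <<< "$tmp") == online ]]
  [[ $(jq -r .num_base_bdevs_operational <<< "$tmp") == 3 ]]
  [[ $(jq -r '.base_bdevs_list[0].name' <<< "$tmp") == null ]]
]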
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:32.481 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:32.481 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.481 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:32.739 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:32.739 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:32.739 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:32.996 [2024-07-25 00:07:28.773449] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:33.254 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:33.254 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:33.254 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.254 00:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:33.254 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:33.254 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:33.254 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:33.512 [2024-07-25 00:07:29.366480] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:33.512 [2024-07-25 00:07:29.366840] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:33.769 [2024-07-25 00:07:29.442280] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:33.769 [2024-07-25 00:07:29.442543] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:33.769 [2024-07-25 00:07:29.442733] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:23:33.769 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:33.769 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:33.769 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.769 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:34.026 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:34.026 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:34.026 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:34.026 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:34.026 
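[Deleting BaseBdev4, the last surviving member, is what finally takes the array down: the trace above shows raid_bdev_deconfigure moving it from online to offline, the destruct, and the cleanup of Existed_Raid, after which the name query comes back empty. A sketch of that emptiness check; the select(.) filter is taken verbatim from @293 and drops the null that .[0]["name"] yields once the list is empty:

  # Sketch: after the last member goes, the raid bdev no longer exists.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  raid_bdev=$($rpc -s $sock bdev_raid_get_bdevs all \
              | jq -r '.[0]["name"] | select(.)')
  [[ -z $raid_bdev ]]
]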
00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:34.026 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:34.284 BaseBdev2 00:23:34.284 00:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:34.284 00:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:34.284 00:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:34.284 00:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:34.284 00:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:34.284 00:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:34.284 00:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:34.542 00:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:34.542 [ 00:23:34.542 { 00:23:34.542 "name": "BaseBdev2", 00:23:34.542 "aliases": [ 00:23:34.542 "e4865a34-2eaf-4776-aa09-0c7f2005f1f6" 00:23:34.542 ], 00:23:34.542 "product_name": "Malloc disk", 00:23:34.542 "block_size": 512, 00:23:34.542 "num_blocks": 65536, 00:23:34.542 "uuid": "e4865a34-2eaf-4776-aa09-0c7f2005f1f6", 00:23:34.542 "assigned_rate_limits": { 00:23:34.542 "rw_ios_per_sec": 0, 00:23:34.542 "rw_mbytes_per_sec": 0, 00:23:34.542 "r_mbytes_per_sec": 0, 00:23:34.542 "w_mbytes_per_sec": 0 00:23:34.542 }, 00:23:34.542 "claimed": false, 00:23:34.542 "zoned": false, 00:23:34.542 "supported_io_types": { 00:23:34.542 "read": true, 00:23:34.542 "write": true, 00:23:34.542 "unmap": true, 00:23:34.542 "flush": true, 00:23:34.542 "reset": true, 00:23:34.542 "nvme_admin": false, 00:23:34.542 "nvme_io": false, 00:23:34.542 "nvme_io_md": false, 00:23:34.542 "write_zeroes": true, 00:23:34.542 "zcopy": true, 00:23:34.542 "get_zone_info": false, 00:23:34.542 "zone_management": false, 00:23:34.542 "zone_append": false, 00:23:34.542 "compare": false, 00:23:34.542 "compare_and_write": false, 00:23:34.542 "abort": true, 00:23:34.542 "seek_hole": false, 00:23:34.542 "seek_data": false, 00:23:34.542 "copy": true, 00:23:34.542 "nvme_iov_md": false 00:23:34.542 }, 00:23:34.542 "memory_domains": [ 00:23:34.542 { 00:23:34.542 "dma_device_id": "system", 00:23:34.542 "dma_device_type": 1 00:23:34.542 }, 00:23:34.542 { 00:23:34.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.542 "dma_device_type": 2 00:23:34.542 } 00:23:34.542 ], 00:23:34.542 "driver_specific": {} 00:23:34.542 } 00:23:34.542 ] 00:23:34.542 00:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:34.542 00:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:34.542 00:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:34.542 00:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
00:23:34.800 BaseBdev3 00:23:34.800 00:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:34.800 00:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:34.800 00:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:34.800 00:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:34.800 00:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:34.800 00:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:34.800 00:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:35.058 00:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:35.315 [ 00:23:35.315 { 00:23:35.315 "name": "BaseBdev3", 00:23:35.315 "aliases": [ 00:23:35.315 "5cfcce47-92d3-4b8d-be26-e33c91244152" 00:23:35.315 ], 00:23:35.315 "product_name": "Malloc disk", 00:23:35.315 "block_size": 512, 00:23:35.315 "num_blocks": 65536, 00:23:35.315 "uuid": "5cfcce47-92d3-4b8d-be26-e33c91244152", 00:23:35.315 "assigned_rate_limits": { 00:23:35.315 "rw_ios_per_sec": 0, 00:23:35.315 "rw_mbytes_per_sec": 0, 00:23:35.315 "r_mbytes_per_sec": 0, 00:23:35.315 "w_mbytes_per_sec": 0 00:23:35.315 }, 00:23:35.315 "claimed": false, 00:23:35.315 "zoned": false, 00:23:35.315 "supported_io_types": { 00:23:35.315 "read": true, 00:23:35.315 "write": true, 00:23:35.315 "unmap": true, 00:23:35.315 "flush": true, 00:23:35.315 "reset": true, 00:23:35.316 "nvme_admin": false, 00:23:35.316 "nvme_io": false, 00:23:35.316 "nvme_io_md": false, 00:23:35.316 "write_zeroes": true, 00:23:35.316 "zcopy": true, 00:23:35.316 "get_zone_info": false, 00:23:35.316 "zone_management": false, 00:23:35.316 "zone_append": false, 00:23:35.316 "compare": false, 00:23:35.316 "compare_and_write": false, 00:23:35.316 "abort": true, 00:23:35.316 "seek_hole": false, 00:23:35.316 "seek_data": false, 00:23:35.316 "copy": true, 00:23:35.316 "nvme_iov_md": false 00:23:35.316 }, 00:23:35.316 "memory_domains": [ 00:23:35.316 { 00:23:35.316 "dma_device_id": "system", 00:23:35.316 "dma_device_type": 1 00:23:35.316 }, 00:23:35.316 { 00:23:35.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.316 "dma_device_type": 2 00:23:35.316 } 00:23:35.316 ], 00:23:35.316 "driver_specific": {} 00:23:35.316 } 00:23:35.316 ] 00:23:35.316 00:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:35.316 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:35.316 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:35.316 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:35.573 BaseBdev4 00:23:35.573 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:35.573 00:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:35.573 00:07:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:35.573 00:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:35.573 00:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:35.573 00:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:35.573 00:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:35.830 00:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:36.195 [ 00:23:36.195 { 00:23:36.195 "name": "BaseBdev4", 00:23:36.195 "aliases": [ 00:23:36.195 "d71b1f7b-7f08-4358-bef7-1f760a99b69d" 00:23:36.195 ], 00:23:36.195 "product_name": "Malloc disk", 00:23:36.195 "block_size": 512, 00:23:36.195 "num_blocks": 65536, 00:23:36.195 "uuid": "d71b1f7b-7f08-4358-bef7-1f760a99b69d", 00:23:36.195 "assigned_rate_limits": { 00:23:36.195 "rw_ios_per_sec": 0, 00:23:36.195 "rw_mbytes_per_sec": 0, 00:23:36.195 "r_mbytes_per_sec": 0, 00:23:36.195 "w_mbytes_per_sec": 0 00:23:36.195 }, 00:23:36.195 "claimed": false, 00:23:36.195 "zoned": false, 00:23:36.195 "supported_io_types": { 00:23:36.195 "read": true, 00:23:36.195 "write": true, 00:23:36.195 "unmap": true, 00:23:36.195 "flush": true, 00:23:36.195 "reset": true, 00:23:36.195 "nvme_admin": false, 00:23:36.195 "nvme_io": false, 00:23:36.195 "nvme_io_md": false, 00:23:36.195 "write_zeroes": true, 00:23:36.195 "zcopy": true, 00:23:36.195 "get_zone_info": false, 00:23:36.195 "zone_management": false, 00:23:36.195 "zone_append": false, 00:23:36.195 "compare": false, 00:23:36.195 "compare_and_write": false, 00:23:36.195 "abort": true, 00:23:36.195 "seek_hole": false, 00:23:36.195 "seek_data": false, 00:23:36.195 "copy": true, 00:23:36.195 "nvme_iov_md": false 00:23:36.195 }, 00:23:36.195 "memory_domains": [ 00:23:36.195 { 00:23:36.195 "dma_device_id": "system", 00:23:36.195 "dma_device_type": 1 00:23:36.195 }, 00:23:36.195 { 00:23:36.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.195 "dma_device_type": 2 00:23:36.195 } 00:23:36.195 ], 00:23:36.195 "driver_specific": {} 00:23:36.195 } 00:23:36.195 ] 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:36.195 [2024-07-25 00:07:31.946690] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:36.195 [2024-07-25 00:07:31.947011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:36.195 [2024-07-25 00:07:31.947151] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:36.195 [2024-07-25 00:07:31.949211] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:36.195 [2024-07-25 
00:07:31.949423] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.195 00:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.452 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.452 "name": "Existed_Raid", 00:23:36.452 "uuid": "af3eefe7-4a9a-4ea2-8bf1-0cbedd9d5e32", 00:23:36.452 "strip_size_kb": 0, 00:23:36.452 "state": "configuring", 00:23:36.452 "raid_level": "raid1", 00:23:36.452 "superblock": true, 00:23:36.452 "num_base_bdevs": 4, 00:23:36.452 "num_base_bdevs_discovered": 3, 00:23:36.452 "num_base_bdevs_operational": 4, 00:23:36.452 "base_bdevs_list": [ 00:23:36.452 { 00:23:36.452 "name": "BaseBdev1", 00:23:36.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.452 "is_configured": false, 00:23:36.452 "data_offset": 0, 00:23:36.452 "data_size": 0 00:23:36.452 }, 00:23:36.452 { 00:23:36.452 "name": "BaseBdev2", 00:23:36.452 "uuid": "e4865a34-2eaf-4776-aa09-0c7f2005f1f6", 00:23:36.452 "is_configured": true, 00:23:36.452 "data_offset": 2048, 00:23:36.452 "data_size": 63488 00:23:36.452 }, 00:23:36.452 { 00:23:36.452 "name": "BaseBdev3", 00:23:36.452 "uuid": "5cfcce47-92d3-4b8d-be26-e33c91244152", 00:23:36.452 "is_configured": true, 00:23:36.452 "data_offset": 2048, 00:23:36.452 "data_size": 63488 00:23:36.452 }, 00:23:36.452 { 00:23:36.452 "name": "BaseBdev4", 00:23:36.452 "uuid": "d71b1f7b-7f08-4358-bef7-1f760a99b69d", 00:23:36.452 "is_configured": true, 00:23:36.452 "data_offset": 2048, 00:23:36.452 "data_size": 63488 00:23:36.452 } 00:23:36.452 ] 00:23:36.452 }' 00:23:36.452 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.452 00:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.710 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:36.968 [2024-07-25 00:07:32.706865] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.968 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.227 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.227 "name": "Existed_Raid", 00:23:37.227 "uuid": "af3eefe7-4a9a-4ea2-8bf1-0cbedd9d5e32", 00:23:37.227 "strip_size_kb": 0, 00:23:37.227 "state": "configuring", 00:23:37.227 "raid_level": "raid1", 00:23:37.227 "superblock": true, 00:23:37.227 "num_base_bdevs": 4, 00:23:37.227 "num_base_bdevs_discovered": 2, 00:23:37.227 "num_base_bdevs_operational": 4, 00:23:37.227 "base_bdevs_list": [ 00:23:37.227 { 00:23:37.227 "name": "BaseBdev1", 00:23:37.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.227 "is_configured": false, 00:23:37.227 "data_offset": 0, 00:23:37.227 "data_size": 0 00:23:37.227 }, 00:23:37.227 { 00:23:37.227 "name": null, 00:23:37.227 "uuid": "e4865a34-2eaf-4776-aa09-0c7f2005f1f6", 00:23:37.227 "is_configured": false, 00:23:37.227 "data_offset": 2048, 00:23:37.227 "data_size": 63488 00:23:37.227 }, 00:23:37.227 { 00:23:37.227 "name": "BaseBdev3", 00:23:37.227 "uuid": "5cfcce47-92d3-4b8d-be26-e33c91244152", 00:23:37.227 "is_configured": true, 00:23:37.227 "data_offset": 2048, 00:23:37.227 "data_size": 63488 00:23:37.227 }, 00:23:37.227 { 00:23:37.227 "name": "BaseBdev4", 00:23:37.227 "uuid": "d71b1f7b-7f08-4358-bef7-1f760a99b69d", 00:23:37.227 "is_configured": true, 00:23:37.227 "data_offset": 2048, 00:23:37.227 "data_size": 63488 00:23:37.227 } 00:23:37.227 ] 00:23:37.227 }' 00:23:37.227 00:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.227 00:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.485 00:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.485 00:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:37.742 00:07:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:37.742 00:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:38.001 [2024-07-25 00:07:33.712034] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:38.001 BaseBdev1 00:23:38.001 00:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:38.001 00:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:38.001 00:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:38.001 00:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:38.001 00:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:38.001 00:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:38.001 00:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:38.259 00:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:38.518 [ 00:23:38.518 { 00:23:38.518 "name": "BaseBdev1", 00:23:38.518 "aliases": [ 00:23:38.518 "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471" 00:23:38.518 ], 00:23:38.518 "product_name": "Malloc disk", 00:23:38.518 "block_size": 512, 00:23:38.518 "num_blocks": 65536, 00:23:38.518 "uuid": "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471", 00:23:38.518 "assigned_rate_limits": { 00:23:38.518 "rw_ios_per_sec": 0, 00:23:38.518 "rw_mbytes_per_sec": 0, 00:23:38.518 "r_mbytes_per_sec": 0, 00:23:38.518 "w_mbytes_per_sec": 0 00:23:38.518 }, 00:23:38.518 "claimed": true, 00:23:38.518 "claim_type": "exclusive_write", 00:23:38.518 "zoned": false, 00:23:38.518 "supported_io_types": { 00:23:38.518 "read": true, 00:23:38.518 "write": true, 00:23:38.518 "unmap": true, 00:23:38.518 "flush": true, 00:23:38.518 "reset": true, 00:23:38.518 "nvme_admin": false, 00:23:38.518 "nvme_io": false, 00:23:38.518 "nvme_io_md": false, 00:23:38.518 "write_zeroes": true, 00:23:38.518 "zcopy": true, 00:23:38.518 "get_zone_info": false, 00:23:38.518 "zone_management": false, 00:23:38.518 "zone_append": false, 00:23:38.518 "compare": false, 00:23:38.518 "compare_and_write": false, 00:23:38.518 "abort": true, 00:23:38.518 "seek_hole": false, 00:23:38.518 "seek_data": false, 00:23:38.518 "copy": true, 00:23:38.518 "nvme_iov_md": false 00:23:38.518 }, 00:23:38.518 "memory_domains": [ 00:23:38.518 { 00:23:38.518 "dma_device_id": "system", 00:23:38.518 "dma_device_type": 1 00:23:38.518 }, 00:23:38.518 { 00:23:38.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.518 "dma_device_type": 2 00:23:38.518 } 00:23:38.518 ], 00:23:38.518 "driver_specific": {} 00:23:38.518 } 00:23:38.518 ] 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:38.518 00:07:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.518 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.776 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.776 "name": "Existed_Raid", 00:23:38.776 "uuid": "af3eefe7-4a9a-4ea2-8bf1-0cbedd9d5e32", 00:23:38.776 "strip_size_kb": 0, 00:23:38.776 "state": "configuring", 00:23:38.776 "raid_level": "raid1", 00:23:38.776 "superblock": true, 00:23:38.776 "num_base_bdevs": 4, 00:23:38.776 "num_base_bdevs_discovered": 3, 00:23:38.776 "num_base_bdevs_operational": 4, 00:23:38.776 "base_bdevs_list": [ 00:23:38.776 { 00:23:38.776 "name": "BaseBdev1", 00:23:38.776 "uuid": "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471", 00:23:38.776 "is_configured": true, 00:23:38.776 "data_offset": 2048, 00:23:38.776 "data_size": 63488 00:23:38.776 }, 00:23:38.776 { 00:23:38.776 "name": null, 00:23:38.776 "uuid": "e4865a34-2eaf-4776-aa09-0c7f2005f1f6", 00:23:38.776 "is_configured": false, 00:23:38.776 "data_offset": 2048, 00:23:38.776 "data_size": 63488 00:23:38.776 }, 00:23:38.776 { 00:23:38.776 "name": "BaseBdev3", 00:23:38.776 "uuid": "5cfcce47-92d3-4b8d-be26-e33c91244152", 00:23:38.776 "is_configured": true, 00:23:38.776 "data_offset": 2048, 00:23:38.776 "data_size": 63488 00:23:38.776 }, 00:23:38.776 { 00:23:38.776 "name": "BaseBdev4", 00:23:38.776 "uuid": "d71b1f7b-7f08-4358-bef7-1f760a99b69d", 00:23:38.776 "is_configured": true, 00:23:38.776 "data_offset": 2048, 00:23:38.776 "data_size": 63488 00:23:38.776 } 00:23:38.776 ] 00:23:38.776 }' 00:23:38.776 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.776 00:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.060 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.060 00:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:39.318 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:39.318 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:39.576 [2024-07-25 00:07:35.264540] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.576 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:39.834 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:39.834 "name": "Existed_Raid", 00:23:39.834 "uuid": "af3eefe7-4a9a-4ea2-8bf1-0cbedd9d5e32", 00:23:39.834 "strip_size_kb": 0, 00:23:39.834 "state": "configuring", 00:23:39.834 "raid_level": "raid1", 00:23:39.834 "superblock": true, 00:23:39.834 "num_base_bdevs": 4, 00:23:39.834 "num_base_bdevs_discovered": 2, 00:23:39.834 "num_base_bdevs_operational": 4, 00:23:39.834 "base_bdevs_list": [ 00:23:39.834 { 00:23:39.834 "name": "BaseBdev1", 00:23:39.834 "uuid": "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471", 00:23:39.834 "is_configured": true, 00:23:39.834 "data_offset": 2048, 00:23:39.834 "data_size": 63488 00:23:39.834 }, 00:23:39.834 { 00:23:39.834 "name": null, 00:23:39.834 "uuid": "e4865a34-2eaf-4776-aa09-0c7f2005f1f6", 00:23:39.834 "is_configured": false, 00:23:39.834 "data_offset": 2048, 00:23:39.834 "data_size": 63488 00:23:39.834 }, 00:23:39.834 { 00:23:39.834 "name": null, 00:23:39.834 "uuid": "5cfcce47-92d3-4b8d-be26-e33c91244152", 00:23:39.834 "is_configured": false, 00:23:39.834 "data_offset": 2048, 00:23:39.834 "data_size": 63488 00:23:39.834 }, 00:23:39.834 { 00:23:39.834 "name": "BaseBdev4", 00:23:39.834 "uuid": "d71b1f7b-7f08-4358-bef7-1f760a99b69d", 00:23:39.834 "is_configured": true, 00:23:39.834 "data_offset": 2048, 00:23:39.834 "data_size": 63488 00:23:39.834 } 00:23:39.834 ] 00:23:39.834 }' 00:23:39.834 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:39.834 00:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.091 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.091 00:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:40.349 00:07:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:40.349 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:40.607 [2024-07-25 00:07:36.248854] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.607 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.866 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.866 "name": "Existed_Raid", 00:23:40.866 "uuid": "af3eefe7-4a9a-4ea2-8bf1-0cbedd9d5e32", 00:23:40.866 "strip_size_kb": 0, 00:23:40.866 "state": "configuring", 00:23:40.866 "raid_level": "raid1", 00:23:40.866 "superblock": true, 00:23:40.866 "num_base_bdevs": 4, 00:23:40.866 "num_base_bdevs_discovered": 3, 00:23:40.866 "num_base_bdevs_operational": 4, 00:23:40.866 "base_bdevs_list": [ 00:23:40.866 { 00:23:40.866 "name": "BaseBdev1", 00:23:40.866 "uuid": "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471", 00:23:40.866 "is_configured": true, 00:23:40.866 "data_offset": 2048, 00:23:40.866 "data_size": 63488 00:23:40.866 }, 00:23:40.866 { 00:23:40.866 "name": null, 00:23:40.866 "uuid": "e4865a34-2eaf-4776-aa09-0c7f2005f1f6", 00:23:40.866 "is_configured": false, 00:23:40.866 "data_offset": 2048, 00:23:40.866 "data_size": 63488 00:23:40.866 }, 00:23:40.866 { 00:23:40.866 "name": "BaseBdev3", 00:23:40.866 "uuid": "5cfcce47-92d3-4b8d-be26-e33c91244152", 00:23:40.866 "is_configured": true, 00:23:40.866 "data_offset": 2048, 00:23:40.866 "data_size": 63488 00:23:40.866 }, 00:23:40.866 { 00:23:40.866 "name": "BaseBdev4", 00:23:40.866 "uuid": "d71b1f7b-7f08-4358-bef7-1f760a99b69d", 00:23:40.866 "is_configured": true, 00:23:40.866 "data_offset": 2048, 00:23:40.866 "data_size": 63488 00:23:40.866 } 00:23:40.866 ] 00:23:40.866 }' 00:23:40.866 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.866 00:07:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:41.124 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.124 00:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:41.382 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:41.382 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:41.382 [2024-07-25 00:07:37.229208] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.640 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:41.898 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.898 "name": "Existed_Raid", 00:23:41.898 "uuid": "af3eefe7-4a9a-4ea2-8bf1-0cbedd9d5e32", 00:23:41.898 "strip_size_kb": 0, 00:23:41.898 "state": "configuring", 00:23:41.898 "raid_level": "raid1", 00:23:41.898 "superblock": true, 00:23:41.898 "num_base_bdevs": 4, 00:23:41.898 "num_base_bdevs_discovered": 2, 00:23:41.898 "num_base_bdevs_operational": 4, 00:23:41.898 "base_bdevs_list": [ 00:23:41.898 { 00:23:41.898 "name": null, 00:23:41.898 "uuid": "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471", 00:23:41.898 "is_configured": false, 00:23:41.898 "data_offset": 2048, 00:23:41.898 "data_size": 63488 00:23:41.898 }, 00:23:41.898 { 00:23:41.898 "name": null, 00:23:41.898 "uuid": "e4865a34-2eaf-4776-aa09-0c7f2005f1f6", 00:23:41.898 "is_configured": false, 00:23:41.898 "data_offset": 2048, 00:23:41.898 "data_size": 63488 00:23:41.898 }, 00:23:41.898 { 00:23:41.898 "name": "BaseBdev3", 00:23:41.898 "uuid": "5cfcce47-92d3-4b8d-be26-e33c91244152", 00:23:41.898 "is_configured": true, 00:23:41.898 "data_offset": 2048, 00:23:41.898 "data_size": 63488 00:23:41.898 }, 00:23:41.898 { 00:23:41.898 "name": "BaseBdev4", 00:23:41.898 "uuid": 
"d71b1f7b-7f08-4358-bef7-1f760a99b69d", 00:23:41.899 "is_configured": true, 00:23:41.899 "data_offset": 2048, 00:23:41.899 "data_size": 63488 00:23:41.899 } 00:23:41.899 ] 00:23:41.899 }' 00:23:41.899 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.899 00:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.156 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.157 00:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:42.415 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:42.415 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:42.673 [2024-07-25 00:07:38.373303] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.673 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:42.932 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:42.932 "name": "Existed_Raid", 00:23:42.932 "uuid": "af3eefe7-4a9a-4ea2-8bf1-0cbedd9d5e32", 00:23:42.932 "strip_size_kb": 0, 00:23:42.932 "state": "configuring", 00:23:42.932 "raid_level": "raid1", 00:23:42.932 "superblock": true, 00:23:42.932 "num_base_bdevs": 4, 00:23:42.932 "num_base_bdevs_discovered": 3, 00:23:42.932 "num_base_bdevs_operational": 4, 00:23:42.932 "base_bdevs_list": [ 00:23:42.932 { 00:23:42.932 "name": null, 00:23:42.932 "uuid": "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471", 00:23:42.932 "is_configured": false, 00:23:42.932 "data_offset": 2048, 00:23:42.932 "data_size": 63488 00:23:42.932 }, 00:23:42.932 { 00:23:42.932 "name": "BaseBdev2", 00:23:42.932 "uuid": "e4865a34-2eaf-4776-aa09-0c7f2005f1f6", 00:23:42.932 "is_configured": true, 
00:23:42.932 "data_offset": 2048, 00:23:42.932 "data_size": 63488 00:23:42.932 }, 00:23:42.932 { 00:23:42.932 "name": "BaseBdev3", 00:23:42.932 "uuid": "5cfcce47-92d3-4b8d-be26-e33c91244152", 00:23:42.932 "is_configured": true, 00:23:42.932 "data_offset": 2048, 00:23:42.932 "data_size": 63488 00:23:42.932 }, 00:23:42.932 { 00:23:42.932 "name": "BaseBdev4", 00:23:42.932 "uuid": "d71b1f7b-7f08-4358-bef7-1f760a99b69d", 00:23:42.932 "is_configured": true, 00:23:42.932 "data_offset": 2048, 00:23:42.932 "data_size": 63488 00:23:42.932 } 00:23:42.932 ] 00:23:42.932 }' 00:23:42.932 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:42.932 00:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.191 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.191 00:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:43.450 00:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:43.450 00:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.450 00:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:43.709 00:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5ea00d80-dbe0-4f00-b3b2-92ec95c0b471 00:23:43.967 [2024-07-25 00:07:39.720417] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:43.967 NewBaseBdev 00:23:43.967 [2024-07-25 00:07:39.720894] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:23:43.967 [2024-07-25 00:07:39.720928] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:43.967 [2024-07-25 00:07:39.721059] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ee0 00:23:43.967 [2024-07-25 00:07:39.721447] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:23:43.967 [2024-07-25 00:07:39.721464] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000009380 00:23:43.967 [2024-07-25 00:07:39.721601] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.967 00:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:43.967 00:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:23:43.967 00:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:43.967 00:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:43.967 00:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:43.967 00:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:43.967 00:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:23:44.226 00:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:44.485 [ 00:23:44.485 { 00:23:44.485 "name": "NewBaseBdev", 00:23:44.485 "aliases": [ 00:23:44.485 "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471" 00:23:44.485 ], 00:23:44.485 "product_name": "Malloc disk", 00:23:44.485 "block_size": 512, 00:23:44.485 "num_blocks": 65536, 00:23:44.485 "uuid": "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471", 00:23:44.485 "assigned_rate_limits": { 00:23:44.485 "rw_ios_per_sec": 0, 00:23:44.485 "rw_mbytes_per_sec": 0, 00:23:44.485 "r_mbytes_per_sec": 0, 00:23:44.485 "w_mbytes_per_sec": 0 00:23:44.485 }, 00:23:44.485 "claimed": true, 00:23:44.485 "claim_type": "exclusive_write", 00:23:44.485 "zoned": false, 00:23:44.485 "supported_io_types": { 00:23:44.485 "read": true, 00:23:44.485 "write": true, 00:23:44.485 "unmap": true, 00:23:44.485 "flush": true, 00:23:44.485 "reset": true, 00:23:44.485 "nvme_admin": false, 00:23:44.485 "nvme_io": false, 00:23:44.485 "nvme_io_md": false, 00:23:44.485 "write_zeroes": true, 00:23:44.485 "zcopy": true, 00:23:44.485 "get_zone_info": false, 00:23:44.485 "zone_management": false, 00:23:44.485 "zone_append": false, 00:23:44.485 "compare": false, 00:23:44.485 "compare_and_write": false, 00:23:44.485 "abort": true, 00:23:44.485 "seek_hole": false, 00:23:44.485 "seek_data": false, 00:23:44.485 "copy": true, 00:23:44.485 "nvme_iov_md": false 00:23:44.485 }, 00:23:44.485 "memory_domains": [ 00:23:44.485 { 00:23:44.485 "dma_device_id": "system", 00:23:44.485 "dma_device_type": 1 00:23:44.485 }, 00:23:44.485 { 00:23:44.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.485 "dma_device_type": 2 00:23:44.485 } 00:23:44.485 ], 00:23:44.485 "driver_specific": {} 00:23:44.485 } 00:23:44.485 ] 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.485 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:44.744 00:07:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:44.744 "name": "Existed_Raid", 00:23:44.744 "uuid": "af3eefe7-4a9a-4ea2-8bf1-0cbedd9d5e32", 00:23:44.744 "strip_size_kb": 0, 00:23:44.744 "state": "online", 00:23:44.744 "raid_level": "raid1", 00:23:44.744 "superblock": true, 00:23:44.744 "num_base_bdevs": 4, 00:23:44.744 "num_base_bdevs_discovered": 4, 00:23:44.744 "num_base_bdevs_operational": 4, 00:23:44.744 "base_bdevs_list": [ 00:23:44.744 { 00:23:44.744 "name": "NewBaseBdev", 00:23:44.744 "uuid": "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471", 00:23:44.744 "is_configured": true, 00:23:44.744 "data_offset": 2048, 00:23:44.744 "data_size": 63488 00:23:44.744 }, 00:23:44.744 { 00:23:44.744 "name": "BaseBdev2", 00:23:44.744 "uuid": "e4865a34-2eaf-4776-aa09-0c7f2005f1f6", 00:23:44.744 "is_configured": true, 00:23:44.744 "data_offset": 2048, 00:23:44.744 "data_size": 63488 00:23:44.744 }, 00:23:44.744 { 00:23:44.744 "name": "BaseBdev3", 00:23:44.744 "uuid": "5cfcce47-92d3-4b8d-be26-e33c91244152", 00:23:44.744 "is_configured": true, 00:23:44.744 "data_offset": 2048, 00:23:44.744 "data_size": 63488 00:23:44.744 }, 00:23:44.744 { 00:23:44.744 "name": "BaseBdev4", 00:23:44.744 "uuid": "d71b1f7b-7f08-4358-bef7-1f760a99b69d", 00:23:44.744 "is_configured": true, 00:23:44.744 "data_offset": 2048, 00:23:44.744 "data_size": 63488 00:23:44.744 } 00:23:44.744 ] 00:23:44.744 }' 00:23:44.744 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:44.744 00:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.003 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:45.003 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:45.003 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:45.003 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:45.003 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:45.003 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:45.003 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:45.003 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:45.262 [2024-07-25 00:07:40.905148] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:45.262 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:45.262 "name": "Existed_Raid", 00:23:45.262 "aliases": [ 00:23:45.262 "af3eefe7-4a9a-4ea2-8bf1-0cbedd9d5e32" 00:23:45.262 ], 00:23:45.262 "product_name": "Raid Volume", 00:23:45.262 "block_size": 512, 00:23:45.262 "num_blocks": 63488, 00:23:45.262 "uuid": "af3eefe7-4a9a-4ea2-8bf1-0cbedd9d5e32", 00:23:45.262 "assigned_rate_limits": { 00:23:45.262 "rw_ios_per_sec": 0, 00:23:45.262 "rw_mbytes_per_sec": 0, 00:23:45.262 "r_mbytes_per_sec": 0, 00:23:45.262 "w_mbytes_per_sec": 0 00:23:45.262 }, 00:23:45.262 "claimed": false, 00:23:45.262 "zoned": false, 00:23:45.262 "supported_io_types": { 00:23:45.262 "read": true, 00:23:45.262 "write": true, 00:23:45.262 "unmap": false, 00:23:45.262 "flush": false, 
00:23:45.262 "reset": true, 00:23:45.262 "nvme_admin": false, 00:23:45.262 "nvme_io": false, 00:23:45.262 "nvme_io_md": false, 00:23:45.262 "write_zeroes": true, 00:23:45.262 "zcopy": false, 00:23:45.262 "get_zone_info": false, 00:23:45.262 "zone_management": false, 00:23:45.262 "zone_append": false, 00:23:45.262 "compare": false, 00:23:45.262 "compare_and_write": false, 00:23:45.262 "abort": false, 00:23:45.262 "seek_hole": false, 00:23:45.262 "seek_data": false, 00:23:45.262 "copy": false, 00:23:45.262 "nvme_iov_md": false 00:23:45.262 }, 00:23:45.262 "memory_domains": [ 00:23:45.262 { 00:23:45.262 "dma_device_id": "system", 00:23:45.262 "dma_device_type": 1 00:23:45.262 }, 00:23:45.262 { 00:23:45.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.262 "dma_device_type": 2 00:23:45.262 }, 00:23:45.262 { 00:23:45.262 "dma_device_id": "system", 00:23:45.262 "dma_device_type": 1 00:23:45.262 }, 00:23:45.262 { 00:23:45.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.262 "dma_device_type": 2 00:23:45.262 }, 00:23:45.262 { 00:23:45.262 "dma_device_id": "system", 00:23:45.262 "dma_device_type": 1 00:23:45.262 }, 00:23:45.262 { 00:23:45.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.262 "dma_device_type": 2 00:23:45.262 }, 00:23:45.262 { 00:23:45.262 "dma_device_id": "system", 00:23:45.262 "dma_device_type": 1 00:23:45.262 }, 00:23:45.262 { 00:23:45.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.262 "dma_device_type": 2 00:23:45.262 } 00:23:45.262 ], 00:23:45.262 "driver_specific": { 00:23:45.262 "raid": { 00:23:45.262 "uuid": "af3eefe7-4a9a-4ea2-8bf1-0cbedd9d5e32", 00:23:45.262 "strip_size_kb": 0, 00:23:45.262 "state": "online", 00:23:45.262 "raid_level": "raid1", 00:23:45.262 "superblock": true, 00:23:45.262 "num_base_bdevs": 4, 00:23:45.262 "num_base_bdevs_discovered": 4, 00:23:45.262 "num_base_bdevs_operational": 4, 00:23:45.262 "base_bdevs_list": [ 00:23:45.262 { 00:23:45.262 "name": "NewBaseBdev", 00:23:45.262 "uuid": "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471", 00:23:45.262 "is_configured": true, 00:23:45.262 "data_offset": 2048, 00:23:45.262 "data_size": 63488 00:23:45.262 }, 00:23:45.262 { 00:23:45.262 "name": "BaseBdev2", 00:23:45.262 "uuid": "e4865a34-2eaf-4776-aa09-0c7f2005f1f6", 00:23:45.262 "is_configured": true, 00:23:45.262 "data_offset": 2048, 00:23:45.262 "data_size": 63488 00:23:45.262 }, 00:23:45.262 { 00:23:45.262 "name": "BaseBdev3", 00:23:45.262 "uuid": "5cfcce47-92d3-4b8d-be26-e33c91244152", 00:23:45.262 "is_configured": true, 00:23:45.262 "data_offset": 2048, 00:23:45.262 "data_size": 63488 00:23:45.262 }, 00:23:45.262 { 00:23:45.262 "name": "BaseBdev4", 00:23:45.262 "uuid": "d71b1f7b-7f08-4358-bef7-1f760a99b69d", 00:23:45.262 "is_configured": true, 00:23:45.262 "data_offset": 2048, 00:23:45.262 "data_size": 63488 00:23:45.262 } 00:23:45.262 ] 00:23:45.262 } 00:23:45.262 } 00:23:45.262 }' 00:23:45.262 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:45.262 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:45.262 BaseBdev2 00:23:45.262 BaseBdev3 00:23:45.262 BaseBdev4' 00:23:45.262 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:45.262 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 
00:23:45.262 00:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:45.520 "name": "NewBaseBdev", 00:23:45.520 "aliases": [ 00:23:45.520 "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471" 00:23:45.520 ], 00:23:45.520 "product_name": "Malloc disk", 00:23:45.520 "block_size": 512, 00:23:45.520 "num_blocks": 65536, 00:23:45.520 "uuid": "5ea00d80-dbe0-4f00-b3b2-92ec95c0b471", 00:23:45.520 "assigned_rate_limits": { 00:23:45.520 "rw_ios_per_sec": 0, 00:23:45.520 "rw_mbytes_per_sec": 0, 00:23:45.520 "r_mbytes_per_sec": 0, 00:23:45.520 "w_mbytes_per_sec": 0 00:23:45.520 }, 00:23:45.520 "claimed": true, 00:23:45.520 "claim_type": "exclusive_write", 00:23:45.520 "zoned": false, 00:23:45.520 "supported_io_types": { 00:23:45.520 "read": true, 00:23:45.520 "write": true, 00:23:45.520 "unmap": true, 00:23:45.520 "flush": true, 00:23:45.520 "reset": true, 00:23:45.520 "nvme_admin": false, 00:23:45.520 "nvme_io": false, 00:23:45.520 "nvme_io_md": false, 00:23:45.520 "write_zeroes": true, 00:23:45.520 "zcopy": true, 00:23:45.520 "get_zone_info": false, 00:23:45.520 "zone_management": false, 00:23:45.520 "zone_append": false, 00:23:45.520 "compare": false, 00:23:45.520 "compare_and_write": false, 00:23:45.520 "abort": true, 00:23:45.520 "seek_hole": false, 00:23:45.520 "seek_data": false, 00:23:45.520 "copy": true, 00:23:45.520 "nvme_iov_md": false 00:23:45.520 }, 00:23:45.520 "memory_domains": [ 00:23:45.520 { 00:23:45.520 "dma_device_id": "system", 00:23:45.520 "dma_device_type": 1 00:23:45.520 }, 00:23:45.520 { 00:23:45.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.520 "dma_device_type": 2 00:23:45.520 } 00:23:45.520 ], 00:23:45.520 "driver_specific": {} 00:23:45.520 }' 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.520 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.521 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:45.521 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:45.521 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:45.521 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:45.779 00:07:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:45.779 "name": "BaseBdev2", 00:23:45.779 "aliases": [ 00:23:45.779 "e4865a34-2eaf-4776-aa09-0c7f2005f1f6" 00:23:45.779 ], 00:23:45.779 "product_name": "Malloc disk", 00:23:45.779 "block_size": 512, 00:23:45.779 "num_blocks": 65536, 00:23:45.779 "uuid": "e4865a34-2eaf-4776-aa09-0c7f2005f1f6", 00:23:45.779 "assigned_rate_limits": { 00:23:45.779 "rw_ios_per_sec": 0, 00:23:45.779 "rw_mbytes_per_sec": 0, 00:23:45.779 "r_mbytes_per_sec": 0, 00:23:45.779 "w_mbytes_per_sec": 0 00:23:45.779 }, 00:23:45.779 "claimed": true, 00:23:45.779 "claim_type": "exclusive_write", 00:23:45.779 "zoned": false, 00:23:45.779 "supported_io_types": { 00:23:45.779 "read": true, 00:23:45.779 "write": true, 00:23:45.779 "unmap": true, 00:23:45.779 "flush": true, 00:23:45.779 "reset": true, 00:23:45.779 "nvme_admin": false, 00:23:45.779 "nvme_io": false, 00:23:45.779 "nvme_io_md": false, 00:23:45.779 "write_zeroes": true, 00:23:45.779 "zcopy": true, 00:23:45.779 "get_zone_info": false, 00:23:45.779 "zone_management": false, 00:23:45.779 "zone_append": false, 00:23:45.779 "compare": false, 00:23:45.779 "compare_and_write": false, 00:23:45.779 "abort": true, 00:23:45.779 "seek_hole": false, 00:23:45.779 "seek_data": false, 00:23:45.779 "copy": true, 00:23:45.779 "nvme_iov_md": false 00:23:45.779 }, 00:23:45.779 "memory_domains": [ 00:23:45.779 { 00:23:45.779 "dma_device_id": "system", 00:23:45.779 "dma_device_type": 1 00:23:45.779 }, 00:23:45.779 { 00:23:45.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.779 "dma_device_type": 2 00:23:45.779 } 00:23:45.779 ], 00:23:45.779 "driver_specific": {} 00:23:45.779 }' 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:45.779 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:46.037 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:46.037 "name": "BaseBdev3", 00:23:46.037 "aliases": [ 
00:23:46.037 "5cfcce47-92d3-4b8d-be26-e33c91244152" 00:23:46.037 ], 00:23:46.037 "product_name": "Malloc disk", 00:23:46.037 "block_size": 512, 00:23:46.037 "num_blocks": 65536, 00:23:46.037 "uuid": "5cfcce47-92d3-4b8d-be26-e33c91244152", 00:23:46.037 "assigned_rate_limits": { 00:23:46.037 "rw_ios_per_sec": 0, 00:23:46.037 "rw_mbytes_per_sec": 0, 00:23:46.037 "r_mbytes_per_sec": 0, 00:23:46.037 "w_mbytes_per_sec": 0 00:23:46.037 }, 00:23:46.037 "claimed": true, 00:23:46.037 "claim_type": "exclusive_write", 00:23:46.037 "zoned": false, 00:23:46.037 "supported_io_types": { 00:23:46.037 "read": true, 00:23:46.037 "write": true, 00:23:46.037 "unmap": true, 00:23:46.037 "flush": true, 00:23:46.037 "reset": true, 00:23:46.037 "nvme_admin": false, 00:23:46.037 "nvme_io": false, 00:23:46.037 "nvme_io_md": false, 00:23:46.037 "write_zeroes": true, 00:23:46.037 "zcopy": true, 00:23:46.037 "get_zone_info": false, 00:23:46.037 "zone_management": false, 00:23:46.037 "zone_append": false, 00:23:46.037 "compare": false, 00:23:46.037 "compare_and_write": false, 00:23:46.037 "abort": true, 00:23:46.037 "seek_hole": false, 00:23:46.037 "seek_data": false, 00:23:46.037 "copy": true, 00:23:46.037 "nvme_iov_md": false 00:23:46.037 }, 00:23:46.037 "memory_domains": [ 00:23:46.038 { 00:23:46.038 "dma_device_id": "system", 00:23:46.038 "dma_device_type": 1 00:23:46.038 }, 00:23:46.038 { 00:23:46.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.038 "dma_device_type": 2 00:23:46.038 } 00:23:46.038 ], 00:23:46.038 "driver_specific": {} 00:23:46.038 }' 00:23:46.038 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.038 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.038 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:46.038 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.038 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.038 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:46.038 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.296 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.296 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:46.296 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.296 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.296 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:46.296 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:46.296 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:46.296 00:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:46.554 "name": "BaseBdev4", 00:23:46.554 "aliases": [ 00:23:46.554 "d71b1f7b-7f08-4358-bef7-1f760a99b69d" 00:23:46.554 ], 00:23:46.554 "product_name": "Malloc disk", 00:23:46.554 "block_size": 512, 
00:23:46.554 "num_blocks": 65536, 00:23:46.554 "uuid": "d71b1f7b-7f08-4358-bef7-1f760a99b69d", 00:23:46.554 "assigned_rate_limits": { 00:23:46.554 "rw_ios_per_sec": 0, 00:23:46.554 "rw_mbytes_per_sec": 0, 00:23:46.554 "r_mbytes_per_sec": 0, 00:23:46.554 "w_mbytes_per_sec": 0 00:23:46.554 }, 00:23:46.554 "claimed": true, 00:23:46.554 "claim_type": "exclusive_write", 00:23:46.554 "zoned": false, 00:23:46.554 "supported_io_types": { 00:23:46.554 "read": true, 00:23:46.554 "write": true, 00:23:46.554 "unmap": true, 00:23:46.554 "flush": true, 00:23:46.554 "reset": true, 00:23:46.554 "nvme_admin": false, 00:23:46.554 "nvme_io": false, 00:23:46.554 "nvme_io_md": false, 00:23:46.554 "write_zeroes": true, 00:23:46.554 "zcopy": true, 00:23:46.554 "get_zone_info": false, 00:23:46.554 "zone_management": false, 00:23:46.554 "zone_append": false, 00:23:46.554 "compare": false, 00:23:46.554 "compare_and_write": false, 00:23:46.554 "abort": true, 00:23:46.554 "seek_hole": false, 00:23:46.554 "seek_data": false, 00:23:46.554 "copy": true, 00:23:46.554 "nvme_iov_md": false 00:23:46.554 }, 00:23:46.554 "memory_domains": [ 00:23:46.554 { 00:23:46.554 "dma_device_id": "system", 00:23:46.554 "dma_device_type": 1 00:23:46.554 }, 00:23:46.554 { 00:23:46.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.554 "dma_device_type": 2 00:23:46.554 } 00:23:46.554 ], 00:23:46.554 "driver_specific": {} 00:23:46.554 }' 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:46.554 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:46.812 [2024-07-25 00:07:42.561301] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:46.812 [2024-07-25 00:07:42.561340] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.812 [2024-07-25 00:07:42.561416] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.812 [2024-07-25 00:07:42.561704] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.812 [2024-07-25 00:07:42.561722] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name Existed_Raid, state offline 
00:23:46.812 00:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 94713 00:23:46.812 00:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94713 ']' 00:23:46.812 00:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 94713 00:23:46.812 00:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:23:46.812 00:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.812 00:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94713 00:23:46.812 killing process with pid 94713 00:23:46.812 00:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:46.813 00:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:46.813 00:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94713' 00:23:46.813 00:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 94713 00:23:46.813 [2024-07-25 00:07:42.614270] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:46.813 00:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 94713 00:23:47.071 [2024-07-25 00:07:42.906484] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:48.460 ************************************ 00:23:48.460 END TEST raid_state_function_test_sb 00:23:48.460 ************************************ 00:23:48.460 00:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:23:48.460 00:23:48.460 real 0m26.903s 00:23:48.460 user 0m47.131s 00:23:48.460 sys 0m4.195s 00:23:48.460 00:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:48.460 00:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.460 00:07:44 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:23:48.460 00:07:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:48.460 00:07:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:48.460 00:07:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:48.460 ************************************ 00:23:48.460 START TEST raid_superblock_test 00:23:48.460 ************************************ 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # 
local base_bdevs_pt_uuid 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=95696 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 95696 /var/tmp/spdk-raid.sock 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 95696 ']' 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.460 00:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.460 [2024-07-25 00:07:44.081844] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
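At this point the log switches to raid_superblock_test: a fresh bdev_svc app is started on the same RPC socket, and each base device is built as a malloc bdev wrapped in a passthru (pt) bdev pinned to a fixed UUID, so the identities recorded in the raid superblock are deterministic across runs. A minimal sketch of the setup the following records log, using only the paths and flags visible in this run:

  # bare bdev_svc app exposing the raid RPC socket, with bdev_raid debug logging
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  # per base bdev: a 32 MB malloc backing device plus a passthru wrapper with a fixed UUID
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
      -u 00000000-0000-0000-0000-000000000001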
00:23:48.460 [2024-07-25 00:07:44.082283] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95696 ] 00:23:48.460 [2024-07-25 00:07:44.250446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.718 [2024-07-25 00:07:44.415759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.718 [2024-07-25 00:07:44.577920] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.283 00:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.283 00:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:23:49.283 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:23:49.283 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:49.283 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:23:49.283 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:23:49.283 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:49.283 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:49.283 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:49.283 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:49.283 00:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:49.541 malloc1 00:23:49.541 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:49.799 [2024-07-25 00:07:45.445155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:49.799 [2024-07-25 00:07:45.445447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.799 [2024-07-25 00:07:45.445600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:23:49.799 [2024-07-25 00:07:45.445717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.799 [2024-07-25 00:07:45.448236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.799 [2024-07-25 00:07:45.448277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:49.799 pt1 00:23:49.799 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:49.799 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:49.799 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:23:49.799 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:23:49.799 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:49.799 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:23:49.799 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:49.799 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:49.799 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:50.057 malloc2 00:23:50.057 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:50.057 [2024-07-25 00:07:45.906087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:50.057 [2024-07-25 00:07:45.906191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.057 [2024-07-25 00:07:45.906223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:23:50.057 [2024-07-25 00:07:45.906237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.057 [2024-07-25 00:07:45.908675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.057 [2024-07-25 00:07:45.908716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:50.057 pt2 00:23:50.315 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:50.315 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:50.315 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:23:50.315 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:23:50.315 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:50.315 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:50.315 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:50.315 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:50.315 00:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:50.315 malloc3 00:23:50.315 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:50.573 [2024-07-25 00:07:46.370705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:50.573 [2024-07-25 00:07:46.370878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.573 [2024-07-25 00:07:46.370944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:23:50.573 [2024-07-25 00:07:46.370963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.573 [2024-07-25 00:07:46.373356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.573 [2024-07-25 00:07:46.373413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:50.573 pt3 00:23:50.573 
00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:50.573 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:50.573 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:23:50.573 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:23:50.573 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:50.573 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:50.573 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:50.573 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:50.573 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:50.832 malloc4 00:23:50.832 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:51.090 [2024-07-25 00:07:46.872111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:51.090 [2024-07-25 00:07:46.872217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.090 [2024-07-25 00:07:46.872268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:23:51.090 [2024-07-25 00:07:46.872282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.090 [2024-07-25 00:07:46.874669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.090 [2024-07-25 00:07:46.874709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:51.090 pt4 00:23:51.090 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:51.090 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:51.090 00:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:51.348 [2024-07-25 00:07:47.128215] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:51.348 [2024-07-25 00:07:47.130324] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:51.348 [2024-07-25 00:07:47.130408] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:51.348 [2024-07-25 00:07:47.130469] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:51.348 [2024-07-25 00:07:47.130786] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:23:51.348 [2024-07-25 00:07:47.130805] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:51.348 [2024-07-25 00:07:47.131005] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:23:51.348 [2024-07-25 00:07:47.131478] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:23:51.348 [2024-07-25 00:07:47.131510] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:23:51.348 [2024-07-25 00:07:47.131726] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.348 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.606 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.606 "name": "raid_bdev1", 00:23:51.606 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:23:51.606 "strip_size_kb": 0, 00:23:51.606 "state": "online", 00:23:51.606 "raid_level": "raid1", 00:23:51.606 "superblock": true, 00:23:51.606 "num_base_bdevs": 4, 00:23:51.606 "num_base_bdevs_discovered": 4, 00:23:51.606 "num_base_bdevs_operational": 4, 00:23:51.606 "base_bdevs_list": [ 00:23:51.606 { 00:23:51.606 "name": "pt1", 00:23:51.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:51.606 "is_configured": true, 00:23:51.606 "data_offset": 2048, 00:23:51.606 "data_size": 63488 00:23:51.606 }, 00:23:51.606 { 00:23:51.606 "name": "pt2", 00:23:51.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:51.606 "is_configured": true, 00:23:51.606 "data_offset": 2048, 00:23:51.606 "data_size": 63488 00:23:51.606 }, 00:23:51.606 { 00:23:51.606 "name": "pt3", 00:23:51.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:51.606 "is_configured": true, 00:23:51.606 "data_offset": 2048, 00:23:51.606 "data_size": 63488 00:23:51.606 }, 00:23:51.606 { 00:23:51.606 "name": "pt4", 00:23:51.606 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:51.606 "is_configured": true, 00:23:51.606 "data_offset": 2048, 00:23:51.606 "data_size": 63488 00:23:51.606 } 00:23:51.606 ] 00:23:51.606 }' 00:23:51.606 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.606 00:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.865 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:23:51.865 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:51.865 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:51.865 
00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:51.865 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:51.865 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:51.865 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:51.865 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:52.124 [2024-07-25 00:07:47.908800] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.124 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:52.124 "name": "raid_bdev1", 00:23:52.124 "aliases": [ 00:23:52.124 "3ae71a3a-a017-4556-8bc2-1aaccf2fadea" 00:23:52.124 ], 00:23:52.124 "product_name": "Raid Volume", 00:23:52.124 "block_size": 512, 00:23:52.124 "num_blocks": 63488, 00:23:52.124 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:23:52.124 "assigned_rate_limits": { 00:23:52.124 "rw_ios_per_sec": 0, 00:23:52.124 "rw_mbytes_per_sec": 0, 00:23:52.124 "r_mbytes_per_sec": 0, 00:23:52.124 "w_mbytes_per_sec": 0 00:23:52.124 }, 00:23:52.124 "claimed": false, 00:23:52.124 "zoned": false, 00:23:52.124 "supported_io_types": { 00:23:52.124 "read": true, 00:23:52.124 "write": true, 00:23:52.124 "unmap": false, 00:23:52.124 "flush": false, 00:23:52.124 "reset": true, 00:23:52.124 "nvme_admin": false, 00:23:52.124 "nvme_io": false, 00:23:52.124 "nvme_io_md": false, 00:23:52.124 "write_zeroes": true, 00:23:52.124 "zcopy": false, 00:23:52.124 "get_zone_info": false, 00:23:52.124 "zone_management": false, 00:23:52.124 "zone_append": false, 00:23:52.124 "compare": false, 00:23:52.124 "compare_and_write": false, 00:23:52.124 "abort": false, 00:23:52.124 "seek_hole": false, 00:23:52.124 "seek_data": false, 00:23:52.124 "copy": false, 00:23:52.124 "nvme_iov_md": false 00:23:52.124 }, 00:23:52.124 "memory_domains": [ 00:23:52.124 { 00:23:52.124 "dma_device_id": "system", 00:23:52.124 "dma_device_type": 1 00:23:52.124 }, 00:23:52.124 { 00:23:52.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.124 "dma_device_type": 2 00:23:52.124 }, 00:23:52.124 { 00:23:52.124 "dma_device_id": "system", 00:23:52.124 "dma_device_type": 1 00:23:52.124 }, 00:23:52.124 { 00:23:52.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.124 "dma_device_type": 2 00:23:52.124 }, 00:23:52.124 { 00:23:52.124 "dma_device_id": "system", 00:23:52.124 "dma_device_type": 1 00:23:52.124 }, 00:23:52.124 { 00:23:52.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.124 "dma_device_type": 2 00:23:52.124 }, 00:23:52.124 { 00:23:52.124 "dma_device_id": "system", 00:23:52.124 "dma_device_type": 1 00:23:52.124 }, 00:23:52.124 { 00:23:52.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.124 "dma_device_type": 2 00:23:52.124 } 00:23:52.124 ], 00:23:52.124 "driver_specific": { 00:23:52.124 "raid": { 00:23:52.124 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:23:52.124 "strip_size_kb": 0, 00:23:52.124 "state": "online", 00:23:52.124 "raid_level": "raid1", 00:23:52.124 "superblock": true, 00:23:52.124 "num_base_bdevs": 4, 00:23:52.124 "num_base_bdevs_discovered": 4, 00:23:52.124 "num_base_bdevs_operational": 4, 00:23:52.124 "base_bdevs_list": [ 00:23:52.124 { 00:23:52.124 "name": "pt1", 00:23:52.124 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:52.124 "is_configured": true, 00:23:52.124 
"data_offset": 2048, 00:23:52.124 "data_size": 63488 00:23:52.124 }, 00:23:52.124 { 00:23:52.124 "name": "pt2", 00:23:52.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:52.124 "is_configured": true, 00:23:52.124 "data_offset": 2048, 00:23:52.124 "data_size": 63488 00:23:52.124 }, 00:23:52.124 { 00:23:52.124 "name": "pt3", 00:23:52.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:52.124 "is_configured": true, 00:23:52.124 "data_offset": 2048, 00:23:52.124 "data_size": 63488 00:23:52.124 }, 00:23:52.124 { 00:23:52.125 "name": "pt4", 00:23:52.125 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:52.125 "is_configured": true, 00:23:52.125 "data_offset": 2048, 00:23:52.125 "data_size": 63488 00:23:52.125 } 00:23:52.125 ] 00:23:52.125 } 00:23:52.125 } 00:23:52.125 }' 00:23:52.125 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:52.125 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:52.125 pt2 00:23:52.125 pt3 00:23:52.125 pt4' 00:23:52.125 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:52.125 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:52.125 00:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:52.383 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:52.383 "name": "pt1", 00:23:52.383 "aliases": [ 00:23:52.383 "00000000-0000-0000-0000-000000000001" 00:23:52.383 ], 00:23:52.383 "product_name": "passthru", 00:23:52.383 "block_size": 512, 00:23:52.383 "num_blocks": 65536, 00:23:52.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:52.383 "assigned_rate_limits": { 00:23:52.383 "rw_ios_per_sec": 0, 00:23:52.383 "rw_mbytes_per_sec": 0, 00:23:52.384 "r_mbytes_per_sec": 0, 00:23:52.384 "w_mbytes_per_sec": 0 00:23:52.384 }, 00:23:52.384 "claimed": true, 00:23:52.384 "claim_type": "exclusive_write", 00:23:52.384 "zoned": false, 00:23:52.384 "supported_io_types": { 00:23:52.384 "read": true, 00:23:52.384 "write": true, 00:23:52.384 "unmap": true, 00:23:52.384 "flush": true, 00:23:52.384 "reset": true, 00:23:52.384 "nvme_admin": false, 00:23:52.384 "nvme_io": false, 00:23:52.384 "nvme_io_md": false, 00:23:52.384 "write_zeroes": true, 00:23:52.384 "zcopy": true, 00:23:52.384 "get_zone_info": false, 00:23:52.384 "zone_management": false, 00:23:52.384 "zone_append": false, 00:23:52.384 "compare": false, 00:23:52.384 "compare_and_write": false, 00:23:52.384 "abort": true, 00:23:52.384 "seek_hole": false, 00:23:52.384 "seek_data": false, 00:23:52.384 "copy": true, 00:23:52.384 "nvme_iov_md": false 00:23:52.384 }, 00:23:52.384 "memory_domains": [ 00:23:52.384 { 00:23:52.384 "dma_device_id": "system", 00:23:52.384 "dma_device_type": 1 00:23:52.384 }, 00:23:52.384 { 00:23:52.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.384 "dma_device_type": 2 00:23:52.384 } 00:23:52.384 ], 00:23:52.384 "driver_specific": { 00:23:52.384 "passthru": { 00:23:52.384 "name": "pt1", 00:23:52.384 "base_bdev_name": "malloc1" 00:23:52.384 } 00:23:52.384 } 00:23:52.384 }' 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:52.384 00:07:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:52.384 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:52.643 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:52.643 "name": "pt2", 00:23:52.643 "aliases": [ 00:23:52.643 "00000000-0000-0000-0000-000000000002" 00:23:52.643 ], 00:23:52.643 "product_name": "passthru", 00:23:52.643 "block_size": 512, 00:23:52.643 "num_blocks": 65536, 00:23:52.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:52.643 "assigned_rate_limits": { 00:23:52.643 "rw_ios_per_sec": 0, 00:23:52.643 "rw_mbytes_per_sec": 0, 00:23:52.643 "r_mbytes_per_sec": 0, 00:23:52.643 "w_mbytes_per_sec": 0 00:23:52.643 }, 00:23:52.643 "claimed": true, 00:23:52.643 "claim_type": "exclusive_write", 00:23:52.643 "zoned": false, 00:23:52.643 "supported_io_types": { 00:23:52.643 "read": true, 00:23:52.643 "write": true, 00:23:52.643 "unmap": true, 00:23:52.643 "flush": true, 00:23:52.643 "reset": true, 00:23:52.643 "nvme_admin": false, 00:23:52.643 "nvme_io": false, 00:23:52.643 "nvme_io_md": false, 00:23:52.643 "write_zeroes": true, 00:23:52.643 "zcopy": true, 00:23:52.643 "get_zone_info": false, 00:23:52.643 "zone_management": false, 00:23:52.643 "zone_append": false, 00:23:52.643 "compare": false, 00:23:52.643 "compare_and_write": false, 00:23:52.643 "abort": true, 00:23:52.643 "seek_hole": false, 00:23:52.643 "seek_data": false, 00:23:52.643 "copy": true, 00:23:52.643 "nvme_iov_md": false 00:23:52.643 }, 00:23:52.643 "memory_domains": [ 00:23:52.643 { 00:23:52.643 "dma_device_id": "system", 00:23:52.643 "dma_device_type": 1 00:23:52.643 }, 00:23:52.643 { 00:23:52.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.643 "dma_device_type": 2 00:23:52.643 } 00:23:52.643 ], 00:23:52.643 "driver_specific": { 00:23:52.643 "passthru": { 00:23:52.643 "name": "pt2", 00:23:52.643 "base_bdev_name": "malloc2" 00:23:52.643 } 00:23:52.643 } 00:23:52.643 }' 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:52.902 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:53.160 "name": "pt3", 00:23:53.160 "aliases": [ 00:23:53.160 "00000000-0000-0000-0000-000000000003" 00:23:53.160 ], 00:23:53.160 "product_name": "passthru", 00:23:53.160 "block_size": 512, 00:23:53.160 "num_blocks": 65536, 00:23:53.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:53.160 "assigned_rate_limits": { 00:23:53.160 "rw_ios_per_sec": 0, 00:23:53.160 "rw_mbytes_per_sec": 0, 00:23:53.160 "r_mbytes_per_sec": 0, 00:23:53.160 "w_mbytes_per_sec": 0 00:23:53.160 }, 00:23:53.160 "claimed": true, 00:23:53.160 "claim_type": "exclusive_write", 00:23:53.160 "zoned": false, 00:23:53.160 "supported_io_types": { 00:23:53.160 "read": true, 00:23:53.160 "write": true, 00:23:53.160 "unmap": true, 00:23:53.160 "flush": true, 00:23:53.160 "reset": true, 00:23:53.160 "nvme_admin": false, 00:23:53.160 "nvme_io": false, 00:23:53.160 "nvme_io_md": false, 00:23:53.160 "write_zeroes": true, 00:23:53.160 "zcopy": true, 00:23:53.160 "get_zone_info": false, 00:23:53.160 "zone_management": false, 00:23:53.160 "zone_append": false, 00:23:53.160 "compare": false, 00:23:53.160 "compare_and_write": false, 00:23:53.160 "abort": true, 00:23:53.160 "seek_hole": false, 00:23:53.160 "seek_data": false, 00:23:53.160 "copy": true, 00:23:53.160 "nvme_iov_md": false 00:23:53.160 }, 00:23:53.160 "memory_domains": [ 00:23:53.160 { 00:23:53.160 "dma_device_id": "system", 00:23:53.160 "dma_device_type": 1 00:23:53.160 }, 00:23:53.160 { 00:23:53.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.160 "dma_device_type": 2 00:23:53.160 } 00:23:53.160 ], 00:23:53.160 "driver_specific": { 00:23:53.160 "passthru": { 00:23:53.160 "name": "pt3", 00:23:53.160 "base_bdev_name": "malloc3" 00:23:53.160 } 00:23:53.160 } 00:23:53.160 }' 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.160 
00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:53.160 00:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:53.418 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:53.418 "name": "pt4", 00:23:53.418 "aliases": [ 00:23:53.418 "00000000-0000-0000-0000-000000000004" 00:23:53.418 ], 00:23:53.418 "product_name": "passthru", 00:23:53.418 "block_size": 512, 00:23:53.418 "num_blocks": 65536, 00:23:53.418 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:53.418 "assigned_rate_limits": { 00:23:53.418 "rw_ios_per_sec": 0, 00:23:53.418 "rw_mbytes_per_sec": 0, 00:23:53.418 "r_mbytes_per_sec": 0, 00:23:53.418 "w_mbytes_per_sec": 0 00:23:53.418 }, 00:23:53.418 "claimed": true, 00:23:53.418 "claim_type": "exclusive_write", 00:23:53.418 "zoned": false, 00:23:53.418 "supported_io_types": { 00:23:53.418 "read": true, 00:23:53.418 "write": true, 00:23:53.418 "unmap": true, 00:23:53.418 "flush": true, 00:23:53.418 "reset": true, 00:23:53.418 "nvme_admin": false, 00:23:53.419 "nvme_io": false, 00:23:53.419 "nvme_io_md": false, 00:23:53.419 "write_zeroes": true, 00:23:53.419 "zcopy": true, 00:23:53.419 "get_zone_info": false, 00:23:53.419 "zone_management": false, 00:23:53.419 "zone_append": false, 00:23:53.419 "compare": false, 00:23:53.419 "compare_and_write": false, 00:23:53.419 "abort": true, 00:23:53.419 "seek_hole": false, 00:23:53.419 "seek_data": false, 00:23:53.419 "copy": true, 00:23:53.419 "nvme_iov_md": false 00:23:53.419 }, 00:23:53.419 "memory_domains": [ 00:23:53.419 { 00:23:53.419 "dma_device_id": "system", 00:23:53.419 "dma_device_type": 1 00:23:53.419 }, 00:23:53.419 { 00:23:53.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.419 "dma_device_type": 2 00:23:53.419 } 00:23:53.419 ], 00:23:53.419 "driver_specific": { 00:23:53.419 "passthru": { 00:23:53.419 "name": "pt4", 00:23:53.419 "base_bdev_name": "malloc4" 00:23:53.419 } 00:23:53.419 } 00:23:53.419 }' 00:23:53.419 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.419 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.419 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:53.419 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.677 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.677 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:53.677 00:07:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.677 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.677 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:53.677 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.677 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.677 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:53.677 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:53.677 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:23:53.677 [2024-07-25 00:07:49.537226] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:53.936 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=3ae71a3a-a017-4556-8bc2-1aaccf2fadea 00:23:53.936 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 3ae71a3a-a017-4556-8bc2-1aaccf2fadea ']' 00:23:53.936 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:53.936 [2024-07-25 00:07:49.801053] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:53.936 [2024-07-25 00:07:49.801090] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:53.936 [2024-07-25 00:07:49.801179] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:53.936 [2024-07-25 00:07:49.801320] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:53.936 [2024-07-25 00:07:49.801362] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:23:54.194 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.194 00:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:23:54.452 00:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:23:54.452 00:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:23:54.452 00:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:54.452 00:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:54.452 00:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:54.453 00:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:54.711 00:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:54.711 00:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:54.969 00:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:54.970 00:07:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:55.226 00:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:55.226 00:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:55.486 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:55.745 [2024-07-25 00:07:51.441454] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:55.745 [2024-07-25 00:07:51.443660] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:55.745 [2024-07-25 00:07:51.443925] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:55.745 [2024-07-25 00:07:51.443990] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:55.745 [2024-07-25 00:07:51.444058] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:55.745 [2024-07-25 00:07:51.444272] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:55.745 [2024-07-25 00:07:51.444305] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:55.745 [2024-07-25 00:07:51.444364] 
bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:55.745 [2024-07-25 00:07:51.444384] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:55.745 [2024-07-25 00:07:51.444398] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state configuring 00:23:55.745 request: 00:23:55.745 { 00:23:55.745 "name": "raid_bdev1", 00:23:55.745 "raid_level": "raid1", 00:23:55.745 "base_bdevs": [ 00:23:55.745 "malloc1", 00:23:55.745 "malloc2", 00:23:55.745 "malloc3", 00:23:55.745 "malloc4" 00:23:55.745 ], 00:23:55.745 "superblock": false, 00:23:55.745 "method": "bdev_raid_create", 00:23:55.745 "req_id": 1 00:23:55.745 } 00:23:55.745 Got JSON-RPC error response 00:23:55.745 response: 00:23:55.745 { 00:23:55.745 "code": -17, 00:23:55.745 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:55.745 } 00:23:55.745 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:23:55.745 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:55.745 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:55.745 00:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:55.745 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.745 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:23:56.003 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:23:56.003 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:23:56.003 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:56.262 [2024-07-25 00:07:51.953479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:56.262 [2024-07-25 00:07:51.953574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.262 [2024-07-25 00:07:51.953600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:23:56.262 [2024-07-25 00:07:51.953616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.262 [2024-07-25 00:07:51.956074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.262 [2024-07-25 00:07:51.956119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:56.262 [2024-07-25 00:07:51.956215] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:56.262 [2024-07-25 00:07:51.956284] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:56.262 pt1 00:23:56.262 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:23:56.262 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:56.262 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:56.262 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:56.262 00:07:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:56.262 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:56.262 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:56.262 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:56.262 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:56.262 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:56.262 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.262 00:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.520 00:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:56.520 "name": "raid_bdev1", 00:23:56.520 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:23:56.520 "strip_size_kb": 0, 00:23:56.520 "state": "configuring", 00:23:56.520 "raid_level": "raid1", 00:23:56.520 "superblock": true, 00:23:56.520 "num_base_bdevs": 4, 00:23:56.520 "num_base_bdevs_discovered": 1, 00:23:56.520 "num_base_bdevs_operational": 4, 00:23:56.520 "base_bdevs_list": [ 00:23:56.520 { 00:23:56.520 "name": "pt1", 00:23:56.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:56.520 "is_configured": true, 00:23:56.520 "data_offset": 2048, 00:23:56.520 "data_size": 63488 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "name": null, 00:23:56.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:56.520 "is_configured": false, 00:23:56.520 "data_offset": 2048, 00:23:56.520 "data_size": 63488 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "name": null, 00:23:56.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:56.520 "is_configured": false, 00:23:56.520 "data_offset": 2048, 00:23:56.520 "data_size": 63488 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "name": null, 00:23:56.520 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:56.520 "is_configured": false, 00:23:56.520 "data_offset": 2048, 00:23:56.520 "data_size": 63488 00:23:56.520 } 00:23:56.520 ] 00:23:56.520 }' 00:23:56.520 00:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:56.520 00:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.779 00:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:23:56.779 00:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:57.037 [2024-07-25 00:07:52.725698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:57.037 [2024-07-25 00:07:52.725794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.037 [2024-07-25 00:07:52.725853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:23:57.037 [2024-07-25 00:07:52.725873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.037 [2024-07-25 00:07:52.726418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.037 [2024-07-25 00:07:52.726464] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:23:57.037 [2024-07-25 00:07:52.726564] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:57.037 [2024-07-25 00:07:52.726618] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:57.038 pt2 00:23:57.038 00:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:57.296 [2024-07-25 00:07:52.985780] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.296 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.554 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:57.554 "name": "raid_bdev1", 00:23:57.554 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:23:57.554 "strip_size_kb": 0, 00:23:57.554 "state": "configuring", 00:23:57.554 "raid_level": "raid1", 00:23:57.554 "superblock": true, 00:23:57.554 "num_base_bdevs": 4, 00:23:57.554 "num_base_bdevs_discovered": 1, 00:23:57.554 "num_base_bdevs_operational": 4, 00:23:57.554 "base_bdevs_list": [ 00:23:57.554 { 00:23:57.554 "name": "pt1", 00:23:57.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:57.554 "is_configured": true, 00:23:57.554 "data_offset": 2048, 00:23:57.554 "data_size": 63488 00:23:57.554 }, 00:23:57.554 { 00:23:57.554 "name": null, 00:23:57.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:57.554 "is_configured": false, 00:23:57.554 "data_offset": 2048, 00:23:57.554 "data_size": 63488 00:23:57.554 }, 00:23:57.554 { 00:23:57.554 "name": null, 00:23:57.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:57.554 "is_configured": false, 00:23:57.554 "data_offset": 2048, 00:23:57.554 "data_size": 63488 00:23:57.554 }, 00:23:57.554 { 00:23:57.554 "name": null, 00:23:57.554 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:57.554 "is_configured": false, 00:23:57.554 "data_offset": 2048, 00:23:57.554 "data_size": 63488 00:23:57.554 } 00:23:57.554 ] 00:23:57.554 }' 00:23:57.554 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:57.554 00:07:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.811 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:23:57.811 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:57.811 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:58.069 [2024-07-25 00:07:53.778023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:58.069 [2024-07-25 00:07:53.778120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.069 [2024-07-25 00:07:53.778173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:23:58.069 [2024-07-25 00:07:53.778190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.069 [2024-07-25 00:07:53.778671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.069 [2024-07-25 00:07:53.778696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:58.069 [2024-07-25 00:07:53.778849] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:58.069 [2024-07-25 00:07:53.778894] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:58.069 pt2 00:23:58.069 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:58.069 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:58.069 00:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:58.327 [2024-07-25 00:07:53.994085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:58.327 [2024-07-25 00:07:53.994390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.327 [2024-07-25 00:07:53.994486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:23:58.327 [2024-07-25 00:07:53.994775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.327 [2024-07-25 00:07:53.995423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.327 [2024-07-25 00:07:53.995597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:58.327 [2024-07-25 00:07:53.995862] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:58.327 [2024-07-25 00:07:53.996006] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:58.327 pt3 00:23:58.327 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:58.327 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:58.327 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:58.585 [2024-07-25 00:07:54.214122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:58.585 [2024-07-25 00:07:54.214419] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.585 [2024-07-25 00:07:54.214515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:23:58.585 [2024-07-25 00:07:54.214772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.585 [2024-07-25 00:07:54.215348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.585 [2024-07-25 00:07:54.215384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:58.585 [2024-07-25 00:07:54.215505] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:58.585 [2024-07-25 00:07:54.215532] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:58.585 [2024-07-25 00:07:54.215695] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:23:58.585 [2024-07-25 00:07:54.215709] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:58.585 [2024-07-25 00:07:54.215880] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:23:58.585 [2024-07-25 00:07:54.216277] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:23:58.585 [2024-07-25 00:07:54.216303] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:23:58.586 [2024-07-25 00:07:54.216479] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.586 pt4 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.586 "name": "raid_bdev1", 00:23:58.586 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:23:58.586 "strip_size_kb": 0, 00:23:58.586 "state": "online", 00:23:58.586 "raid_level": "raid1", 00:23:58.586 "superblock": true, 00:23:58.586 
"num_base_bdevs": 4, 00:23:58.586 "num_base_bdevs_discovered": 4, 00:23:58.586 "num_base_bdevs_operational": 4, 00:23:58.586 "base_bdevs_list": [ 00:23:58.586 { 00:23:58.586 "name": "pt1", 00:23:58.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:58.586 "is_configured": true, 00:23:58.586 "data_offset": 2048, 00:23:58.586 "data_size": 63488 00:23:58.586 }, 00:23:58.586 { 00:23:58.586 "name": "pt2", 00:23:58.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:58.586 "is_configured": true, 00:23:58.586 "data_offset": 2048, 00:23:58.586 "data_size": 63488 00:23:58.586 }, 00:23:58.586 { 00:23:58.586 "name": "pt3", 00:23:58.586 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:58.586 "is_configured": true, 00:23:58.586 "data_offset": 2048, 00:23:58.586 "data_size": 63488 00:23:58.586 }, 00:23:58.586 { 00:23:58.586 "name": "pt4", 00:23:58.586 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:58.586 "is_configured": true, 00:23:58.586 "data_offset": 2048, 00:23:58.586 "data_size": 63488 00:23:58.586 } 00:23:58.586 ] 00:23:58.586 }' 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.586 00:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.153 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:23:59.153 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:59.153 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:59.153 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:59.153 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:59.153 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:59.153 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:59.153 00:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:59.153 [2024-07-25 00:07:54.982893] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:59.153 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:59.153 "name": "raid_bdev1", 00:23:59.153 "aliases": [ 00:23:59.153 "3ae71a3a-a017-4556-8bc2-1aaccf2fadea" 00:23:59.153 ], 00:23:59.153 "product_name": "Raid Volume", 00:23:59.153 "block_size": 512, 00:23:59.153 "num_blocks": 63488, 00:23:59.153 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:23:59.153 "assigned_rate_limits": { 00:23:59.153 "rw_ios_per_sec": 0, 00:23:59.153 "rw_mbytes_per_sec": 0, 00:23:59.153 "r_mbytes_per_sec": 0, 00:23:59.153 "w_mbytes_per_sec": 0 00:23:59.153 }, 00:23:59.153 "claimed": false, 00:23:59.153 "zoned": false, 00:23:59.153 "supported_io_types": { 00:23:59.153 "read": true, 00:23:59.153 "write": true, 00:23:59.153 "unmap": false, 00:23:59.153 "flush": false, 00:23:59.153 "reset": true, 00:23:59.153 "nvme_admin": false, 00:23:59.153 "nvme_io": false, 00:23:59.153 "nvme_io_md": false, 00:23:59.153 "write_zeroes": true, 00:23:59.153 "zcopy": false, 00:23:59.153 "get_zone_info": false, 00:23:59.153 "zone_management": false, 00:23:59.153 "zone_append": false, 00:23:59.153 "compare": false, 00:23:59.153 "compare_and_write": false, 00:23:59.153 "abort": false, 00:23:59.153 "seek_hole": false, 
00:23:59.153 "seek_data": false, 00:23:59.153 "copy": false, 00:23:59.153 "nvme_iov_md": false 00:23:59.153 }, 00:23:59.153 "memory_domains": [ 00:23:59.153 { 00:23:59.153 "dma_device_id": "system", 00:23:59.153 "dma_device_type": 1 00:23:59.153 }, 00:23:59.153 { 00:23:59.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.153 "dma_device_type": 2 00:23:59.153 }, 00:23:59.153 { 00:23:59.153 "dma_device_id": "system", 00:23:59.153 "dma_device_type": 1 00:23:59.153 }, 00:23:59.153 { 00:23:59.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.153 "dma_device_type": 2 00:23:59.153 }, 00:23:59.153 { 00:23:59.153 "dma_device_id": "system", 00:23:59.153 "dma_device_type": 1 00:23:59.153 }, 00:23:59.153 { 00:23:59.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.153 "dma_device_type": 2 00:23:59.153 }, 00:23:59.153 { 00:23:59.153 "dma_device_id": "system", 00:23:59.153 "dma_device_type": 1 00:23:59.153 }, 00:23:59.153 { 00:23:59.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.153 "dma_device_type": 2 00:23:59.153 } 00:23:59.153 ], 00:23:59.153 "driver_specific": { 00:23:59.153 "raid": { 00:23:59.153 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:23:59.153 "strip_size_kb": 0, 00:23:59.153 "state": "online", 00:23:59.153 "raid_level": "raid1", 00:23:59.153 "superblock": true, 00:23:59.153 "num_base_bdevs": 4, 00:23:59.153 "num_base_bdevs_discovered": 4, 00:23:59.153 "num_base_bdevs_operational": 4, 00:23:59.153 "base_bdevs_list": [ 00:23:59.153 { 00:23:59.153 "name": "pt1", 00:23:59.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:59.153 "is_configured": true, 00:23:59.153 "data_offset": 2048, 00:23:59.153 "data_size": 63488 00:23:59.153 }, 00:23:59.153 { 00:23:59.153 "name": "pt2", 00:23:59.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.153 "is_configured": true, 00:23:59.153 "data_offset": 2048, 00:23:59.153 "data_size": 63488 00:23:59.153 }, 00:23:59.153 { 00:23:59.153 "name": "pt3", 00:23:59.153 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:59.153 "is_configured": true, 00:23:59.153 "data_offset": 2048, 00:23:59.153 "data_size": 63488 00:23:59.153 }, 00:23:59.153 { 00:23:59.153 "name": "pt4", 00:23:59.153 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:59.153 "is_configured": true, 00:23:59.153 "data_offset": 2048, 00:23:59.153 "data_size": 63488 00:23:59.153 } 00:23:59.153 ] 00:23:59.153 } 00:23:59.153 } 00:23:59.153 }' 00:23:59.153 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:59.153 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:59.153 pt2 00:23:59.153 pt3 00:23:59.153 pt4' 00:23:59.153 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:59.153 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:59.153 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:59.412 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:59.412 "name": "pt1", 00:23:59.412 "aliases": [ 00:23:59.412 "00000000-0000-0000-0000-000000000001" 00:23:59.412 ], 00:23:59.412 "product_name": "passthru", 00:23:59.412 "block_size": 512, 00:23:59.412 "num_blocks": 65536, 00:23:59.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:59.412 "assigned_rate_limits": { 
00:23:59.412 "rw_ios_per_sec": 0, 00:23:59.412 "rw_mbytes_per_sec": 0, 00:23:59.412 "r_mbytes_per_sec": 0, 00:23:59.412 "w_mbytes_per_sec": 0 00:23:59.412 }, 00:23:59.412 "claimed": true, 00:23:59.412 "claim_type": "exclusive_write", 00:23:59.412 "zoned": false, 00:23:59.412 "supported_io_types": { 00:23:59.412 "read": true, 00:23:59.412 "write": true, 00:23:59.412 "unmap": true, 00:23:59.412 "flush": true, 00:23:59.412 "reset": true, 00:23:59.412 "nvme_admin": false, 00:23:59.412 "nvme_io": false, 00:23:59.412 "nvme_io_md": false, 00:23:59.412 "write_zeroes": true, 00:23:59.412 "zcopy": true, 00:23:59.412 "get_zone_info": false, 00:23:59.412 "zone_management": false, 00:23:59.412 "zone_append": false, 00:23:59.412 "compare": false, 00:23:59.412 "compare_and_write": false, 00:23:59.412 "abort": true, 00:23:59.412 "seek_hole": false, 00:23:59.412 "seek_data": false, 00:23:59.412 "copy": true, 00:23:59.412 "nvme_iov_md": false 00:23:59.412 }, 00:23:59.412 "memory_domains": [ 00:23:59.412 { 00:23:59.412 "dma_device_id": "system", 00:23:59.412 "dma_device_type": 1 00:23:59.412 }, 00:23:59.412 { 00:23:59.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.412 "dma_device_type": 2 00:23:59.412 } 00:23:59.412 ], 00:23:59.412 "driver_specific": { 00:23:59.412 "passthru": { 00:23:59.412 "name": "pt1", 00:23:59.412 "base_bdev_name": "malloc1" 00:23:59.412 } 00:23:59.412 } 00:23:59.412 }' 00:23:59.412 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:59.412 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:59.412 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:59.412 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.412 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.412 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:59.412 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.670 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.670 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:59.670 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.670 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.670 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:59.670 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:59.670 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:59.670 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:59.928 "name": "pt2", 00:23:59.928 "aliases": [ 00:23:59.928 "00000000-0000-0000-0000-000000000002" 00:23:59.928 ], 00:23:59.928 "product_name": "passthru", 00:23:59.928 "block_size": 512, 00:23:59.928 "num_blocks": 65536, 00:23:59.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.928 "assigned_rate_limits": { 00:23:59.928 "rw_ios_per_sec": 0, 00:23:59.928 "rw_mbytes_per_sec": 0, 00:23:59.928 "r_mbytes_per_sec": 0, 00:23:59.928 "w_mbytes_per_sec": 0 00:23:59.928 
}, 00:23:59.928 "claimed": true, 00:23:59.928 "claim_type": "exclusive_write", 00:23:59.928 "zoned": false, 00:23:59.928 "supported_io_types": { 00:23:59.928 "read": true, 00:23:59.928 "write": true, 00:23:59.928 "unmap": true, 00:23:59.928 "flush": true, 00:23:59.928 "reset": true, 00:23:59.928 "nvme_admin": false, 00:23:59.928 "nvme_io": false, 00:23:59.928 "nvme_io_md": false, 00:23:59.928 "write_zeroes": true, 00:23:59.928 "zcopy": true, 00:23:59.928 "get_zone_info": false, 00:23:59.928 "zone_management": false, 00:23:59.928 "zone_append": false, 00:23:59.928 "compare": false, 00:23:59.928 "compare_and_write": false, 00:23:59.928 "abort": true, 00:23:59.928 "seek_hole": false, 00:23:59.928 "seek_data": false, 00:23:59.928 "copy": true, 00:23:59.928 "nvme_iov_md": false 00:23:59.928 }, 00:23:59.928 "memory_domains": [ 00:23:59.928 { 00:23:59.928 "dma_device_id": "system", 00:23:59.928 "dma_device_type": 1 00:23:59.928 }, 00:23:59.928 { 00:23:59.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.928 "dma_device_type": 2 00:23:59.928 } 00:23:59.928 ], 00:23:59.928 "driver_specific": { 00:23:59.928 "passthru": { 00:23:59.928 "name": "pt2", 00:23:59.928 "base_bdev_name": "malloc2" 00:23:59.928 } 00:23:59.928 } 00:23:59.928 }' 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:59.928 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:59.929 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:59.929 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:00.187 "name": "pt3", 00:24:00.187 "aliases": [ 00:24:00.187 "00000000-0000-0000-0000-000000000003" 00:24:00.187 ], 00:24:00.187 "product_name": "passthru", 00:24:00.187 "block_size": 512, 00:24:00.187 "num_blocks": 65536, 00:24:00.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:00.187 "assigned_rate_limits": { 00:24:00.187 "rw_ios_per_sec": 0, 00:24:00.187 "rw_mbytes_per_sec": 0, 00:24:00.187 "r_mbytes_per_sec": 0, 00:24:00.187 "w_mbytes_per_sec": 0 00:24:00.187 }, 00:24:00.187 "claimed": true, 00:24:00.187 "claim_type": "exclusive_write", 00:24:00.187 "zoned": false, 00:24:00.187 "supported_io_types": { 
00:24:00.187 "read": true, 00:24:00.187 "write": true, 00:24:00.187 "unmap": true, 00:24:00.187 "flush": true, 00:24:00.187 "reset": true, 00:24:00.187 "nvme_admin": false, 00:24:00.187 "nvme_io": false, 00:24:00.187 "nvme_io_md": false, 00:24:00.187 "write_zeroes": true, 00:24:00.187 "zcopy": true, 00:24:00.187 "get_zone_info": false, 00:24:00.187 "zone_management": false, 00:24:00.187 "zone_append": false, 00:24:00.187 "compare": false, 00:24:00.187 "compare_and_write": false, 00:24:00.187 "abort": true, 00:24:00.187 "seek_hole": false, 00:24:00.187 "seek_data": false, 00:24:00.187 "copy": true, 00:24:00.187 "nvme_iov_md": false 00:24:00.187 }, 00:24:00.187 "memory_domains": [ 00:24:00.187 { 00:24:00.187 "dma_device_id": "system", 00:24:00.187 "dma_device_type": 1 00:24:00.187 }, 00:24:00.187 { 00:24:00.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.187 "dma_device_type": 2 00:24:00.187 } 00:24:00.187 ], 00:24:00.187 "driver_specific": { 00:24:00.187 "passthru": { 00:24:00.187 "name": "pt3", 00:24:00.187 "base_bdev_name": "malloc3" 00:24:00.187 } 00:24:00.187 } 00:24:00.187 }' 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:00.187 00:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:00.446 "name": "pt4", 00:24:00.446 "aliases": [ 00:24:00.446 "00000000-0000-0000-0000-000000000004" 00:24:00.446 ], 00:24:00.446 "product_name": "passthru", 00:24:00.446 "block_size": 512, 00:24:00.446 "num_blocks": 65536, 00:24:00.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:00.446 "assigned_rate_limits": { 00:24:00.446 "rw_ios_per_sec": 0, 00:24:00.446 "rw_mbytes_per_sec": 0, 00:24:00.446 "r_mbytes_per_sec": 0, 00:24:00.446 "w_mbytes_per_sec": 0 00:24:00.446 }, 00:24:00.446 "claimed": true, 00:24:00.446 "claim_type": "exclusive_write", 00:24:00.446 "zoned": false, 00:24:00.446 "supported_io_types": { 00:24:00.446 "read": true, 00:24:00.446 "write": true, 00:24:00.446 "unmap": true, 00:24:00.446 "flush": true, 00:24:00.446 "reset": true, 00:24:00.446 
"nvme_admin": false, 00:24:00.446 "nvme_io": false, 00:24:00.446 "nvme_io_md": false, 00:24:00.446 "write_zeroes": true, 00:24:00.446 "zcopy": true, 00:24:00.446 "get_zone_info": false, 00:24:00.446 "zone_management": false, 00:24:00.446 "zone_append": false, 00:24:00.446 "compare": false, 00:24:00.446 "compare_and_write": false, 00:24:00.446 "abort": true, 00:24:00.446 "seek_hole": false, 00:24:00.446 "seek_data": false, 00:24:00.446 "copy": true, 00:24:00.446 "nvme_iov_md": false 00:24:00.446 }, 00:24:00.446 "memory_domains": [ 00:24:00.446 { 00:24:00.446 "dma_device_id": "system", 00:24:00.446 "dma_device_type": 1 00:24:00.446 }, 00:24:00.446 { 00:24:00.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.446 "dma_device_type": 2 00:24:00.446 } 00:24:00.446 ], 00:24:00.446 "driver_specific": { 00:24:00.446 "passthru": { 00:24:00.446 "name": "pt4", 00:24:00.446 "base_bdev_name": "malloc4" 00:24:00.446 } 00:24:00.446 } 00:24:00.446 }' 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.446 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:00.704 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:00.704 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:24:00.963 [2024-07-25 00:07:56.603468] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:00.963 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 3ae71a3a-a017-4556-8bc2-1aaccf2fadea '!=' 3ae71a3a-a017-4556-8bc2-1aaccf2fadea ']' 00:24:00.963 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:24:00.963 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:00.963 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:00.963 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:01.222 [2024-07-25 00:07:56.867231] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:01.222 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:01.222 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:01.222 
00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:01.222 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:01.222 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:01.222 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:01.222 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:01.222 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:01.222 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:01.222 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:01.222 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.222 00:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.481 00:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:01.481 "name": "raid_bdev1", 00:24:01.481 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:24:01.481 "strip_size_kb": 0, 00:24:01.481 "state": "online", 00:24:01.481 "raid_level": "raid1", 00:24:01.481 "superblock": true, 00:24:01.481 "num_base_bdevs": 4, 00:24:01.481 "num_base_bdevs_discovered": 3, 00:24:01.481 "num_base_bdevs_operational": 3, 00:24:01.481 "base_bdevs_list": [ 00:24:01.481 { 00:24:01.481 "name": null, 00:24:01.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.481 "is_configured": false, 00:24:01.481 "data_offset": 2048, 00:24:01.481 "data_size": 63488 00:24:01.481 }, 00:24:01.481 { 00:24:01.481 "name": "pt2", 00:24:01.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:01.481 "is_configured": true, 00:24:01.481 "data_offset": 2048, 00:24:01.481 "data_size": 63488 00:24:01.481 }, 00:24:01.481 { 00:24:01.481 "name": "pt3", 00:24:01.481 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:01.481 "is_configured": true, 00:24:01.481 "data_offset": 2048, 00:24:01.481 "data_size": 63488 00:24:01.481 }, 00:24:01.481 { 00:24:01.481 "name": "pt4", 00:24:01.481 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:01.481 "is_configured": true, 00:24:01.481 "data_offset": 2048, 00:24:01.481 "data_size": 63488 00:24:01.481 } 00:24:01.481 ] 00:24:01.481 }' 00:24:01.481 00:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:01.481 00:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.739 00:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:01.997 [2024-07-25 00:07:57.699334] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:01.997 [2024-07-25 00:07:57.699571] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:01.997 [2024-07-25 00:07:57.699762] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:01.997 [2024-07-25 00:07:57.699922] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:01.997 [2024-07-25 00:07:57.699942] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:24:01.997 00:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.997 00:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:24:02.255 00:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:24:02.255 00:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:24:02.255 00:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:02.255 00:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:02.255 00:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:02.513 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:02.513 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:02.513 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:02.771 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:02.771 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:02.771 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:03.029 [2024-07-25 00:07:58.843617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:03.029 [2024-07-25 00:07:58.843707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.029 [2024-07-25 00:07:58.843737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:24:03.029 [2024-07-25 00:07:58.843750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.029 [2024-07-25 00:07:58.846144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.029 [2024-07-25 00:07:58.846201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:03.029 [2024-07-25 00:07:58.846321] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:03.029 [2024-07-25 00:07:58.846371] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:03.029 pt2 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
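(The step traced just above exercises the examine path: after the raid bdev is deleted, recreating the pt2 passthru on malloc2 makes bdev_raid find the on-disk superblock and re-claim pt2, so raid_bdev1 reappears in "configuring" state with one of three base bdevs discovered. A sketch of that sequence, reusing the `rpc`/`sock` shorthand from the earlier example and the UUID from this run:)

    # Recreate one base bdev; raid examine claims it from its superblock.
    "$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 \
        -u 00000000-0000-0000-0000-000000000002
    # raid_bdev1 should now report configuring, 1 discovered / 3 operational.
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")
                 | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
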
00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.029 00:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.288 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.288 "name": "raid_bdev1", 00:24:03.288 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:24:03.288 "strip_size_kb": 0, 00:24:03.288 "state": "configuring", 00:24:03.288 "raid_level": "raid1", 00:24:03.288 "superblock": true, 00:24:03.288 "num_base_bdevs": 4, 00:24:03.288 "num_base_bdevs_discovered": 1, 00:24:03.288 "num_base_bdevs_operational": 3, 00:24:03.288 "base_bdevs_list": [ 00:24:03.288 { 00:24:03.288 "name": null, 00:24:03.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.288 "is_configured": false, 00:24:03.288 "data_offset": 2048, 00:24:03.288 "data_size": 63488 00:24:03.288 }, 00:24:03.288 { 00:24:03.288 "name": "pt2", 00:24:03.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:03.288 "is_configured": true, 00:24:03.288 "data_offset": 2048, 00:24:03.288 "data_size": 63488 00:24:03.288 }, 00:24:03.288 { 00:24:03.288 "name": null, 00:24:03.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:03.288 "is_configured": false, 00:24:03.288 "data_offset": 2048, 00:24:03.288 "data_size": 63488 00:24:03.288 }, 00:24:03.288 { 00:24:03.288 "name": null, 00:24:03.288 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:03.288 "is_configured": false, 00:24:03.288 "data_offset": 2048, 00:24:03.288 "data_size": 63488 00:24:03.288 } 00:24:03.288 ] 00:24:03.288 }' 00:24:03.288 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.288 00:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:03.865 [2024-07-25 00:07:59.611840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:03.865 [2024-07-25 00:07:59.612175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.865 [2024-07-25 00:07:59.612331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x51600000c080 00:24:03.865 [2024-07-25 00:07:59.612480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.865 [2024-07-25 00:07:59.613093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.865 [2024-07-25 00:07:59.613136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:03.865 [2024-07-25 00:07:59.613279] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:03.865 [2024-07-25 00:07:59.613305] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:03.865 pt3 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.865 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.138 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.138 "name": "raid_bdev1", 00:24:04.138 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:24:04.138 "strip_size_kb": 0, 00:24:04.138 "state": "configuring", 00:24:04.138 "raid_level": "raid1", 00:24:04.138 "superblock": true, 00:24:04.138 "num_base_bdevs": 4, 00:24:04.138 "num_base_bdevs_discovered": 2, 00:24:04.138 "num_base_bdevs_operational": 3, 00:24:04.138 "base_bdevs_list": [ 00:24:04.138 { 00:24:04.138 "name": null, 00:24:04.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.138 "is_configured": false, 00:24:04.138 "data_offset": 2048, 00:24:04.138 "data_size": 63488 00:24:04.138 }, 00:24:04.138 { 00:24:04.138 "name": "pt2", 00:24:04.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:04.138 "is_configured": true, 00:24:04.138 "data_offset": 2048, 00:24:04.138 "data_size": 63488 00:24:04.138 }, 00:24:04.138 { 00:24:04.138 "name": "pt3", 00:24:04.138 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:04.138 "is_configured": true, 00:24:04.138 "data_offset": 2048, 00:24:04.138 "data_size": 63488 00:24:04.138 }, 00:24:04.138 { 00:24:04.138 "name": null, 00:24:04.138 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:04.138 "is_configured": false, 00:24:04.138 "data_offset": 2048, 00:24:04.138 "data_size": 63488 00:24:04.138 } 00:24:04.138 ] 00:24:04.138 }' 00:24:04.138 00:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:24:04.138 00:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.397 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:24:04.397 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:24:04.397 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:24:04.397 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:04.656 [2024-07-25 00:08:00.416119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:04.656 [2024-07-25 00:08:00.416196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.656 [2024-07-25 00:08:00.416228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:24:04.656 [2024-07-25 00:08:00.416243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.656 [2024-07-25 00:08:00.416723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.656 [2024-07-25 00:08:00.416754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:04.656 [2024-07-25 00:08:00.416915] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:04.656 [2024-07-25 00:08:00.416969] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:04.656 [2024-07-25 00:08:00.417125] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:24:04.656 [2024-07-25 00:08:00.417147] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:04.656 [2024-07-25 00:08:00.417257] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:24:04.656 [2024-07-25 00:08:00.417626] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:24:04.656 [2024-07-25 00:08:00.417653] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:24:04.656 [2024-07-25 00:08:00.417830] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.656 pt4 00:24:04.656 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:04.656 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:04.656 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:04.656 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:04.656 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:04.656 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:04.656 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.656 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.656 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.656 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.656 00:08:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.656 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.916 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.916 "name": "raid_bdev1", 00:24:04.916 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:24:04.916 "strip_size_kb": 0, 00:24:04.916 "state": "online", 00:24:04.916 "raid_level": "raid1", 00:24:04.916 "superblock": true, 00:24:04.916 "num_base_bdevs": 4, 00:24:04.916 "num_base_bdevs_discovered": 3, 00:24:04.916 "num_base_bdevs_operational": 3, 00:24:04.916 "base_bdevs_list": [ 00:24:04.916 { 00:24:04.916 "name": null, 00:24:04.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.916 "is_configured": false, 00:24:04.916 "data_offset": 2048, 00:24:04.916 "data_size": 63488 00:24:04.916 }, 00:24:04.916 { 00:24:04.916 "name": "pt2", 00:24:04.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:04.916 "is_configured": true, 00:24:04.916 "data_offset": 2048, 00:24:04.916 "data_size": 63488 00:24:04.916 }, 00:24:04.916 { 00:24:04.916 "name": "pt3", 00:24:04.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:04.916 "is_configured": true, 00:24:04.916 "data_offset": 2048, 00:24:04.916 "data_size": 63488 00:24:04.916 }, 00:24:04.916 { 00:24:04.916 "name": "pt4", 00:24:04.916 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:04.916 "is_configured": true, 00:24:04.916 "data_offset": 2048, 00:24:04.916 "data_size": 63488 00:24:04.916 } 00:24:04.916 ] 00:24:04.916 }' 00:24:04.916 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:04.916 00:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.175 00:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:05.433 [2024-07-25 00:08:01.192352] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:05.433 [2024-07-25 00:08:01.192403] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:05.433 [2024-07-25 00:08:01.192497] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:05.433 [2024-07-25 00:08:01.192585] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:05.433 [2024-07-25 00:08:01.192605] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:24:05.433 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.433 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:24:05.692 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:24:05.692 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:24:05.692 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:24:05.692 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:24:05.692 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt4 00:24:05.951 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:06.210 [2024-07-25 00:08:01.896448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:06.210 [2024-07-25 00:08:01.896552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:06.210 [2024-07-25 00:08:01.896578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:24:06.210 [2024-07-25 00:08:01.896593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:06.210 [2024-07-25 00:08:01.899022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:06.210 pt1 00:24:06.210 [2024-07-25 00:08:01.899352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:06.210 [2024-07-25 00:08:01.899497] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:06.210 [2024-07-25 00:08:01.899564] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:06.210 [2024-07-25 00:08:01.899734] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:06.210 [2024-07-25 00:08:01.899760] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:06.210 [2024-07-25 00:08:01.899779] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cc80 name raid_bdev1, state configuring 00:24:06.210 [2024-07-25 00:08:01.899896] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:06.210 [2024-07-25 00:08:01.900013] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.210 00:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.469 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:24:06.469 "name": "raid_bdev1", 00:24:06.469 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:24:06.469 "strip_size_kb": 0, 00:24:06.469 "state": "configuring", 00:24:06.469 "raid_level": "raid1", 00:24:06.469 "superblock": true, 00:24:06.469 "num_base_bdevs": 4, 00:24:06.469 "num_base_bdevs_discovered": 2, 00:24:06.469 "num_base_bdevs_operational": 3, 00:24:06.469 "base_bdevs_list": [ 00:24:06.469 { 00:24:06.469 "name": null, 00:24:06.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.469 "is_configured": false, 00:24:06.469 "data_offset": 2048, 00:24:06.469 "data_size": 63488 00:24:06.469 }, 00:24:06.469 { 00:24:06.469 "name": "pt2", 00:24:06.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:06.470 "is_configured": true, 00:24:06.470 "data_offset": 2048, 00:24:06.470 "data_size": 63488 00:24:06.470 }, 00:24:06.470 { 00:24:06.470 "name": "pt3", 00:24:06.470 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:06.470 "is_configured": true, 00:24:06.470 "data_offset": 2048, 00:24:06.470 "data_size": 63488 00:24:06.470 }, 00:24:06.470 { 00:24:06.470 "name": null, 00:24:06.470 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:06.470 "is_configured": false, 00:24:06.470 "data_offset": 2048, 00:24:06.470 "data_size": 63488 00:24:06.470 } 00:24:06.470 ] 00:24:06.470 }' 00:24:06.470 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.470 00:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.729 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:24:06.729 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:06.988 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:24:06.988 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:07.248 [2024-07-25 00:08:02.900756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:07.248 [2024-07-25 00:08:02.900860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.248 [2024-07-25 00:08:02.900893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280 00:24:07.248 [2024-07-25 00:08:02.900906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.248 [2024-07-25 00:08:02.901411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.248 [2024-07-25 00:08:02.901443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:07.248 [2024-07-25 00:08:02.901545] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:07.248 [2024-07-25 00:08:02.901573] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:07.248 [2024-07-25 00:08:02.901723] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80 00:24:07.248 [2024-07-25 00:08:02.901737] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:07.248 [2024-07-25 00:08:02.901850] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005c70 00:24:07.248 [2024-07-25 00:08:02.902196] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80 00:24:07.248 [2024-07-25 00:08:02.902224] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80 00:24:07.248 [2024-07-25 00:08:02.902391] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.248 pt4 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.248 00:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.508 00:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.508 "name": "raid_bdev1", 00:24:07.508 "uuid": "3ae71a3a-a017-4556-8bc2-1aaccf2fadea", 00:24:07.508 "strip_size_kb": 0, 00:24:07.508 "state": "online", 00:24:07.508 "raid_level": "raid1", 00:24:07.508 "superblock": true, 00:24:07.508 "num_base_bdevs": 4, 00:24:07.508 "num_base_bdevs_discovered": 3, 00:24:07.508 "num_base_bdevs_operational": 3, 00:24:07.508 "base_bdevs_list": [ 00:24:07.508 { 00:24:07.508 "name": null, 00:24:07.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.508 "is_configured": false, 00:24:07.508 "data_offset": 2048, 00:24:07.508 "data_size": 63488 00:24:07.508 }, 00:24:07.508 { 00:24:07.508 "name": "pt2", 00:24:07.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:07.508 "is_configured": true, 00:24:07.508 "data_offset": 2048, 00:24:07.508 "data_size": 63488 00:24:07.508 }, 00:24:07.508 { 00:24:07.508 "name": "pt3", 00:24:07.508 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:07.508 "is_configured": true, 00:24:07.508 "data_offset": 2048, 00:24:07.508 "data_size": 63488 00:24:07.508 }, 00:24:07.508 { 00:24:07.508 "name": "pt4", 00:24:07.508 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:07.508 "is_configured": true, 00:24:07.508 "data_offset": 2048, 00:24:07.508 "data_size": 63488 00:24:07.508 } 00:24:07.508 ] 00:24:07.508 }' 00:24:07.508 00:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.508 00:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.765 00:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs online 00:24:07.765 00:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:08.022 00:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:24:08.022 00:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:08.022 00:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:24:08.280 [2024-07-25 00:08:03.969459] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:08.280 00:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 3ae71a3a-a017-4556-8bc2-1aaccf2fadea '!=' 3ae71a3a-a017-4556-8bc2-1aaccf2fadea ']' 00:24:08.280 00:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 95696 00:24:08.280 00:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 95696 ']' 00:24:08.280 00:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 95696 00:24:08.280 00:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:24:08.280 00:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.280 00:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95696 00:24:08.280 killing process with pid 95696 00:24:08.280 00:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:08.280 00:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:08.280 00:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95696' 00:24:08.280 00:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 95696 00:24:08.280 [2024-07-25 00:08:04.022286] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:08.280 00:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 95696 00:24:08.280 [2024-07-25 00:08:04.022383] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:08.280 [2024-07-25 00:08:04.022499] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:08.280 [2024-07-25 00:08:04.022520] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cf80 name raid_bdev1, state offline 00:24:08.538 [2024-07-25 00:08:04.349669] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:09.913 00:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:24:09.913 00:24:09.913 real 0m21.355s 00:24:09.913 user 0m37.295s 00:24:09.913 sys 0m3.339s 00:24:09.913 00:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:09.913 00:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.913 ************************************ 00:24:09.913 END TEST raid_superblock_test 00:24:09.913 ************************************ 00:24:09.913 00:08:05 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:24:09.913 00:08:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:09.913 00:08:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:24:09.913 00:08:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:09.913 ************************************ 00:24:09.913 START TEST raid_read_error_test 00:24:09.913 ************************************ 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.JUBzCDGv74 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=96464 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 96464 /var/tmp/spdk-raid.sock 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 96464 ']' 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:09.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:09.913 00:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.913 [2024-07-25 00:08:05.533637] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:24:09.913 [2024-07-25 00:08:05.533851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96464 ] 00:24:09.913 [2024-07-25 00:08:05.704922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.176 [2024-07-25 00:08:05.873933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.176 [2024-07-25 00:08:06.040444] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:10.742 00:08:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.742 00:08:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:24:10.742 00:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:10.742 00:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:11.001 BaseBdev1_malloc 00:24:11.001 00:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:11.258 true 00:24:11.258 00:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:11.516 [2024-07-25 00:08:07.197872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:11.516 [2024-07-25 00:08:07.197981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.516 [2024-07-25 00:08:07.198014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:24:11.516 [2024-07-25 00:08:07.198031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.516 [2024-07-25 00:08:07.200395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.516 [2024-07-25 00:08:07.200455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:11.516 BaseBdev1 00:24:11.516 00:08:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:11.516 00:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:11.773 BaseBdev2_malloc 00:24:11.773 00:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:12.031 true 00:24:12.031 00:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:12.290 [2024-07-25 00:08:07.932575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:12.290 [2024-07-25 00:08:07.932668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.290 [2024-07-25 00:08:07.932701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:24:12.290 [2024-07-25 00:08:07.932720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.290 [2024-07-25 00:08:07.935308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.290 [2024-07-25 00:08:07.935353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:12.290 BaseBdev2 00:24:12.290 00:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:12.290 00:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:12.548 BaseBdev3_malloc 00:24:12.548 00:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:12.548 true 00:24:12.806 00:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:12.806 [2024-07-25 00:08:08.603146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:12.806 [2024-07-25 00:08:08.603263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.806 [2024-07-25 00:08:08.603291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:24:12.806 [2024-07-25 00:08:08.603306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.806 [2024-07-25 00:08:08.605660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.806 [2024-07-25 00:08:08.605704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:12.806 BaseBdev3 00:24:12.806 00:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:12.806 00:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:13.065 BaseBdev4_malloc 00:24:13.065 00:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create 
BaseBdev4_malloc 00:24:13.323 true 00:24:13.323 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:13.582 [2024-07-25 00:08:09.287812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:13.582 [2024-07-25 00:08:09.287930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.582 [2024-07-25 00:08:09.287963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:24:13.582 [2024-07-25 00:08:09.287980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.582 [2024-07-25 00:08:09.290640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.582 [2024-07-25 00:08:09.290686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:13.582 BaseBdev4 00:24:13.582 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:13.841 [2024-07-25 00:08:09.503941] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.841 [2024-07-25 00:08:09.505980] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:13.841 [2024-07-25 00:08:09.506074] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:13.841 [2024-07-25 00:08:09.506208] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:13.841 [2024-07-25 00:08:09.506511] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a280 00:24:13.841 [2024-07-25 00:08:09.506541] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:13.841 [2024-07-25 00:08:09.506717] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:24:13.841 [2024-07-25 00:08:09.507217] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a280 00:24:13.841 [2024-07-25 00:08:09.507260] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a280 00:24:13.841 [2024-07-25 00:08:09.507458] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.841 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:13.841 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:13.841 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:13.841 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:13.841 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:13.841 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:13.841 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:13.841 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:13.841 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:13.841 
00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:13.841 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.841 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.100 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.100 "name": "raid_bdev1", 00:24:14.100 "uuid": "b5567afe-f978-457e-a6bf-4ed36d6feafa", 00:24:14.100 "strip_size_kb": 0, 00:24:14.100 "state": "online", 00:24:14.100 "raid_level": "raid1", 00:24:14.100 "superblock": true, 00:24:14.100 "num_base_bdevs": 4, 00:24:14.100 "num_base_bdevs_discovered": 4, 00:24:14.100 "num_base_bdevs_operational": 4, 00:24:14.101 "base_bdevs_list": [ 00:24:14.101 { 00:24:14.101 "name": "BaseBdev1", 00:24:14.101 "uuid": "5f209264-c717-5eb4-8335-b26c2d9f1758", 00:24:14.101 "is_configured": true, 00:24:14.101 "data_offset": 2048, 00:24:14.101 "data_size": 63488 00:24:14.101 }, 00:24:14.101 { 00:24:14.101 "name": "BaseBdev2", 00:24:14.101 "uuid": "53ade181-9ca3-5bc0-9ae6-55fe06f5970b", 00:24:14.101 "is_configured": true, 00:24:14.101 "data_offset": 2048, 00:24:14.101 "data_size": 63488 00:24:14.101 }, 00:24:14.101 { 00:24:14.101 "name": "BaseBdev3", 00:24:14.101 "uuid": "4980ec14-670c-5996-98ac-9f610abcfc9d", 00:24:14.101 "is_configured": true, 00:24:14.101 "data_offset": 2048, 00:24:14.101 "data_size": 63488 00:24:14.101 }, 00:24:14.101 { 00:24:14.101 "name": "BaseBdev4", 00:24:14.101 "uuid": "ae2e04f3-1674-5a2e-aab2-165f056bd7e1", 00:24:14.101 "is_configured": true, 00:24:14.101 "data_offset": 2048, 00:24:14.101 "data_size": 63488 00:24:14.101 } 00:24:14.101 ] 00:24:14.101 }' 00:24:14.101 00:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.101 00:08:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.360 00:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:24:14.360 00:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:14.360 [2024-07-25 00:08:10.189314] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:24:15.297 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:15.555 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:24:15.555 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:24:15.555 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:24:15.555 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:24:15.555 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:15.555 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:15.555 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:15.555 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:15.555 00:08:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:15.556 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:15.556 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:15.556 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:15.556 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:15.556 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:15.556 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.556 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.815 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:15.815 "name": "raid_bdev1", 00:24:15.815 "uuid": "b5567afe-f978-457e-a6bf-4ed36d6feafa", 00:24:15.815 "strip_size_kb": 0, 00:24:15.815 "state": "online", 00:24:15.815 "raid_level": "raid1", 00:24:15.815 "superblock": true, 00:24:15.815 "num_base_bdevs": 4, 00:24:15.815 "num_base_bdevs_discovered": 4, 00:24:15.815 "num_base_bdevs_operational": 4, 00:24:15.815 "base_bdevs_list": [ 00:24:15.815 { 00:24:15.815 "name": "BaseBdev1", 00:24:15.815 "uuid": "5f209264-c717-5eb4-8335-b26c2d9f1758", 00:24:15.815 "is_configured": true, 00:24:15.815 "data_offset": 2048, 00:24:15.815 "data_size": 63488 00:24:15.815 }, 00:24:15.815 { 00:24:15.815 "name": "BaseBdev2", 00:24:15.815 "uuid": "53ade181-9ca3-5bc0-9ae6-55fe06f5970b", 00:24:15.815 "is_configured": true, 00:24:15.815 "data_offset": 2048, 00:24:15.815 "data_size": 63488 00:24:15.815 }, 00:24:15.815 { 00:24:15.815 "name": "BaseBdev3", 00:24:15.815 "uuid": "4980ec14-670c-5996-98ac-9f610abcfc9d", 00:24:15.815 "is_configured": true, 00:24:15.815 "data_offset": 2048, 00:24:15.815 "data_size": 63488 00:24:15.815 }, 00:24:15.815 { 00:24:15.815 "name": "BaseBdev4", 00:24:15.815 "uuid": "ae2e04f3-1674-5a2e-aab2-165f056bd7e1", 00:24:15.815 "is_configured": true, 00:24:15.815 "data_offset": 2048, 00:24:15.815 "data_size": 63488 00:24:15.815 } 00:24:15.815 ] 00:24:15.815 }' 00:24:15.815 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:15.815 00:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.383 00:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:16.383 [2024-07-25 00:08:12.209637] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:16.383 [2024-07-25 00:08:12.209699] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:16.383 [2024-07-25 00:08:12.212688] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:16.383 [2024-07-25 00:08:12.212770] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.383 [2024-07-25 00:08:12.212934] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:16.383 [2024-07-25 00:08:12.212958] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state offline 00:24:16.383 0 00:24:16.383 00:08:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 96464 00:24:16.383 00:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 96464 ']' 00:24:16.383 00:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 96464 00:24:16.383 00:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:24:16.383 00:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.383 00:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96464 00:24:16.642 killing process with pid 96464 00:24:16.642 00:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:16.642 00:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:16.642 00:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96464' 00:24:16.642 00:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 96464 00:24:16.642 00:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 96464 00:24:16.642 [2024-07-25 00:08:12.273486] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:16.902 [2024-07-25 00:08:12.513293] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:17.838 00:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.JUBzCDGv74 00:24:17.838 00:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:24:17.838 00:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:24:17.838 ************************************ 00:24:17.838 END TEST raid_read_error_test 00:24:17.838 ************************************ 00:24:17.838 00:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:24:17.838 00:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:24:17.838 00:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:17.838 00:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:17.838 00:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:24:17.838 00:24:17.838 real 0m8.182s 00:24:17.838 user 0m12.283s 00:24:17.838 sys 0m1.057s 00:24:17.838 00:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:17.838 00:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.838 00:08:13 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:24:17.838 00:08:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:17.838 00:08:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:17.838 00:08:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:17.838 ************************************ 00:24:17.838 START TEST raid_write_error_test 00:24:17.838 ************************************ 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local 
num_base_bdevs=4 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.42pDHIvDvf 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=96653 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 96653 /var/tmp/spdk-raid.sock 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 96653 ']' 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:17.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.838 00:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.096 [2024-07-25 00:08:13.742475] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:24:18.096 [2024-07-25 00:08:13.742672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96653 ] 00:24:18.096 [2024-07-25 00:08:13.911929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.354 [2024-07-25 00:08:14.085034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.614 [2024-07-25 00:08:14.252744] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:18.873 00:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.873 00:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:24:18.873 00:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:18.873 00:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:19.132 BaseBdev1_malloc 00:24:19.132 00:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:19.391 true 00:24:19.391 00:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:19.649 [2024-07-25 00:08:15.283722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:19.649 [2024-07-25 00:08:15.283825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.649 [2024-07-25 00:08:15.283874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:24:19.649 [2024-07-25 00:08:15.283891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.649 [2024-07-25 00:08:15.286235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.650 [2024-07-25 00:08:15.286291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:19.650 BaseBdev1 00:24:19.650 00:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:19.650 00:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:19.908 BaseBdev2_malloc 00:24:19.908 00:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 
00:24:20.167 true 00:24:20.167 00:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:20.425 [2024-07-25 00:08:16.050436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:20.425 [2024-07-25 00:08:16.050525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.425 [2024-07-25 00:08:16.050554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:24:20.425 [2024-07-25 00:08:16.050572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.425 [2024-07-25 00:08:16.053072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.425 [2024-07-25 00:08:16.053117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:20.425 BaseBdev2 00:24:20.425 00:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:20.425 00:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:20.425 BaseBdev3_malloc 00:24:20.684 00:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:20.684 true 00:24:20.684 00:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:20.942 [2024-07-25 00:08:16.706128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:20.942 [2024-07-25 00:08:16.706253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.942 [2024-07-25 00:08:16.706296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:24:20.942 [2024-07-25 00:08:16.706312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.942 [2024-07-25 00:08:16.708922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.942 [2024-07-25 00:08:16.708968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:20.942 BaseBdev3 00:24:20.942 00:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:20.942 00:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:21.199 BaseBdev4_malloc 00:24:21.199 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:21.456 true 00:24:21.456 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:21.715 [2024-07-25 00:08:17.496901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:21.715 [2024-07-25 00:08:17.496990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.715 
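Each base bdev in this write-error test is built the same three-layer way as in the read test above: a malloc backing bdev, an error bdev wrapped around it, and a passthru bdev on top for the raid to claim. bdev_error_create names its device EE_<base>, which is why the passthru binds to EE_BaseBdev4_malloc here and why the later injection targets EE_BaseBdev1_malloc. A minimal sketch of one such stack, reusing the exact RPCs from this log and assuming the bdevperf target is already listening on /var/tmp/spdk-raid.sock:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4

The error layer sits below the passthru that the raid consumes, so an injected write failure surfaces to raid_bdev1 as an ordinary base-bdev I/O error.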
[2024-07-25 00:08:17.497020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:24:21.715 [2024-07-25 00:08:17.497036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.715 [2024-07-25 00:08:17.499601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.715 [2024-07-25 00:08:17.499658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:21.715 BaseBdev4 00:24:21.715 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:21.973 [2024-07-25 00:08:17.717121] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.973 [2024-07-25 00:08:17.719334] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:21.974 [2024-07-25 00:08:17.719461] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:21.974 [2024-07-25 00:08:17.719549] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:21.974 [2024-07-25 00:08:17.719920] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a280 00:24:21.974 [2024-07-25 00:08:17.719943] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:21.974 [2024-07-25 00:08:17.720111] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:24:21.974 [2024-07-25 00:08:17.720522] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a280 00:24:21.974 [2024-07-25 00:08:17.720539] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a280 00:24:21.974 [2024-07-25 00:08:17.720712] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.974 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.232 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:24:22.232 "name": "raid_bdev1", 00:24:22.232 "uuid": "2ac05c1f-1106-4f08-ad51-c07e1ed1bbfd", 00:24:22.232 "strip_size_kb": 0, 00:24:22.232 "state": "online", 00:24:22.232 "raid_level": "raid1", 00:24:22.232 "superblock": true, 00:24:22.232 "num_base_bdevs": 4, 00:24:22.232 "num_base_bdevs_discovered": 4, 00:24:22.232 "num_base_bdevs_operational": 4, 00:24:22.232 "base_bdevs_list": [ 00:24:22.232 { 00:24:22.232 "name": "BaseBdev1", 00:24:22.232 "uuid": "d83e3103-19fa-5ce6-92ad-8afb36d8a1df", 00:24:22.232 "is_configured": true, 00:24:22.232 "data_offset": 2048, 00:24:22.232 "data_size": 63488 00:24:22.232 }, 00:24:22.232 { 00:24:22.232 "name": "BaseBdev2", 00:24:22.232 "uuid": "a4c27206-d690-5fb7-aa96-2470ce27a137", 00:24:22.232 "is_configured": true, 00:24:22.232 "data_offset": 2048, 00:24:22.232 "data_size": 63488 00:24:22.232 }, 00:24:22.232 { 00:24:22.232 "name": "BaseBdev3", 00:24:22.232 "uuid": "76cd49d2-83ba-5583-8b24-2b5f86b492e8", 00:24:22.232 "is_configured": true, 00:24:22.232 "data_offset": 2048, 00:24:22.232 "data_size": 63488 00:24:22.232 }, 00:24:22.232 { 00:24:22.232 "name": "BaseBdev4", 00:24:22.232 "uuid": "66e2a52b-9e0a-55e7-b51e-f6cee7b7c096", 00:24:22.232 "is_configured": true, 00:24:22.232 "data_offset": 2048, 00:24:22.232 "data_size": 63488 00:24:22.232 } 00:24:22.232 ] 00:24:22.232 }' 00:24:22.232 00:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.232 00:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.491 00:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:24:22.491 00:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:22.491 [2024-07-25 00:08:18.358543] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:24:23.427 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:23.685 [2024-07-25 00:08:19.488193] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:24:23.686 [2024-07-25 00:08:19.488276] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:23.686 [2024-07-25 00:08:19.488551] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005ba0 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=3 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.686 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.943 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:23.943 "name": "raid_bdev1", 00:24:23.943 "uuid": "2ac05c1f-1106-4f08-ad51-c07e1ed1bbfd", 00:24:23.943 "strip_size_kb": 0, 00:24:23.943 "state": "online", 00:24:23.943 "raid_level": "raid1", 00:24:23.943 "superblock": true, 00:24:23.943 "num_base_bdevs": 4, 00:24:23.944 "num_base_bdevs_discovered": 3, 00:24:23.944 "num_base_bdevs_operational": 3, 00:24:23.944 "base_bdevs_list": [ 00:24:23.944 { 00:24:23.944 "name": null, 00:24:23.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.944 "is_configured": false, 00:24:23.944 "data_offset": 2048, 00:24:23.944 "data_size": 63488 00:24:23.944 }, 00:24:23.944 { 00:24:23.944 "name": "BaseBdev2", 00:24:23.944 "uuid": "a4c27206-d690-5fb7-aa96-2470ce27a137", 00:24:23.944 "is_configured": true, 00:24:23.944 "data_offset": 2048, 00:24:23.944 "data_size": 63488 00:24:23.944 }, 00:24:23.944 { 00:24:23.944 "name": "BaseBdev3", 00:24:23.944 "uuid": "76cd49d2-83ba-5583-8b24-2b5f86b492e8", 00:24:23.944 "is_configured": true, 00:24:23.944 "data_offset": 2048, 00:24:23.944 "data_size": 63488 00:24:23.944 }, 00:24:23.944 { 00:24:23.944 "name": "BaseBdev4", 00:24:23.944 "uuid": "66e2a52b-9e0a-55e7-b51e-f6cee7b7c096", 00:24:23.944 "is_configured": true, 00:24:23.944 "data_offset": 2048, 00:24:23.944 "data_size": 63488 00:24:23.944 } 00:24:23.944 ] 00:24:23.944 }' 00:24:23.944 00:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:23.944 00:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.202 00:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:24.461 [2024-07-25 00:08:20.249514] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:24.461 [2024-07-25 00:08:20.249557] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:24.461 [2024-07-25 00:08:20.252716] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:24.461 [2024-07-25 00:08:20.252768] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.461 [2024-07-25 00:08:20.252912] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:24.461 [2024-07-25 00:08:20.252929] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state offline 00:24:24.461 0 00:24:24.461 00:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- 
# killprocess 96653 00:24:24.461 00:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 96653 ']' 00:24:24.461 00:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 96653 00:24:24.461 00:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:24:24.461 00:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:24.461 00:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96653 00:24:24.461 00:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:24.461 killing process with pid 96653 00:24:24.461 00:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:24.461 00:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96653' 00:24:24.461 00:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 96653 00:24:24.461 [2024-07-25 00:08:20.306353] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:24.461 00:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 96653 00:24:24.719 [2024-07-25 00:08:20.548658] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:26.150 00:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:24:26.150 00:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.42pDHIvDvf 00:24:26.150 00:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:24:26.150 00:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:24:26.150 00:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:24:26.150 00:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:26.150 00:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:26.150 00:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:24:26.150 00:24:26.150 real 0m7.973s 00:24:26.150 user 0m11.943s 00:24:26.150 sys 0m1.021s 00:24:26.150 00:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:26.150 00:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.150 ************************************ 00:24:26.150 END TEST raid_write_error_test 00:24:26.150 ************************************ 00:24:26.150 00:08:21 bdev_raid -- bdev/bdev_raid.sh@955 -- # '[' true = true ']' 00:24:26.151 00:08:21 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:24:26.151 00:08:21 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:24:26.151 00:08:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:26.151 00:08:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:26.151 00:08:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:26.151 ************************************ 00:24:26.151 START TEST raid_rebuild_test 00:24:26.151 ************************************ 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local 
raid_level=raid1 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:24:26.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=96840 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 96840 /var/tmp/spdk-raid.sock 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 96840 ']' 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
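The rebuild test drives far larger I/O than the error tests: bdevperf runs randrw with -o 3M -q 2 against raid_bdev1, and because 3145728 bytes exceeds the 65536-byte zero-copy threshold, bdevperf prints the "Zero copy mechanism will not be used" notice below. The launch-and-wait pattern is the one traced here; roughly, with the backgrounding and PID capture implied by the xtrace rather than shown in it:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
            -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock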
00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.151 00:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.151 [2024-07-25 00:08:21.765203] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:24:26.151 [2024-07-25 00:08:21.765761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96840 ] 00:24:26.151 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:26.151 Zero copy mechanism will not be used. 00:24:26.151 [2024-07-25 00:08:21.931719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.410 [2024-07-25 00:08:22.107835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.410 [2024-07-25 00:08:22.271896] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:26.977 00:08:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.977 00:08:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:24:26.977 00:08:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:26.977 00:08:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:27.235 BaseBdev1_malloc 00:24:27.235 00:08:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:27.235 [2024-07-25 00:08:23.101259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:27.235 [2024-07-25 00:08:23.101379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.235 [2024-07-25 00:08:23.101423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:24:27.235 [2024-07-25 00:08:23.101441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.235 [2024-07-25 00:08:23.104228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.235 [2024-07-25 00:08:23.104294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:27.494 BaseBdev1 00:24:27.494 00:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:27.494 00:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:27.752 BaseBdev2_malloc 00:24:27.752 00:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:28.010 [2024-07-25 00:08:23.628468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:28.010 [2024-07-25 00:08:23.628588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:28.010 [2024-07-25 00:08:23.628620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:24:28.010 [2024-07-25 00:08:23.628639] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:28.010 [2024-07-25 00:08:23.631105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:28.010 [2024-07-25 00:08:23.631401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:28.010 BaseBdev2 00:24:28.010 00:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:28.268 spare_malloc 00:24:28.268 00:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:28.527 spare_delay 00:24:28.527 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:28.786 [2024-07-25 00:08:24.396613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:28.786 [2024-07-25 00:08:24.396695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:28.786 [2024-07-25 00:08:24.396742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:24:28.786 [2024-07-25 00:08:24.396773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:28.786 [2024-07-25 00:08:24.399711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:28.786 spare 00:24:28.786 [2024-07-25 00:08:24.399933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:28.786 [2024-07-25 00:08:24.608904] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:28.786 [2024-07-25 00:08:24.611030] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:28.786 [2024-07-25 00:08:24.611358] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:24:28.786 [2024-07-25 00:08:24.611385] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:28.786 [2024-07-25 00:08:24.611575] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:24:28.786 [2024-07-25 00:08:24.612072] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:24:28.786 [2024-07-25 00:08:24.612107] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:24:28.786 [2024-07-25 00:08:24.612308] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
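The "spare" that will later rejoin the raid is itself a three-layer stack, with a delay bdev in the middle instead of an error bdev: spare_malloc -> spare_delay -> spare. The exact RPCs are in the xtrace above; gathered in one place:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

If bdev_delay_create's arguments carry their usual meaning (-r/-t average and p99 read latency, -w/-n the same for writes, in microseconds), reads pass through the spare immediately while every write is held for about 100 ms, keeping the eventual rebuild slow enough to observe and interrupt.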
00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.786 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.047 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:29.047 "name": "raid_bdev1", 00:24:29.047 "uuid": "e6ee6d8f-ee04-4e16-8245-5823e86662fa", 00:24:29.047 "strip_size_kb": 0, 00:24:29.047 "state": "online", 00:24:29.047 "raid_level": "raid1", 00:24:29.047 "superblock": false, 00:24:29.047 "num_base_bdevs": 2, 00:24:29.047 "num_base_bdevs_discovered": 2, 00:24:29.047 "num_base_bdevs_operational": 2, 00:24:29.047 "base_bdevs_list": [ 00:24:29.047 { 00:24:29.047 "name": "BaseBdev1", 00:24:29.047 "uuid": "b6f15e8a-4540-5409-8b47-ca05d3e2047b", 00:24:29.047 "is_configured": true, 00:24:29.047 "data_offset": 0, 00:24:29.047 "data_size": 65536 00:24:29.047 }, 00:24:29.047 { 00:24:29.047 "name": "BaseBdev2", 00:24:29.047 "uuid": "5bc615ea-3f54-55c9-a26d-93d7098f7909", 00:24:29.047 "is_configured": true, 00:24:29.047 "data_offset": 0, 00:24:29.047 "data_size": 65536 00:24:29.047 } 00:24:29.047 ] 00:24:29.047 }' 00:24:29.047 00:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:29.047 00:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.615 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:29.615 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:24:29.615 [2024-07-25 00:08:25.397349] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:29.615 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:24:29.615 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.615 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:29.875 
00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:29.875 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:30.135 [2024-07-25 00:08:25.889269] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:24:30.135 /dev/nbd0 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:30.135 1+0 records in 00:24:30.135 1+0 records out 00:24:30.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474367 s, 8.6 MB/s 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:24:30.135 00:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:24:36.703 65536+0 records in 00:24:36.703 65536+0 records out 00:24:36.703 33554432 bytes (34 MB, 32 MiB) copied, 6.1572 s, 5.4 MB/s 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # 
nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:36.703 [2024-07-25 00:08:32.361046] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:36.703 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:36.961 [2024-07-25 00:08:32.629675] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.961 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.220 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.220 "name": "raid_bdev1", 00:24:37.220 "uuid": "e6ee6d8f-ee04-4e16-8245-5823e86662fa", 00:24:37.220 "strip_size_kb": 0, 00:24:37.220 "state": "online", 
00:24:37.220 "raid_level": "raid1", 00:24:37.220 "superblock": false, 00:24:37.220 "num_base_bdevs": 2, 00:24:37.220 "num_base_bdevs_discovered": 1, 00:24:37.220 "num_base_bdevs_operational": 1, 00:24:37.220 "base_bdevs_list": [ 00:24:37.220 { 00:24:37.220 "name": null, 00:24:37.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.220 "is_configured": false, 00:24:37.220 "data_offset": 0, 00:24:37.220 "data_size": 65536 00:24:37.220 }, 00:24:37.220 { 00:24:37.220 "name": "BaseBdev2", 00:24:37.220 "uuid": "5bc615ea-3f54-55c9-a26d-93d7098f7909", 00:24:37.220 "is_configured": true, 00:24:37.220 "data_offset": 0, 00:24:37.220 "data_size": 65536 00:24:37.220 } 00:24:37.220 ] 00:24:37.220 }' 00:24:37.220 00:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.220 00:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.478 00:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:37.736 [2024-07-25 00:08:33.373911] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:37.736 [2024-07-25 00:08:33.387918] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d096f0 00:24:37.736 [2024-07-25 00:08:33.390092] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:37.736 00:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:38.671 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:38.671 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:38.671 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:38.671 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:38.671 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:38.671 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.671 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.930 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:38.930 "name": "raid_bdev1", 00:24:38.930 "uuid": "e6ee6d8f-ee04-4e16-8245-5823e86662fa", 00:24:38.930 "strip_size_kb": 0, 00:24:38.930 "state": "online", 00:24:38.930 "raid_level": "raid1", 00:24:38.930 "superblock": false, 00:24:38.930 "num_base_bdevs": 2, 00:24:38.930 "num_base_bdevs_discovered": 2, 00:24:38.930 "num_base_bdevs_operational": 2, 00:24:38.930 "process": { 00:24:38.930 "type": "rebuild", 00:24:38.930 "target": "spare", 00:24:38.930 "progress": { 00:24:38.930 "blocks": 24576, 00:24:38.930 "percent": 37 00:24:38.930 } 00:24:38.930 }, 00:24:38.930 "base_bdevs_list": [ 00:24:38.930 { 00:24:38.930 "name": "spare", 00:24:38.930 "uuid": "9abe1476-83e2-599b-bc20-355182ceeb82", 00:24:38.930 "is_configured": true, 00:24:38.930 "data_offset": 0, 00:24:38.930 "data_size": 65536 00:24:38.930 }, 00:24:38.930 { 00:24:38.930 "name": "BaseBdev2", 00:24:38.930 "uuid": "5bc615ea-3f54-55c9-a26d-93d7098f7909", 00:24:38.930 "is_configured": true, 00:24:38.930 "data_offset": 0, 00:24:38.930 "data_size": 65536 00:24:38.930 } 
00:24:38.930 ] 00:24:38.930 }' 00:24:38.930 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:38.930 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:38.930 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:38.930 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:38.930 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:39.189 [2024-07-25 00:08:34.892144] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:39.189 [2024-07-25 00:08:34.897398] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:39.189 [2024-07-25 00:08:34.897520] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:39.189 [2024-07-25 00:08:34.897543] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:39.189 [2024-07-25 00:08:34.897557] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.189 00:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.448 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.448 "name": "raid_bdev1", 00:24:39.448 "uuid": "e6ee6d8f-ee04-4e16-8245-5823e86662fa", 00:24:39.448 "strip_size_kb": 0, 00:24:39.448 "state": "online", 00:24:39.448 "raid_level": "raid1", 00:24:39.448 "superblock": false, 00:24:39.448 "num_base_bdevs": 2, 00:24:39.448 "num_base_bdevs_discovered": 1, 00:24:39.448 "num_base_bdevs_operational": 1, 00:24:39.448 "base_bdevs_list": [ 00:24:39.448 { 00:24:39.448 "name": null, 00:24:39.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.448 "is_configured": false, 00:24:39.448 "data_offset": 0, 00:24:39.448 "data_size": 65536 00:24:39.448 }, 00:24:39.448 { 00:24:39.448 "name": "BaseBdev2", 00:24:39.448 "uuid": "5bc615ea-3f54-55c9-a26d-93d7098f7909", 00:24:39.448 "is_configured": true, 00:24:39.448 "data_offset": 0, 
00:24:39.448 "data_size": 65536 00:24:39.448 } 00:24:39.448 ] 00:24:39.448 }' 00:24:39.448 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.448 00:08:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.706 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:39.706 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:39.706 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:39.706 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:39.706 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:39.706 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.706 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.964 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:39.964 "name": "raid_bdev1", 00:24:39.964 "uuid": "e6ee6d8f-ee04-4e16-8245-5823e86662fa", 00:24:39.964 "strip_size_kb": 0, 00:24:39.964 "state": "online", 00:24:39.964 "raid_level": "raid1", 00:24:39.964 "superblock": false, 00:24:39.964 "num_base_bdevs": 2, 00:24:39.964 "num_base_bdevs_discovered": 1, 00:24:39.964 "num_base_bdevs_operational": 1, 00:24:39.964 "base_bdevs_list": [ 00:24:39.964 { 00:24:39.964 "name": null, 00:24:39.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.964 "is_configured": false, 00:24:39.964 "data_offset": 0, 00:24:39.964 "data_size": 65536 00:24:39.964 }, 00:24:39.964 { 00:24:39.964 "name": "BaseBdev2", 00:24:39.965 "uuid": "5bc615ea-3f54-55c9-a26d-93d7098f7909", 00:24:39.965 "is_configured": true, 00:24:39.965 "data_offset": 0, 00:24:39.965 "data_size": 65536 00:24:39.965 } 00:24:39.965 ] 00:24:39.965 }' 00:24:39.965 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:39.965 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:39.965 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:39.965 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:39.965 00:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:40.223 [2024-07-25 00:08:35.970958] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:40.223 [2024-07-25 00:08:35.983868] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d097c0 00:24:40.223 [2024-07-25 00:08:35.986009] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:40.223 00:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:24:41.157 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.157 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:41.157 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:41.157 00:08:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:41.157 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:41.157 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.157 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.415 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:41.415 "name": "raid_bdev1", 00:24:41.415 "uuid": "e6ee6d8f-ee04-4e16-8245-5823e86662fa", 00:24:41.415 "strip_size_kb": 0, 00:24:41.415 "state": "online", 00:24:41.415 "raid_level": "raid1", 00:24:41.415 "superblock": false, 00:24:41.415 "num_base_bdevs": 2, 00:24:41.415 "num_base_bdevs_discovered": 2, 00:24:41.415 "num_base_bdevs_operational": 2, 00:24:41.415 "process": { 00:24:41.415 "type": "rebuild", 00:24:41.415 "target": "spare", 00:24:41.415 "progress": { 00:24:41.415 "blocks": 24576, 00:24:41.415 "percent": 37 00:24:41.415 } 00:24:41.415 }, 00:24:41.415 "base_bdevs_list": [ 00:24:41.415 { 00:24:41.415 "name": "spare", 00:24:41.415 "uuid": "9abe1476-83e2-599b-bc20-355182ceeb82", 00:24:41.415 "is_configured": true, 00:24:41.415 "data_offset": 0, 00:24:41.415 "data_size": 65536 00:24:41.415 }, 00:24:41.415 { 00:24:41.415 "name": "BaseBdev2", 00:24:41.415 "uuid": "5bc615ea-3f54-55c9-a26d-93d7098f7909", 00:24:41.415 "is_configured": true, 00:24:41.415 "data_offset": 0, 00:24:41.415 "data_size": 65536 00:24:41.415 } 00:24:41.415 ] 00:24:41.415 }' 00:24:41.415 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=707 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.673 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:24:41.931 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:41.931 "name": "raid_bdev1", 00:24:41.931 "uuid": "e6ee6d8f-ee04-4e16-8245-5823e86662fa", 00:24:41.931 "strip_size_kb": 0, 00:24:41.931 "state": "online", 00:24:41.931 "raid_level": "raid1", 00:24:41.931 "superblock": false, 00:24:41.931 "num_base_bdevs": 2, 00:24:41.931 "num_base_bdevs_discovered": 2, 00:24:41.931 "num_base_bdevs_operational": 2, 00:24:41.931 "process": { 00:24:41.931 "type": "rebuild", 00:24:41.931 "target": "spare", 00:24:41.931 "progress": { 00:24:41.931 "blocks": 30720, 00:24:41.931 "percent": 46 00:24:41.931 } 00:24:41.931 }, 00:24:41.931 "base_bdevs_list": [ 00:24:41.931 { 00:24:41.931 "name": "spare", 00:24:41.931 "uuid": "9abe1476-83e2-599b-bc20-355182ceeb82", 00:24:41.931 "is_configured": true, 00:24:41.931 "data_offset": 0, 00:24:41.931 "data_size": 65536 00:24:41.931 }, 00:24:41.931 { 00:24:41.931 "name": "BaseBdev2", 00:24:41.931 "uuid": "5bc615ea-3f54-55c9-a26d-93d7098f7909", 00:24:41.931 "is_configured": true, 00:24:41.931 "data_offset": 0, 00:24:41.931 "data_size": 65536 00:24:41.931 } 00:24:41.931 ] 00:24:41.932 }' 00:24:41.932 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:41.932 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:41.932 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:41.932 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.932 00:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:42.891 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:42.891 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:42.891 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:42.891 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:42.891 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:42.891 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:42.891 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.891 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.149 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:43.149 "name": "raid_bdev1", 00:24:43.149 "uuid": "e6ee6d8f-ee04-4e16-8245-5823e86662fa", 00:24:43.149 "strip_size_kb": 0, 00:24:43.149 "state": "online", 00:24:43.149 "raid_level": "raid1", 00:24:43.149 "superblock": false, 00:24:43.149 "num_base_bdevs": 2, 00:24:43.149 "num_base_bdevs_discovered": 2, 00:24:43.149 "num_base_bdevs_operational": 2, 00:24:43.149 "process": { 00:24:43.149 "type": "rebuild", 00:24:43.149 "target": "spare", 00:24:43.149 "progress": { 00:24:43.149 "blocks": 57344, 00:24:43.149 "percent": 87 00:24:43.149 } 00:24:43.149 }, 00:24:43.149 "base_bdevs_list": [ 00:24:43.149 { 00:24:43.149 "name": "spare", 00:24:43.149 "uuid": "9abe1476-83e2-599b-bc20-355182ceeb82", 00:24:43.149 "is_configured": true, 00:24:43.149 "data_offset": 0, 00:24:43.149 "data_size": 65536 
00:24:43.149 }, 00:24:43.149 { 00:24:43.149 "name": "BaseBdev2", 00:24:43.149 "uuid": "5bc615ea-3f54-55c9-a26d-93d7098f7909", 00:24:43.149 "is_configured": true, 00:24:43.149 "data_offset": 0, 00:24:43.149 "data_size": 65536 00:24:43.149 } 00:24:43.149 ] 00:24:43.149 }' 00:24:43.149 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:43.149 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.149 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:43.149 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.149 00:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:43.407 [2024-07-25 00:08:39.202027] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:43.407 [2024-07-25 00:08:39.202116] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:43.407 [2024-07-25 00:08:39.202193] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.341 00:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:44.341 00:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.341 00:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:44.341 00:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:44.341 00:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:44.341 00:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:44.341 00:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.342 00:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.342 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:44.342 "name": "raid_bdev1", 00:24:44.342 "uuid": "e6ee6d8f-ee04-4e16-8245-5823e86662fa", 00:24:44.342 "strip_size_kb": 0, 00:24:44.342 "state": "online", 00:24:44.342 "raid_level": "raid1", 00:24:44.342 "superblock": false, 00:24:44.342 "num_base_bdevs": 2, 00:24:44.342 "num_base_bdevs_discovered": 2, 00:24:44.342 "num_base_bdevs_operational": 2, 00:24:44.342 "base_bdevs_list": [ 00:24:44.342 { 00:24:44.342 "name": "spare", 00:24:44.342 "uuid": "9abe1476-83e2-599b-bc20-355182ceeb82", 00:24:44.342 "is_configured": true, 00:24:44.342 "data_offset": 0, 00:24:44.342 "data_size": 65536 00:24:44.342 }, 00:24:44.342 { 00:24:44.342 "name": "BaseBdev2", 00:24:44.342 "uuid": "5bc615ea-3f54-55c9-a26d-93d7098f7909", 00:24:44.342 "is_configured": true, 00:24:44.342 "data_offset": 0, 00:24:44.342 "data_size": 65536 00:24:44.342 } 00:24:44.342 ] 00:24:44.342 }' 00:24:44.342 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:44.342 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:44.342 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:44.600 
00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:44.600 "name": "raid_bdev1", 00:24:44.600 "uuid": "e6ee6d8f-ee04-4e16-8245-5823e86662fa", 00:24:44.600 "strip_size_kb": 0, 00:24:44.600 "state": "online", 00:24:44.600 "raid_level": "raid1", 00:24:44.600 "superblock": false, 00:24:44.600 "num_base_bdevs": 2, 00:24:44.600 "num_base_bdevs_discovered": 2, 00:24:44.600 "num_base_bdevs_operational": 2, 00:24:44.600 "base_bdevs_list": [ 00:24:44.600 { 00:24:44.600 "name": "spare", 00:24:44.600 "uuid": "9abe1476-83e2-599b-bc20-355182ceeb82", 00:24:44.600 "is_configured": true, 00:24:44.600 "data_offset": 0, 00:24:44.600 "data_size": 65536 00:24:44.600 }, 00:24:44.600 { 00:24:44.600 "name": "BaseBdev2", 00:24:44.600 "uuid": "5bc615ea-3f54-55c9-a26d-93d7098f7909", 00:24:44.600 "is_configured": true, 00:24:44.600 "data_offset": 0, 00:24:44.600 "data_size": 65536 00:24:44.600 } 00:24:44.600 ] 00:24:44.600 }' 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:24:44.600 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.859 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:44.859 "name": "raid_bdev1", 00:24:44.859 "uuid": "e6ee6d8f-ee04-4e16-8245-5823e86662fa", 00:24:44.859 "strip_size_kb": 0, 00:24:44.859 "state": "online", 00:24:44.859 "raid_level": "raid1", 00:24:44.859 "superblock": false, 00:24:44.859 "num_base_bdevs": 2, 00:24:44.859 "num_base_bdevs_discovered": 2, 00:24:44.859 "num_base_bdevs_operational": 2, 00:24:44.859 "base_bdevs_list": [ 00:24:44.859 { 00:24:44.859 "name": "spare", 00:24:44.859 "uuid": "9abe1476-83e2-599b-bc20-355182ceeb82", 00:24:44.859 "is_configured": true, 00:24:44.859 "data_offset": 0, 00:24:44.859 "data_size": 65536 00:24:44.859 }, 00:24:44.859 { 00:24:44.859 "name": "BaseBdev2", 00:24:44.859 "uuid": "5bc615ea-3f54-55c9-a26d-93d7098f7909", 00:24:44.859 "is_configured": true, 00:24:44.859 "data_offset": 0, 00:24:44.859 "data_size": 65536 00:24:44.859 } 00:24:44.859 ] 00:24:44.859 }' 00:24:44.859 00:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:44.859 00:08:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.424 00:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:45.424 [2024-07-25 00:08:41.208511] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:45.424 [2024-07-25 00:08:41.208732] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:45.424 [2024-07-25 00:08:41.208970] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:45.424 [2024-07-25 00:08:41.209174] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:45.424 [2024-07-25 00:08:41.209339] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:24:45.424 00:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:24:45.424 00:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i = 0 )) 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:45.682 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:45.940 /dev/nbd0 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:45.940 1+0 records in 00:24:45.940 1+0 records out 00:24:45.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328209 s, 12.5 MB/s 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:45.940 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:46.197 /dev/nbd1 00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:46.197 1+0 records in 00:24:46.197 1+0 records out 00:24:46.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391815 s, 10.5 MB/s 00:24:46.197 00:08:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:46.197 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:24:46.197 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:46.197 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:46.197 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:24:46.197 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:46.197 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:46.197 00:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:46.454 00:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:46.454 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:46.454 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:46.454 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:46.454 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:46.454 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:46.454 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:46.711 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:46.711 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:46.711 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:46.711 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:46.711 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:46.711 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:46.711 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:46.711 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:46.711 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:46.711 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 96840 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 96840 ']' 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 96840 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96840
00:24:46.969 killing process with pid 96840
Received shutdown signal, test time was about 60.000000 seconds
00:24:46.969
00:24:46.969 Latency(us)
00:24:46.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:46.969 ===================================================================================================================
00:24:46.969 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:46.969 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:46.970 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96840' 00:24:46.970 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 96840 00:24:46.970 [2024-07-25 00:08:42.714804] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 96840 00:24:47.227 [2024-07-25 00:08:42.939899] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:48.160 ************************************
00:24:48.160 END TEST raid_rebuild_test
00:24:48.160 ************************************
00:24:48.160 00:08:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0
00:24:48.160
00:24:48.160 real 0m22.288s
00:24:48.160 user 0m28.388s
00:24:48.160 sys 0m4.100s
00:24:48.160 00:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:48.160 00:08:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.419 00:08:44 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:24:48.419 00:08:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:48.419 00:08:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:48.419 00:08:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:24:48.419 ************************************
00:24:48.419 START TEST raid_rebuild_test_sb
************************************
00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:24:48.419
00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=97347 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 97347 /var/tmp/spdk-raid.sock 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 97347 ']' 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:48.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:48.419 00:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.419 [2024-07-25 00:08:44.115563] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:24:48.419 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:48.419 Zero copy mechanism will not be used. 00:24:48.419 [2024-07-25 00:08:44.115746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97347 ] 00:24:48.419 [2024-07-25 00:08:44.281312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.677 [2024-07-25 00:08:44.452547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.934 [2024-07-25 00:08:44.612728] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.192 00:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:49.192 00:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:24:49.192 00:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:49.192 00:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:49.451 BaseBdev1_malloc 00:24:49.709 00:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:49.967 [2024-07-25 00:08:45.579501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:49.967 [2024-07-25 00:08:45.579644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.967 [2024-07-25 00:08:45.579696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:24:49.967 [2024-07-25 00:08:45.579714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.967 [2024-07-25 00:08:45.582440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.967 [2024-07-25 00:08:45.582485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:49.967 BaseBdev1 00:24:49.967 00:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:49.967 00:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:50.226 BaseBdev2_malloc 00:24:50.226 00:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:50.226 [2024-07-25 00:08:46.045597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:50.226 [2024-07-25 00:08:46.045708] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.226 [2024-07-25 00:08:46.045740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:24:50.226 [2024-07-25 00:08:46.045760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.226 [2024-07-25 00:08:46.048355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.226 [2024-07-25 00:08:46.048416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:50.226 BaseBdev2 00:24:50.226 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:50.484 spare_malloc 00:24:50.484 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:50.743 spare_delay 00:24:50.743 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:51.000 [2024-07-25 00:08:46.718626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:51.000 [2024-07-25 00:08:46.718720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.000 [2024-07-25 00:08:46.718751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:24:51.001 [2024-07-25 00:08:46.718766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.001 [2024-07-25 00:08:46.721500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.001 [2024-07-25 00:08:46.721578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:51.001 spare 00:24:51.001 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:51.259 [2024-07-25 00:08:46.938755] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:51.259 [2024-07-25 00:08:46.940872] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:51.259 [2024-07-25 00:08:46.941105] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:24:51.259 [2024-07-25 00:08:46.941158] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:51.259 [2024-07-25 00:08:46.941303] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:24:51.259 [2024-07-25 00:08:46.941684] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:24:51.259 [2024-07-25 00:08:46.941711] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:24:51.259 [2024-07-25 00:08:46.941896] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.259 00:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.524 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:51.524 "name": "raid_bdev1", 00:24:51.524 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:24:51.524 "strip_size_kb": 0, 00:24:51.524 "state": "online", 00:24:51.524 "raid_level": "raid1", 00:24:51.524 "superblock": true, 00:24:51.524 "num_base_bdevs": 2, 00:24:51.524 "num_base_bdevs_discovered": 2, 00:24:51.524 "num_base_bdevs_operational": 2, 00:24:51.524 "base_bdevs_list": [ 00:24:51.524 { 00:24:51.524 "name": "BaseBdev1", 00:24:51.524 "uuid": "2cac9198-7add-5fb6-afff-8172d4474afc", 00:24:51.524 "is_configured": true, 00:24:51.524 "data_offset": 2048, 00:24:51.524 "data_size": 63488 00:24:51.524 }, 00:24:51.524 { 00:24:51.524 "name": "BaseBdev2", 00:24:51.524 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:24:51.524 "is_configured": true, 00:24:51.524 "data_offset": 2048, 00:24:51.524 "data_size": 63488 00:24:51.524 } 00:24:51.524 ] 00:24:51.524 }' 00:24:51.524 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:51.524 00:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.813 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:24:51.813 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:52.086 [2024-07-25 00:08:47.695292] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:52.086 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:24:52.086 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:52.086 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:24:52.344 00:08:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:52.344 00:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:52.344 [2024-07-25 00:08:48.187185] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:24:52.344 /dev/nbd0 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:52.603 1+0 records in 00:24:52.603 1+0 records out 00:24:52.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231933 s, 17.7 MB/s 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:24:52.603 
00:08:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:24:52.603 00:08:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:24:59.165 63488+0 records in 00:24:59.165 63488+0 records out 00:24:59.165 32505856 bytes (33 MB, 31 MiB) copied, 5.82652 s, 5.6 MB/s 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:59.165 [2024-07-25 00:08:54.336193] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:59.165 [2024-07-25 00:08:54.589846] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:59.165 00:08:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.165 "name": "raid_bdev1", 00:24:59.165 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:24:59.165 "strip_size_kb": 0, 00:24:59.165 "state": "online", 00:24:59.165 "raid_level": "raid1", 00:24:59.165 "superblock": true, 00:24:59.165 "num_base_bdevs": 2, 00:24:59.165 "num_base_bdevs_discovered": 1, 00:24:59.165 "num_base_bdevs_operational": 1, 00:24:59.165 "base_bdevs_list": [ 00:24:59.165 { 00:24:59.165 "name": null, 00:24:59.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.165 "is_configured": false, 00:24:59.165 "data_offset": 2048, 00:24:59.165 "data_size": 63488 00:24:59.165 }, 00:24:59.165 { 00:24:59.165 "name": "BaseBdev2", 00:24:59.165 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:24:59.165 "is_configured": true, 00:24:59.165 "data_offset": 2048, 00:24:59.165 "data_size": 63488 00:24:59.165 } 00:24:59.165 ] 00:24:59.165 }' 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.165 00:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:59.424 00:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:59.682 [2024-07-25 00:08:55.382205] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:59.682 [2024-07-25 00:08:55.396172] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2e80 00:24:59.682 [2024-07-25 00:08:55.398275] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:59.682 00:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:00.618 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:00.618 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:00.618 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:00.618 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:00.618 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:00.618 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.618 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.876 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:00.876 "name": "raid_bdev1", 00:25:00.876 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:00.876 "strip_size_kb": 0, 00:25:00.876 "state": "online", 00:25:00.877 "raid_level": "raid1", 00:25:00.877 "superblock": true, 00:25:00.877 "num_base_bdevs": 2, 00:25:00.877 "num_base_bdevs_discovered": 2, 00:25:00.877 "num_base_bdevs_operational": 2, 00:25:00.877 "process": { 00:25:00.877 "type": "rebuild", 00:25:00.877 "target": "spare", 00:25:00.877 
"progress": { 00:25:00.877 "blocks": 24576, 00:25:00.877 "percent": 38 00:25:00.877 } 00:25:00.877 }, 00:25:00.877 "base_bdevs_list": [ 00:25:00.877 { 00:25:00.877 "name": "spare", 00:25:00.877 "uuid": "dcb2e5dc-99f3-58ba-ba5c-49e443c8b9f4", 00:25:00.877 "is_configured": true, 00:25:00.877 "data_offset": 2048, 00:25:00.877 "data_size": 63488 00:25:00.877 }, 00:25:00.877 { 00:25:00.877 "name": "BaseBdev2", 00:25:00.877 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:00.877 "is_configured": true, 00:25:00.877 "data_offset": 2048, 00:25:00.877 "data_size": 63488 00:25:00.877 } 00:25:00.877 ] 00:25:00.877 }' 00:25:00.877 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:00.877 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:00.877 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:00.877 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:00.877 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:01.135 [2024-07-25 00:08:56.904546] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:01.135 [2024-07-25 00:08:56.905884] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:01.135 [2024-07-25 00:08:56.905983] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.135 [2024-07-25 00:08:56.906005] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:01.135 [2024-07-25 00:08:56.906018] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.135 00:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.394 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:01.394 "name": "raid_bdev1", 00:25:01.394 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:01.394 "strip_size_kb": 0, 00:25:01.394 "state": 
"online", 00:25:01.394 "raid_level": "raid1", 00:25:01.394 "superblock": true, 00:25:01.394 "num_base_bdevs": 2, 00:25:01.394 "num_base_bdevs_discovered": 1, 00:25:01.394 "num_base_bdevs_operational": 1, 00:25:01.394 "base_bdevs_list": [ 00:25:01.394 { 00:25:01.394 "name": null, 00:25:01.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.394 "is_configured": false, 00:25:01.394 "data_offset": 2048, 00:25:01.394 "data_size": 63488 00:25:01.394 }, 00:25:01.394 { 00:25:01.394 "name": "BaseBdev2", 00:25:01.394 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:01.394 "is_configured": true, 00:25:01.394 "data_offset": 2048, 00:25:01.394 "data_size": 63488 00:25:01.394 } 00:25:01.394 ] 00:25:01.394 }' 00:25:01.394 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:01.394 00:08:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:01.653 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:01.653 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:01.653 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:01.653 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:01.653 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:01.653 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.653 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.911 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:01.911 "name": "raid_bdev1", 00:25:01.911 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:01.911 "strip_size_kb": 0, 00:25:01.911 "state": "online", 00:25:01.911 "raid_level": "raid1", 00:25:01.911 "superblock": true, 00:25:01.911 "num_base_bdevs": 2, 00:25:01.911 "num_base_bdevs_discovered": 1, 00:25:01.911 "num_base_bdevs_operational": 1, 00:25:01.911 "base_bdevs_list": [ 00:25:01.911 { 00:25:01.911 "name": null, 00:25:01.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.911 "is_configured": false, 00:25:01.911 "data_offset": 2048, 00:25:01.911 "data_size": 63488 00:25:01.911 }, 00:25:01.911 { 00:25:01.911 "name": "BaseBdev2", 00:25:01.911 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:01.911 "is_configured": true, 00:25:01.911 "data_offset": 2048, 00:25:01.912 "data_size": 63488 00:25:01.912 } 00:25:01.912 ] 00:25:01.912 }' 00:25:01.912 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:01.912 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:01.912 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:01.912 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:01.912 00:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:02.170 [2024-07-25 00:08:57.993115] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:02.170 [2024-07-25 00:08:58.006483] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2f50 00:25:02.170 [2024-07-25 00:08:58.008675] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:02.170 00:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:03.546 "name": "raid_bdev1", 00:25:03.546 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:03.546 "strip_size_kb": 0, 00:25:03.546 "state": "online", 00:25:03.546 "raid_level": "raid1", 00:25:03.546 "superblock": true, 00:25:03.546 "num_base_bdevs": 2, 00:25:03.546 "num_base_bdevs_discovered": 2, 00:25:03.546 "num_base_bdevs_operational": 2, 00:25:03.546 "process": { 00:25:03.546 "type": "rebuild", 00:25:03.546 "target": "spare", 00:25:03.546 "progress": { 00:25:03.546 "blocks": 24576, 00:25:03.546 "percent": 38 00:25:03.546 } 00:25:03.546 }, 00:25:03.546 "base_bdevs_list": [ 00:25:03.546 { 00:25:03.546 "name": "spare", 00:25:03.546 "uuid": "dcb2e5dc-99f3-58ba-ba5c-49e443c8b9f4", 00:25:03.546 "is_configured": true, 00:25:03.546 "data_offset": 2048, 00:25:03.546 "data_size": 63488 00:25:03.546 }, 00:25:03.546 { 00:25:03.546 "name": "BaseBdev2", 00:25:03.546 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:03.546 "is_configured": true, 00:25:03.546 "data_offset": 2048, 00:25:03.546 "data_size": 63488 00:25:03.546 } 00:25:03.546 ] 00:25:03.546 }' 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:25:03.546 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:25:03.546 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:25:03.547 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:25:03.547 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:25:03.547 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=729 
00:25:03.547 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:03.547 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:03.547 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:03.547 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:03.547 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:03.547 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:03.547 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.547 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.805 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:03.805 "name": "raid_bdev1", 00:25:03.805 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:03.805 "strip_size_kb": 0, 00:25:03.805 "state": "online", 00:25:03.805 "raid_level": "raid1", 00:25:03.805 "superblock": true, 00:25:03.805 "num_base_bdevs": 2, 00:25:03.805 "num_base_bdevs_discovered": 2, 00:25:03.806 "num_base_bdevs_operational": 2, 00:25:03.806 "process": { 00:25:03.806 "type": "rebuild", 00:25:03.806 "target": "spare", 00:25:03.806 "progress": { 00:25:03.806 "blocks": 28672, 00:25:03.806 "percent": 45 00:25:03.806 } 00:25:03.806 }, 00:25:03.806 "base_bdevs_list": [ 00:25:03.806 { 00:25:03.806 "name": "spare", 00:25:03.806 "uuid": "dcb2e5dc-99f3-58ba-ba5c-49e443c8b9f4", 00:25:03.806 "is_configured": true, 00:25:03.806 "data_offset": 2048, 00:25:03.806 "data_size": 63488 00:25:03.806 }, 00:25:03.806 { 00:25:03.806 "name": "BaseBdev2", 00:25:03.806 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:03.806 "is_configured": true, 00:25:03.806 "data_offset": 2048, 00:25:03.806 "data_size": 63488 00:25:03.806 } 00:25:03.806 ] 00:25:03.806 }' 00:25:03.806 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:03.806 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:03.806 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:03.806 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:03.806 00:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:04.742 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:04.742 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:04.742 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:04.742 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:04.742 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:04.742 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:04.742 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.742 00:09:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.020 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:05.020 "name": "raid_bdev1", 00:25:05.020 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:05.020 "strip_size_kb": 0, 00:25:05.020 "state": "online", 00:25:05.020 "raid_level": "raid1", 00:25:05.020 "superblock": true, 00:25:05.020 "num_base_bdevs": 2, 00:25:05.020 "num_base_bdevs_discovered": 2, 00:25:05.020 "num_base_bdevs_operational": 2, 00:25:05.020 "process": { 00:25:05.020 "type": "rebuild", 00:25:05.020 "target": "spare", 00:25:05.020 "progress": { 00:25:05.020 "blocks": 55296, 00:25:05.021 "percent": 87 00:25:05.021 } 00:25:05.021 }, 00:25:05.021 "base_bdevs_list": [ 00:25:05.021 { 00:25:05.021 "name": "spare", 00:25:05.021 "uuid": "dcb2e5dc-99f3-58ba-ba5c-49e443c8b9f4", 00:25:05.021 "is_configured": true, 00:25:05.021 "data_offset": 2048, 00:25:05.021 "data_size": 63488 00:25:05.021 }, 00:25:05.021 { 00:25:05.021 "name": "BaseBdev2", 00:25:05.021 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:05.021 "is_configured": true, 00:25:05.021 "data_offset": 2048, 00:25:05.021 "data_size": 63488 00:25:05.021 } 00:25:05.021 ] 00:25:05.021 }' 00:25:05.021 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:05.021 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:05.021 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:05.021 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:05.021 00:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:05.293 [2024-07-25 00:09:01.124286] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:05.293 [2024-07-25 00:09:01.124378] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:05.293 [2024-07-25 00:09:01.124522] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:06.229 00:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:06.229 00:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:06.229 00:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:06.229 00:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:06.229 00:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:06.229 00:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:06.229 00:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.229 00:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.229 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:06.229 "name": "raid_bdev1", 00:25:06.229 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:06.229 "strip_size_kb": 0, 00:25:06.229 "state": "online", 00:25:06.229 "raid_level": "raid1", 00:25:06.229 "superblock": true, 00:25:06.229 
"num_base_bdevs": 2, 00:25:06.229 "num_base_bdevs_discovered": 2, 00:25:06.229 "num_base_bdevs_operational": 2, 00:25:06.229 "base_bdevs_list": [ 00:25:06.229 { 00:25:06.229 "name": "spare", 00:25:06.229 "uuid": "dcb2e5dc-99f3-58ba-ba5c-49e443c8b9f4", 00:25:06.229 "is_configured": true, 00:25:06.229 "data_offset": 2048, 00:25:06.229 "data_size": 63488 00:25:06.229 }, 00:25:06.229 { 00:25:06.229 "name": "BaseBdev2", 00:25:06.229 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:06.229 "is_configured": true, 00:25:06.229 "data_offset": 2048, 00:25:06.230 "data_size": 63488 00:25:06.230 } 00:25:06.230 ] 00:25:06.230 }' 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.230 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:06.489 "name": "raid_bdev1", 00:25:06.489 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:06.489 "strip_size_kb": 0, 00:25:06.489 "state": "online", 00:25:06.489 "raid_level": "raid1", 00:25:06.489 "superblock": true, 00:25:06.489 "num_base_bdevs": 2, 00:25:06.489 "num_base_bdevs_discovered": 2, 00:25:06.489 "num_base_bdevs_operational": 2, 00:25:06.489 "base_bdevs_list": [ 00:25:06.489 { 00:25:06.489 "name": "spare", 00:25:06.489 "uuid": "dcb2e5dc-99f3-58ba-ba5c-49e443c8b9f4", 00:25:06.489 "is_configured": true, 00:25:06.489 "data_offset": 2048, 00:25:06.489 "data_size": 63488 00:25:06.489 }, 00:25:06.489 { 00:25:06.489 "name": "BaseBdev2", 00:25:06.489 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:06.489 "is_configured": true, 00:25:06.489 "data_offset": 2048, 00:25:06.489 "data_size": 63488 00:25:06.489 } 00:25:06.489 ] 00:25:06.489 }' 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.489 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.749 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.749 "name": "raid_bdev1", 00:25:06.749 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:06.749 "strip_size_kb": 0, 00:25:06.749 "state": "online", 00:25:06.749 "raid_level": "raid1", 00:25:06.749 "superblock": true, 00:25:06.749 "num_base_bdevs": 2, 00:25:06.749 "num_base_bdevs_discovered": 2, 00:25:06.749 "num_base_bdevs_operational": 2, 00:25:06.749 "base_bdevs_list": [ 00:25:06.749 { 00:25:06.749 "name": "spare", 00:25:06.749 "uuid": "dcb2e5dc-99f3-58ba-ba5c-49e443c8b9f4", 00:25:06.749 "is_configured": true, 00:25:06.749 "data_offset": 2048, 00:25:06.749 "data_size": 63488 00:25:06.749 }, 00:25:06.749 { 00:25:06.749 "name": "BaseBdev2", 00:25:06.749 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:06.749 "is_configured": true, 00:25:06.749 "data_offset": 2048, 00:25:06.749 "data_size": 63488 00:25:06.749 } 00:25:06.749 ] 00:25:06.749 }' 00:25:06.749 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.749 00:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.317 00:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:07.317 [2024-07-25 00:09:03.176855] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:07.317 [2024-07-25 00:09:03.176918] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:07.317 [2024-07-25 00:09:03.177022] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:07.317 [2024-07-25 00:09:03.177106] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:07.317 [2024-07-25 00:09:03.177122] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@735 -- # jq length 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:07.576 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:07.834 /dev/nbd0 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:07.834 1+0 records in 00:25:07.834 1+0 records out 00:25:07.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404018 s, 10.1 MB/s 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:25:07.834 00:09:03 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:07.834 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:08.092 /dev/nbd1 00:25:08.092 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:08.092 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:08.092 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:08.092 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:25:08.092 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:08.092 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:08.092 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:08.092 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:25:08.092 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:08.092 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:08.092 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:08.092 1+0 records in 00:25:08.093 1+0 records out 00:25:08.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388397 s, 10.5 MB/s 00:25:08.093 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:08.351 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:25:08.351 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:08.351 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:08.351 00:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:25:08.351 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:08.351 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:08.351 00:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:08.351 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:08.351 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:08.351 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:08.351 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:08.351 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:08.351 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:08.351 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:08.610 00:09:04 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:08.610 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:08.610 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:08.610 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:08.611 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:08.611 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:08.611 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:08.611 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:08.611 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:08.611 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:08.870 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:08.870 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:08.870 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:08.870 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:08.870 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:08.870 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:08.870 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:08.870 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:08.870 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:25:08.870 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:09.129 00:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:09.387 [2024-07-25 00:09:05.236843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:09.387 [2024-07-25 00:09:05.236940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.387 [2024-07-25 00:09:05.236980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:25:09.387 [2024-07-25 00:09:05.236995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.387 [2024-07-25 00:09:05.239712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.387 [2024-07-25 00:09:05.239755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:09.388 [2024-07-25 00:09:05.239901] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:09.388 [2024-07-25 00:09:05.239964] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:09.388 [2024-07-25 00:09:05.240143] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:09.388 spare 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.647 [2024-07-25 00:09:05.340260] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:25:09.647 [2024-07-25 00:09:05.340313] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:09.647 [2024-07-25 00:09:05.340490] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1600 00:25:09.647 [2024-07-25 00:09:05.340952] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:25:09.647 [2024-07-25 00:09:05.340986] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:25:09.647 [2024-07-25 00:09:05.341175] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.647 "name": "raid_bdev1", 00:25:09.647 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:09.647 "strip_size_kb": 0, 00:25:09.647 "state": "online", 00:25:09.647 "raid_level": "raid1", 00:25:09.647 "superblock": true, 00:25:09.647 "num_base_bdevs": 2, 00:25:09.647 "num_base_bdevs_discovered": 2, 00:25:09.647 "num_base_bdevs_operational": 2, 00:25:09.647 "base_bdevs_list": [ 00:25:09.647 { 00:25:09.647 "name": "spare", 00:25:09.647 "uuid": "dcb2e5dc-99f3-58ba-ba5c-49e443c8b9f4", 00:25:09.647 "is_configured": true, 00:25:09.647 "data_offset": 2048, 00:25:09.647 "data_size": 63488 00:25:09.647 }, 00:25:09.647 { 00:25:09.647 "name": "BaseBdev2", 00:25:09.647 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:09.647 "is_configured": true, 00:25:09.647 "data_offset": 2048, 00:25:09.647 "data_size": 63488 00:25:09.647 } 00:25:09.647 ] 00:25:09.647 }' 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.647 00:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.214 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:10.214 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:10.215 00:09:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:10.215 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:10.215 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:10.215 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.215 00:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.215 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:10.215 "name": "raid_bdev1", 00:25:10.215 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:10.215 "strip_size_kb": 0, 00:25:10.215 "state": "online", 00:25:10.215 "raid_level": "raid1", 00:25:10.215 "superblock": true, 00:25:10.215 "num_base_bdevs": 2, 00:25:10.215 "num_base_bdevs_discovered": 2, 00:25:10.215 "num_base_bdevs_operational": 2, 00:25:10.215 "base_bdevs_list": [ 00:25:10.215 { 00:25:10.215 "name": "spare", 00:25:10.215 "uuid": "dcb2e5dc-99f3-58ba-ba5c-49e443c8b9f4", 00:25:10.215 "is_configured": true, 00:25:10.215 "data_offset": 2048, 00:25:10.215 "data_size": 63488 00:25:10.215 }, 00:25:10.215 { 00:25:10.215 "name": "BaseBdev2", 00:25:10.215 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:10.215 "is_configured": true, 00:25:10.215 "data_offset": 2048, 00:25:10.215 "data_size": 63488 00:25:10.215 } 00:25:10.215 ] 00:25:10.215 }' 00:25:10.215 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:10.473 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:10.473 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:10.473 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:10.473 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.473 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:10.473 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:25:10.473 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:10.732 [2024-07-25 00:09:06.553616] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.732 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.990 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:10.990 "name": "raid_bdev1", 00:25:10.990 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:10.990 "strip_size_kb": 0, 00:25:10.990 "state": "online", 00:25:10.990 "raid_level": "raid1", 00:25:10.990 "superblock": true, 00:25:10.990 "num_base_bdevs": 2, 00:25:10.990 "num_base_bdevs_discovered": 1, 00:25:10.991 "num_base_bdevs_operational": 1, 00:25:10.991 "base_bdevs_list": [ 00:25:10.991 { 00:25:10.991 "name": null, 00:25:10.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.991 "is_configured": false, 00:25:10.991 "data_offset": 2048, 00:25:10.991 "data_size": 63488 00:25:10.991 }, 00:25:10.991 { 00:25:10.991 "name": "BaseBdev2", 00:25:10.991 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:10.991 "is_configured": true, 00:25:10.991 "data_offset": 2048, 00:25:10.991 "data_size": 63488 00:25:10.991 } 00:25:10.991 ] 00:25:10.991 }' 00:25:10.991 00:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:10.991 00:09:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.556 00:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:11.556 [2024-07-25 00:09:07.397939] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:11.556 [2024-07-25 00:09:07.398161] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:11.556 [2024-07-25 00:09:07.398197] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:11.556 [2024-07-25 00:09:07.398248] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:11.556 [2024-07-25 00:09:07.411794] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc16d0 00:25:11.556 [2024-07-25 00:09:07.417936] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:11.814 00:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:25:12.747 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:12.747 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:12.747 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:12.748 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:12.748 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:12.748 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.748 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.028 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:13.028 "name": "raid_bdev1", 00:25:13.028 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:13.028 "strip_size_kb": 0, 00:25:13.028 "state": "online", 00:25:13.028 "raid_level": "raid1", 00:25:13.028 "superblock": true, 00:25:13.028 "num_base_bdevs": 2, 00:25:13.028 "num_base_bdevs_discovered": 2, 00:25:13.028 "num_base_bdevs_operational": 2, 00:25:13.028 "process": { 00:25:13.028 "type": "rebuild", 00:25:13.028 "target": "spare", 00:25:13.028 "progress": { 00:25:13.028 "blocks": 24576, 00:25:13.028 "percent": 38 00:25:13.028 } 00:25:13.028 }, 00:25:13.028 "base_bdevs_list": [ 00:25:13.028 { 00:25:13.028 "name": "spare", 00:25:13.028 "uuid": "dcb2e5dc-99f3-58ba-ba5c-49e443c8b9f4", 00:25:13.028 "is_configured": true, 00:25:13.028 "data_offset": 2048, 00:25:13.028 "data_size": 63488 00:25:13.028 }, 00:25:13.028 { 00:25:13.028 "name": "BaseBdev2", 00:25:13.028 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:13.028 "is_configured": true, 00:25:13.028 "data_offset": 2048, 00:25:13.028 "data_size": 63488 00:25:13.028 } 00:25:13.028 ] 00:25:13.028 }' 00:25:13.028 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:13.028 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.028 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:13.028 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.028 00:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:13.295 [2024-07-25 00:09:08.959425] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:13.295 [2024-07-25 00:09:09.026484] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:13.295 [2024-07-25 00:09:09.026576] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.295 
[2024-07-25 00:09:09.026598] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:13.295 [2024-07-25 00:09:09.026610] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.295 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.554 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.554 "name": "raid_bdev1", 00:25:13.554 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:13.554 "strip_size_kb": 0, 00:25:13.554 "state": "online", 00:25:13.554 "raid_level": "raid1", 00:25:13.554 "superblock": true, 00:25:13.554 "num_base_bdevs": 2, 00:25:13.554 "num_base_bdevs_discovered": 1, 00:25:13.554 "num_base_bdevs_operational": 1, 00:25:13.554 "base_bdevs_list": [ 00:25:13.554 { 00:25:13.554 "name": null, 00:25:13.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.554 "is_configured": false, 00:25:13.554 "data_offset": 2048, 00:25:13.554 "data_size": 63488 00:25:13.554 }, 00:25:13.554 { 00:25:13.554 "name": "BaseBdev2", 00:25:13.554 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:13.554 "is_configured": true, 00:25:13.554 "data_offset": 2048, 00:25:13.554 "data_size": 63488 00:25:13.554 } 00:25:13.554 ] 00:25:13.554 }' 00:25:13.554 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.554 00:09:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.812 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:14.070 [2024-07-25 00:09:09.870427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:14.070 [2024-07-25 00:09:09.870530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.070 [2024-07-25 00:09:09.870563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:25:14.070 [2024-07-25 00:09:09.870595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.070 [2024-07-25 00:09:09.871270] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.070 [2024-07-25 00:09:09.871321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:14.070 [2024-07-25 00:09:09.871448] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:14.070 [2024-07-25 00:09:09.871469] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:14.070 [2024-07-25 00:09:09.871483] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:14.070 [2024-07-25 00:09:09.871511] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:14.070 [2024-07-25 00:09:09.884857] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc17a0 00:25:14.070 spare 00:25:14.070 [2024-07-25 00:09:09.886996] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:14.070 00:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:25:15.450 00:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.450 00:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:15.450 00:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:15.450 00:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:15.450 00:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:15.450 00:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.450 00:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.450 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:15.450 "name": "raid_bdev1", 00:25:15.450 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:15.450 "strip_size_kb": 0, 00:25:15.450 "state": "online", 00:25:15.450 "raid_level": "raid1", 00:25:15.450 "superblock": true, 00:25:15.450 "num_base_bdevs": 2, 00:25:15.450 "num_base_bdevs_discovered": 2, 00:25:15.450 "num_base_bdevs_operational": 2, 00:25:15.450 "process": { 00:25:15.450 "type": "rebuild", 00:25:15.450 "target": "spare", 00:25:15.450 "progress": { 00:25:15.450 "blocks": 24576, 00:25:15.450 "percent": 38 00:25:15.450 } 00:25:15.450 }, 00:25:15.450 "base_bdevs_list": [ 00:25:15.450 { 00:25:15.450 "name": "spare", 00:25:15.450 "uuid": "dcb2e5dc-99f3-58ba-ba5c-49e443c8b9f4", 00:25:15.450 "is_configured": true, 00:25:15.450 "data_offset": 2048, 00:25:15.450 "data_size": 63488 00:25:15.450 }, 00:25:15.450 { 00:25:15.450 "name": "BaseBdev2", 00:25:15.450 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:15.450 "is_configured": true, 00:25:15.450 "data_offset": 2048, 00:25:15.450 "data_size": 63488 00:25:15.450 } 00:25:15.450 ] 00:25:15.450 }' 00:25:15.450 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:15.450 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.450 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:15.450 
00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.450 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:15.708 [2024-07-25 00:09:11.353114] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:15.708 [2024-07-25 00:09:11.395247] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:15.708 [2024-07-25 00:09:11.395534] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.708 [2024-07-25 00:09:11.395568] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:15.708 [2024-07-25 00:09:11.395581] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.708 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.965 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:15.965 "name": "raid_bdev1", 00:25:15.965 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:15.965 "strip_size_kb": 0, 00:25:15.965 "state": "online", 00:25:15.965 "raid_level": "raid1", 00:25:15.965 "superblock": true, 00:25:15.965 "num_base_bdevs": 2, 00:25:15.965 "num_base_bdevs_discovered": 1, 00:25:15.965 "num_base_bdevs_operational": 1, 00:25:15.965 "base_bdevs_list": [ 00:25:15.965 { 00:25:15.965 "name": null, 00:25:15.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.965 "is_configured": false, 00:25:15.965 "data_offset": 2048, 00:25:15.965 "data_size": 63488 00:25:15.965 }, 00:25:15.965 { 00:25:15.965 "name": "BaseBdev2", 00:25:15.965 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:15.965 "is_configured": true, 00:25:15.965 "data_offset": 2048, 00:25:15.965 "data_size": 63488 00:25:15.965 } 00:25:15.965 ] 00:25:15.965 }' 00:25:15.965 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:15.965 00:09:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.223 00:09:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:16.223 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:16.223 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:16.223 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:16.223 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:16.223 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.223 00:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.480 00:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:16.480 "name": "raid_bdev1", 00:25:16.480 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:16.480 "strip_size_kb": 0, 00:25:16.480 "state": "online", 00:25:16.480 "raid_level": "raid1", 00:25:16.480 "superblock": true, 00:25:16.480 "num_base_bdevs": 2, 00:25:16.480 "num_base_bdevs_discovered": 1, 00:25:16.480 "num_base_bdevs_operational": 1, 00:25:16.480 "base_bdevs_list": [ 00:25:16.480 { 00:25:16.480 "name": null, 00:25:16.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.481 "is_configured": false, 00:25:16.481 "data_offset": 2048, 00:25:16.481 "data_size": 63488 00:25:16.481 }, 00:25:16.481 { 00:25:16.481 "name": "BaseBdev2", 00:25:16.481 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:16.481 "is_configured": true, 00:25:16.481 "data_offset": 2048, 00:25:16.481 "data_size": 63488 00:25:16.481 } 00:25:16.481 ] 00:25:16.481 }' 00:25:16.481 00:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:16.481 00:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:16.481 00:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:16.481 00:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:16.481 00:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:16.738 00:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:16.996 [2024-07-25 00:09:12.758379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:16.996 [2024-07-25 00:09:12.758472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.996 [2024-07-25 00:09:12.758508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:25:16.996 [2024-07-25 00:09:12.758522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.996 [2024-07-25 00:09:12.759103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.996 [2024-07-25 00:09:12.759129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:16.996 [2024-07-25 00:09:12.759256] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:16.996 [2024-07-25 00:09:12.759276] 
bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:16.996 [2024-07-25 00:09:12.759291] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:16.996 BaseBdev1 00:25:16.996 00:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:25:17.928 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:17.929 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:17.929 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:17.929 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:17.929 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:17.929 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:17.929 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:17.929 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:17.929 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:17.929 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:17.929 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.929 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.186 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:18.186 "name": "raid_bdev1", 00:25:18.186 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:18.186 "strip_size_kb": 0, 00:25:18.186 "state": "online", 00:25:18.186 "raid_level": "raid1", 00:25:18.186 "superblock": true, 00:25:18.186 "num_base_bdevs": 2, 00:25:18.186 "num_base_bdevs_discovered": 1, 00:25:18.186 "num_base_bdevs_operational": 1, 00:25:18.186 "base_bdevs_list": [ 00:25:18.186 { 00:25:18.186 "name": null, 00:25:18.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.186 "is_configured": false, 00:25:18.186 "data_offset": 2048, 00:25:18.186 "data_size": 63488 00:25:18.186 }, 00:25:18.186 { 00:25:18.186 "name": "BaseBdev2", 00:25:18.186 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:18.186 "is_configured": true, 00:25:18.186 "data_offset": 2048, 00:25:18.186 "data_size": 63488 00:25:18.186 } 00:25:18.186 ] 00:25:18.186 }' 00:25:18.186 00:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:18.186 00:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.444 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:18.444 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:18.444 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:18.444 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:18.444 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:18.444 00:09:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.444 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.701 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:18.701 "name": "raid_bdev1", 00:25:18.701 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:18.701 "strip_size_kb": 0, 00:25:18.701 "state": "online", 00:25:18.701 "raid_level": "raid1", 00:25:18.701 "superblock": true, 00:25:18.701 "num_base_bdevs": 2, 00:25:18.701 "num_base_bdevs_discovered": 1, 00:25:18.701 "num_base_bdevs_operational": 1, 00:25:18.701 "base_bdevs_list": [ 00:25:18.701 { 00:25:18.701 "name": null, 00:25:18.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.701 "is_configured": false, 00:25:18.701 "data_offset": 2048, 00:25:18.701 "data_size": 63488 00:25:18.701 }, 00:25:18.701 { 00:25:18.701 "name": "BaseBdev2", 00:25:18.701 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:18.701 "is_configured": true, 00:25:18.701 "data_offset": 2048, 00:25:18.701 "data_size": 63488 00:25:18.701 } 00:25:18.701 ] 00:25:18.701 }' 00:25:18.701 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:18.701 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:18.701 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:18.701 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:18.701 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:18.701 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:25:18.701 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:18.701 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:18.701 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.701 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:18.959 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.959 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:18.959 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.959 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:18.959 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:18.959 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:18.959 [2024-07-25 00:09:14.798952] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:18.959 [2024-07-25 00:09:14.799134] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:18.960 [2024-07-25 00:09:14.799155] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:18.960 request: 00:25:18.960 { 00:25:18.960 "base_bdev": "BaseBdev1", 00:25:18.960 "raid_bdev": "raid_bdev1", 00:25:18.960 "method": "bdev_raid_add_base_bdev", 00:25:18.960 "req_id": 1 00:25:18.960 } 00:25:18.960 Got JSON-RPC error response 00:25:18.960 response: 00:25:18.960 { 00:25:18.960 "code": -22, 00:25:18.960 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:18.960 } 00:25:18.960 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:25:18.960 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:18.960 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:18.960 00:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:18.960 00:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.334 00:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.334 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:20.334 "name": "raid_bdev1", 00:25:20.334 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:20.334 "strip_size_kb": 0, 00:25:20.334 "state": "online", 00:25:20.334 "raid_level": "raid1", 00:25:20.334 "superblock": true, 00:25:20.334 "num_base_bdevs": 2, 00:25:20.334 "num_base_bdevs_discovered": 1, 00:25:20.334 "num_base_bdevs_operational": 1, 00:25:20.334 "base_bdevs_list": [ 00:25:20.334 { 00:25:20.334 "name": null, 00:25:20.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.334 "is_configured": false, 00:25:20.334 "data_offset": 2048, 00:25:20.334 "data_size": 63488 00:25:20.334 }, 00:25:20.335 { 00:25:20.335 "name": "BaseBdev2", 00:25:20.335 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 
00:25:20.335 "is_configured": true, 00:25:20.335 "data_offset": 2048, 00:25:20.335 "data_size": 63488 00:25:20.335 } 00:25:20.335 ] 00:25:20.335 }' 00:25:20.335 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:20.335 00:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.592 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:20.592 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:20.592 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:20.592 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:20.592 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:20.592 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.592 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:20.851 "name": "raid_bdev1", 00:25:20.851 "uuid": "15217355-b0fa-4f4d-b1d3-e78b131a11cc", 00:25:20.851 "strip_size_kb": 0, 00:25:20.851 "state": "online", 00:25:20.851 "raid_level": "raid1", 00:25:20.851 "superblock": true, 00:25:20.851 "num_base_bdevs": 2, 00:25:20.851 "num_base_bdevs_discovered": 1, 00:25:20.851 "num_base_bdevs_operational": 1, 00:25:20.851 "base_bdevs_list": [ 00:25:20.851 { 00:25:20.851 "name": null, 00:25:20.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.851 "is_configured": false, 00:25:20.851 "data_offset": 2048, 00:25:20.851 "data_size": 63488 00:25:20.851 }, 00:25:20.851 { 00:25:20.851 "name": "BaseBdev2", 00:25:20.851 "uuid": "6f540b33-87d6-5fec-9b35-ede237369f30", 00:25:20.851 "is_configured": true, 00:25:20.851 "data_offset": 2048, 00:25:20.851 "data_size": 63488 00:25:20.851 } 00:25:20.851 ] 00:25:20.851 }' 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 97347 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 97347 ']' 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 97347 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97347 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 97347' 00:25:20.851 killing process with pid 97347 00:25:20.851 Received shutdown signal, test time was about 60.000000 seconds 00:25:20.851 00:25:20.851 Latency(us) 00:25:20.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.851 =================================================================================================================== 00:25:20.851 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 97347 00:25:20.851 [2024-07-25 00:09:16.670000] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:20.851 00:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 97347 00:25:20.851 [2024-07-25 00:09:16.670134] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:20.851 [2024-07-25 00:09:16.670231] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:20.851 [2024-07-25 00:09:16.670247] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:25:21.110 [2024-07-25 00:09:16.898585] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:22.492 ************************************ 00:25:22.492 END TEST raid_rebuild_test_sb 00:25:22.492 ************************************ 00:25:22.492 00:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:25:22.492 00:25:22.492 real 0m33.905s 00:25:22.492 user 0m46.775s 00:25:22.492 sys 0m5.224s 00:25:22.492 00:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:22.492 00:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.492 00:09:17 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:25:22.492 00:09:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:22.492 00:09:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:22.492 00:09:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:22.492 ************************************ 00:25:22.492 START TEST raid_rebuild_test_io 00:25:22.492 ************************************ 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # 
(( i <= num_base_bdevs )) 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:25:22.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=98216 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 98216 /var/tmp/spdk-raid.sock 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 98216 ']' 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:22.492 00:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:22.492 [2024-07-25 00:09:18.075143] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:25:22.492 [2024-07-25 00:09:18.076070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98216 ] 00:25:22.492 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:22.492 Zero copy mechanism will not be used. 
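The trace above launches bdevperf against the RPC socket /var/tmp/spdk-raid.sock with a 60-second randrw workload at 3 MiB I/O size (matching the "I/O size of 3145728" notice), and the test then assembles the RAID1 target from passthru-wrapped malloc bdevs. A minimal sketch of that construction pattern, using the same rpc.py subcommands the trace invokes — here rpc.py stands in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the bdev names mirror the trace:

  # Back each base bdev with a 32 MiB, 512-byte-block malloc device, then
  # wrap it in a passthru bdev so the test can delete and re-add it later.
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2

  # Assemble the raid1 bdev; strip size does not apply to raid1, hence
  # the "strip_size_kb": 0 seen in the bdev_raid_get_bdevs output.
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

  # Confirm the array came online, as verify_raid_bdev_state does.
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'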
00:25:22.492 [2024-07-25 00:09:18.249667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.750 [2024-07-25 00:09:18.419361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.750 [2024-07-25 00:09:18.583491] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:23.315 00:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:23.315 00:09:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:25:23.315 00:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:23.315 00:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:23.574 BaseBdev1_malloc 00:25:23.574 00:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:23.833 [2024-07-25 00:09:19.478234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:23.833 [2024-07-25 00:09:19.478530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.833 [2024-07-25 00:09:19.478607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:25:23.833 [2024-07-25 00:09:19.478932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.833 [2024-07-25 00:09:19.481700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.833 [2024-07-25 00:09:19.481882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:23.833 BaseBdev1 00:25:23.833 00:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:23.833 00:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:24.091 BaseBdev2_malloc 00:25:24.091 00:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:24.091 [2024-07-25 00:09:19.937436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:24.091 [2024-07-25 00:09:19.937527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.091 [2024-07-25 00:09:19.937558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:25:24.091 [2024-07-25 00:09:19.937577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.091 [2024-07-25 00:09:19.940096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.091 [2024-07-25 00:09:19.940144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:24.091 BaseBdev2 00:25:24.091 00:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:24.349 spare_malloc 00:25:24.607 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:24.607 spare_delay 00:25:24.865 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:24.865 [2024-07-25 00:09:20.698623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:24.865 [2024-07-25 00:09:20.698889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.865 [2024-07-25 00:09:20.699074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:25:24.865 [2024-07-25 00:09:20.699201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.865 [2024-07-25 00:09:20.701727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.865 [2024-07-25 00:09:20.701917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:24.865 spare 00:25:24.865 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:25:25.122 [2024-07-25 00:09:20.914792] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:25.122 [2024-07-25 00:09:20.917437] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:25.122 [2024-07-25 00:09:20.917741] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:25:25.122 [2024-07-25 00:09:20.917769] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:25.123 [2024-07-25 00:09:20.917975] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:25:25.123 [2024-07-25 00:09:20.918433] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:25:25.123 [2024-07-25 00:09:20.918449] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:25:25.123 [2024-07-25 00:09:20.918693] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:25:25.123 00:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.382 00:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.382 "name": "raid_bdev1", 00:25:25.382 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:25.382 "strip_size_kb": 0, 00:25:25.382 "state": "online", 00:25:25.382 "raid_level": "raid1", 00:25:25.382 "superblock": false, 00:25:25.382 "num_base_bdevs": 2, 00:25:25.382 "num_base_bdevs_discovered": 2, 00:25:25.382 "num_base_bdevs_operational": 2, 00:25:25.382 "base_bdevs_list": [ 00:25:25.382 { 00:25:25.382 "name": "BaseBdev1", 00:25:25.382 "uuid": "1691418a-f354-537f-a8df-303791b2a286", 00:25:25.382 "is_configured": true, 00:25:25.382 "data_offset": 0, 00:25:25.382 "data_size": 65536 00:25:25.382 }, 00:25:25.382 { 00:25:25.382 "name": "BaseBdev2", 00:25:25.382 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 00:25:25.382 "is_configured": true, 00:25:25.382 "data_offset": 0, 00:25:25.382 "data_size": 65536 00:25:25.382 } 00:25:25.382 ] 00:25:25.382 }' 00:25:25.382 00:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.382 00:09:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:25.950 00:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:25.950 00:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:25:25.950 [2024-07-25 00:09:21.715347] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:25.950 00:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:25:25.950 00:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.950 00:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:26.208 00:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:25:26.208 00:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:25:26.208 00:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:26.208 00:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:26.208 [2024-07-25 00:09:22.062407] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:25:26.208 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:26.208 Zero copy mechanism will not be used. 00:25:26.208 Running I/O for 60 seconds... 
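At this point the rebuild scenario proper begins: with bdevperf's background randrw I/O in flight, the test pulls BaseBdev1 out from under raid_bdev1 and expects the array to stay online in degraded raid1 mode, with the vacated slot reported as a null entry in base_bdevs_list. A sketch of that removal-and-verify step, assuming the same jq-over-bdev_raid_get_bdevs pattern that verify_raid_bdev_state uses in the trace (the tmp variable mirrors the script's local):

  # Pull a base bdev while bdevperf I/O is running.
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1

  # raid1 should survive degraded: still online, one base bdev left.
  tmp=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state' <<< "$tmp") == online ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<< "$tmp") == 1 ]]
  [[ $(jq -r '.num_base_bdevs_operational' <<< "$tmp") == 1 ]]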
00:25:26.465 [2024-07-25 00:09:22.174285] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:26.465 [2024-07-25 00:09:22.181212] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005ba0 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.465 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.723 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.723 "name": "raid_bdev1", 00:25:26.723 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:26.723 "strip_size_kb": 0, 00:25:26.723 "state": "online", 00:25:26.723 "raid_level": "raid1", 00:25:26.723 "superblock": false, 00:25:26.723 "num_base_bdevs": 2, 00:25:26.723 "num_base_bdevs_discovered": 1, 00:25:26.723 "num_base_bdevs_operational": 1, 00:25:26.723 "base_bdevs_list": [ 00:25:26.723 { 00:25:26.723 "name": null, 00:25:26.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.723 "is_configured": false, 00:25:26.723 "data_offset": 0, 00:25:26.723 "data_size": 65536 00:25:26.723 }, 00:25:26.723 { 00:25:26.723 "name": "BaseBdev2", 00:25:26.723 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 00:25:26.723 "is_configured": true, 00:25:26.723 "data_offset": 0, 00:25:26.723 "data_size": 65536 00:25:26.723 } 00:25:26.723 ] 00:25:26.723 }' 00:25:26.723 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.723 00:09:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:26.981 00:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:27.240 [2024-07-25 00:09:23.058433] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:27.498 00:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:27.498 [2024-07-25 00:09:23.120027] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005c70 00:25:27.498 [2024-07-25 00:09:23.122316] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:27.498 [2024-07-25 00:09:23.238888] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:27.498 [2024-07-25 00:09:23.239612] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:27.756 [2024-07-25 00:09:23.382691] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:27.756 [2024-07-25 00:09:23.624466] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:28.014 [2024-07-25 00:09:23.828807] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:28.014 [2024-07-25 00:09:23.829099] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:28.272 [2024-07-25 00:09:24.089594] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:28.272 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:28.272 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:28.272 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:28.272 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:28.272 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:28.272 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.272 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.531 [2024-07-25 00:09:24.205884] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:28.531 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:28.531 "name": "raid_bdev1", 00:25:28.531 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:28.531 "strip_size_kb": 0, 00:25:28.531 "state": "online", 00:25:28.531 "raid_level": "raid1", 00:25:28.531 "superblock": false, 00:25:28.531 "num_base_bdevs": 2, 00:25:28.531 "num_base_bdevs_discovered": 2, 00:25:28.531 "num_base_bdevs_operational": 2, 00:25:28.531 "process": { 00:25:28.531 "type": "rebuild", 00:25:28.531 "target": "spare", 00:25:28.531 "progress": { 00:25:28.531 "blocks": 18432, 00:25:28.531 "percent": 28 00:25:28.531 } 00:25:28.531 }, 00:25:28.531 "base_bdevs_list": [ 00:25:28.531 { 00:25:28.531 "name": "spare", 00:25:28.531 "uuid": "af87042d-f5ff-5e5c-af69-0ad6a78b8ebd", 00:25:28.531 "is_configured": true, 00:25:28.531 "data_offset": 0, 00:25:28.531 "data_size": 65536 00:25:28.531 }, 00:25:28.531 { 00:25:28.531 "name": "BaseBdev2", 00:25:28.531 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 00:25:28.531 "is_configured": true, 00:25:28.531 "data_offset": 0, 00:25:28.531 "data_size": 65536 00:25:28.531 } 00:25:28.531 ] 00:25:28.531 }' 00:25:28.531 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:28.531 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:28.790 00:09:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:28.790 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:28.790 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:28.790 [2024-07-25 00:09:24.433114] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:28.790 [2024-07-25 00:09:24.644674] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:29.060 [2024-07-25 00:09:24.675350] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:29.060 [2024-07-25 00:09:24.685090] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.060 [2024-07-25 00:09:24.685133] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:29.060 [2024-07-25 00:09:24.685151] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:29.060 [2024-07-25 00:09:24.728357] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005ba0 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.060 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.331 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.331 "name": "raid_bdev1", 00:25:29.331 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:29.331 "strip_size_kb": 0, 00:25:29.331 "state": "online", 00:25:29.331 "raid_level": "raid1", 00:25:29.331 "superblock": false, 00:25:29.331 "num_base_bdevs": 2, 00:25:29.331 "num_base_bdevs_discovered": 1, 00:25:29.331 "num_base_bdevs_operational": 1, 00:25:29.331 "base_bdevs_list": [ 00:25:29.331 { 00:25:29.331 "name": null, 00:25:29.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.331 "is_configured": false, 00:25:29.331 "data_offset": 0, 00:25:29.331 "data_size": 65536 00:25:29.331 }, 00:25:29.331 { 00:25:29.331 "name": "BaseBdev2", 00:25:29.331 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 
00:25:29.331 "is_configured": true, 00:25:29.331 "data_offset": 0, 00:25:29.331 "data_size": 65536 00:25:29.331 } 00:25:29.331 ] 00:25:29.331 }' 00:25:29.331 00:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.331 00:09:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:29.589 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:29.589 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:29.589 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:29.589 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:29.589 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:29.589 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.589 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.847 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:29.847 "name": "raid_bdev1", 00:25:29.847 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:29.847 "strip_size_kb": 0, 00:25:29.847 "state": "online", 00:25:29.847 "raid_level": "raid1", 00:25:29.847 "superblock": false, 00:25:29.847 "num_base_bdevs": 2, 00:25:29.847 "num_base_bdevs_discovered": 1, 00:25:29.847 "num_base_bdevs_operational": 1, 00:25:29.847 "base_bdevs_list": [ 00:25:29.847 { 00:25:29.847 "name": null, 00:25:29.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.847 "is_configured": false, 00:25:29.847 "data_offset": 0, 00:25:29.847 "data_size": 65536 00:25:29.847 }, 00:25:29.847 { 00:25:29.847 "name": "BaseBdev2", 00:25:29.847 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 00:25:29.847 "is_configured": true, 00:25:29.847 "data_offset": 0, 00:25:29.847 "data_size": 65536 00:25:29.847 } 00:25:29.847 ] 00:25:29.847 }' 00:25:29.847 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:29.847 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:29.847 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:29.847 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:29.847 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:30.106 [2024-07-25 00:09:25.858413] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:30.106 [2024-07-25 00:09:25.912023] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 00:25:30.106 00:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:30.106 [2024-07-25 00:09:25.914297] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:30.372 [2024-07-25 00:09:26.032232] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:30.372 [2024-07-25 00:09:26.032639] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:30.636 [2024-07-25 00:09:26.248662] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:30.636 [2024-07-25 00:09:26.249199] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:30.893 [2024-07-25 00:09:26.620733] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:31.150 00:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:31.150 00:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:31.150 00:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:31.150 00:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:31.150 00:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:31.150 00:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.150 00:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.150 [2024-07-25 00:09:27.003962] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:31.408 "name": "raid_bdev1", 00:25:31.408 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:31.408 "strip_size_kb": 0, 00:25:31.408 "state": "online", 00:25:31.408 "raid_level": "raid1", 00:25:31.408 "superblock": false, 00:25:31.408 "num_base_bdevs": 2, 00:25:31.408 "num_base_bdevs_discovered": 2, 00:25:31.408 "num_base_bdevs_operational": 2, 00:25:31.408 "process": { 00:25:31.408 "type": "rebuild", 00:25:31.408 "target": "spare", 00:25:31.408 "progress": { 00:25:31.408 "blocks": 14336, 00:25:31.408 "percent": 21 00:25:31.408 } 00:25:31.408 }, 00:25:31.408 "base_bdevs_list": [ 00:25:31.408 { 00:25:31.408 "name": "spare", 00:25:31.408 "uuid": "af87042d-f5ff-5e5c-af69-0ad6a78b8ebd", 00:25:31.408 "is_configured": true, 00:25:31.408 "data_offset": 0, 00:25:31.408 "data_size": 65536 00:25:31.408 }, 00:25:31.408 { 00:25:31.408 "name": "BaseBdev2", 00:25:31.408 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 00:25:31.408 "is_configured": true, 00:25:31.408 "data_offset": 0, 00:25:31.408 "data_size": 65536 00:25:31.408 } 00:25:31.408 ] 00:25:31.408 }' 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:25:31.408 00:09:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=757 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.408 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.408 [2024-07-25 00:09:27.230545] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:31.667 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:31.667 "name": "raid_bdev1", 00:25:31.667 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:31.667 "strip_size_kb": 0, 00:25:31.667 "state": "online", 00:25:31.667 "raid_level": "raid1", 00:25:31.667 "superblock": false, 00:25:31.667 "num_base_bdevs": 2, 00:25:31.667 "num_base_bdevs_discovered": 2, 00:25:31.667 "num_base_bdevs_operational": 2, 00:25:31.667 "process": { 00:25:31.667 "type": "rebuild", 00:25:31.667 "target": "spare", 00:25:31.667 "progress": { 00:25:31.667 "blocks": 16384, 00:25:31.667 "percent": 25 00:25:31.667 } 00:25:31.667 }, 00:25:31.667 "base_bdevs_list": [ 00:25:31.667 { 00:25:31.667 "name": "spare", 00:25:31.667 "uuid": "af87042d-f5ff-5e5c-af69-0ad6a78b8ebd", 00:25:31.667 "is_configured": true, 00:25:31.667 "data_offset": 0, 00:25:31.667 "data_size": 65536 00:25:31.667 }, 00:25:31.667 { 00:25:31.667 "name": "BaseBdev2", 00:25:31.667 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 00:25:31.667 "is_configured": true, 00:25:31.667 "data_offset": 0, 00:25:31.667 "data_size": 65536 00:25:31.667 } 00:25:31.667 ] 00:25:31.667 }' 00:25:31.667 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:31.667 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:31.667 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:31.667 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:31.667 00:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:31.926 [2024-07-25 00:09:27.571477] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:31.926 [2024-07-25 00:09:27.572013] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:31.926 [2024-07-25 00:09:27.705676] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:32.492 [2024-07-25 
00:09:28.153579] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:25:32.750 [2024-07-25 00:09:28.377290] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:32.750 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:32.750 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:32.750 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:32.750 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:32.750 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:32.750 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:32.750 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.750 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.750 [2024-07-25 00:09:28.587239] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:32.750 [2024-07-25 00:09:28.587538] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:33.008 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:33.008 "name": "raid_bdev1", 00:25:33.008 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:33.008 "strip_size_kb": 0, 00:25:33.008 "state": "online", 00:25:33.008 "raid_level": "raid1", 00:25:33.008 "superblock": false, 00:25:33.008 "num_base_bdevs": 2, 00:25:33.008 "num_base_bdevs_discovered": 2, 00:25:33.008 "num_base_bdevs_operational": 2, 00:25:33.008 "process": { 00:25:33.008 "type": "rebuild", 00:25:33.008 "target": "spare", 00:25:33.008 "progress": { 00:25:33.008 "blocks": 36864, 00:25:33.008 "percent": 56 00:25:33.008 } 00:25:33.008 }, 00:25:33.008 "base_bdevs_list": [ 00:25:33.008 { 00:25:33.008 "name": "spare", 00:25:33.008 "uuid": "af87042d-f5ff-5e5c-af69-0ad6a78b8ebd", 00:25:33.008 "is_configured": true, 00:25:33.008 "data_offset": 0, 00:25:33.008 "data_size": 65536 00:25:33.008 }, 00:25:33.008 { 00:25:33.008 "name": "BaseBdev2", 00:25:33.008 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 00:25:33.008 "is_configured": true, 00:25:33.008 "data_offset": 0, 00:25:33.008 "data_size": 65536 00:25:33.008 } 00:25:33.008 ] 00:25:33.008 }' 00:25:33.008 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:33.008 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:33.009 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:33.009 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:33.009 00:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:33.267 [2024-07-25 00:09:28.938169] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:25:33.525 [2024-07-25 00:09:29.269929] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:25:34.092 [2024-07-25 00:09:29.689002] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:34.092 00:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:34.092 00:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:34.092 00:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:34.092 00:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:34.092 00:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:34.092 00:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:34.092 00:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.092 00:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.351 00:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:34.351 "name": "raid_bdev1", 00:25:34.351 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:34.351 "strip_size_kb": 0, 00:25:34.351 "state": "online", 00:25:34.351 "raid_level": "raid1", 00:25:34.351 "superblock": false, 00:25:34.351 "num_base_bdevs": 2, 00:25:34.351 "num_base_bdevs_discovered": 2, 00:25:34.351 "num_base_bdevs_operational": 2, 00:25:34.351 "process": { 00:25:34.351 "type": "rebuild", 00:25:34.351 "target": "spare", 00:25:34.351 "progress": { 00:25:34.351 "blocks": 57344, 00:25:34.351 "percent": 87 00:25:34.351 } 00:25:34.351 }, 00:25:34.351 "base_bdevs_list": [ 00:25:34.351 { 00:25:34.351 "name": "spare", 00:25:34.351 "uuid": "af87042d-f5ff-5e5c-af69-0ad6a78b8ebd", 00:25:34.351 "is_configured": true, 00:25:34.351 "data_offset": 0, 00:25:34.351 "data_size": 65536 00:25:34.351 }, 00:25:34.351 { 00:25:34.351 "name": "BaseBdev2", 00:25:34.351 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 00:25:34.351 "is_configured": true, 00:25:34.351 "data_offset": 0, 00:25:34.351 "data_size": 65536 00:25:34.351 } 00:25:34.351 ] 00:25:34.351 }' 00:25:34.351 00:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:34.351 00:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:34.351 00:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:34.351 00:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:34.351 00:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:34.608 [2024-07-25 00:09:30.352908] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:34.608 [2024-07-25 00:09:30.459494] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:34.608 [2024-07-25 00:09:30.461325] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:35.542 "name": "raid_bdev1", 00:25:35.542 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:35.542 "strip_size_kb": 0, 00:25:35.542 "state": "online", 00:25:35.542 "raid_level": "raid1", 00:25:35.542 "superblock": false, 00:25:35.542 "num_base_bdevs": 2, 00:25:35.542 "num_base_bdevs_discovered": 2, 00:25:35.542 "num_base_bdevs_operational": 2, 00:25:35.542 "base_bdevs_list": [ 00:25:35.542 { 00:25:35.542 "name": "spare", 00:25:35.542 "uuid": "af87042d-f5ff-5e5c-af69-0ad6a78b8ebd", 00:25:35.542 "is_configured": true, 00:25:35.542 "data_offset": 0, 00:25:35.542 "data_size": 65536 00:25:35.542 }, 00:25:35.542 { 00:25:35.542 "name": "BaseBdev2", 00:25:35.542 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 00:25:35.542 "is_configured": true, 00:25:35.542 "data_offset": 0, 00:25:35.542 "data_size": 65536 00:25:35.542 } 00:25:35.542 ] 00:25:35.542 }' 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.542 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:35.801 "name": "raid_bdev1", 00:25:35.801 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:35.801 "strip_size_kb": 0, 00:25:35.801 "state": "online", 00:25:35.801 "raid_level": "raid1", 00:25:35.801 "superblock": false, 00:25:35.801 "num_base_bdevs": 2, 00:25:35.801 
"num_base_bdevs_discovered": 2, 00:25:35.801 "num_base_bdevs_operational": 2, 00:25:35.801 "base_bdevs_list": [ 00:25:35.801 { 00:25:35.801 "name": "spare", 00:25:35.801 "uuid": "af87042d-f5ff-5e5c-af69-0ad6a78b8ebd", 00:25:35.801 "is_configured": true, 00:25:35.801 "data_offset": 0, 00:25:35.801 "data_size": 65536 00:25:35.801 }, 00:25:35.801 { 00:25:35.801 "name": "BaseBdev2", 00:25:35.801 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 00:25:35.801 "is_configured": true, 00:25:35.801 "data_offset": 0, 00:25:35.801 "data_size": 65536 00:25:35.801 } 00:25:35.801 ] 00:25:35.801 }' 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.801 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.060 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:36.060 "name": "raid_bdev1", 00:25:36.060 "uuid": "c6163296-e284-47b2-b1fc-1a12b5fdd157", 00:25:36.060 "strip_size_kb": 0, 00:25:36.060 "state": "online", 00:25:36.060 "raid_level": "raid1", 00:25:36.060 "superblock": false, 00:25:36.060 "num_base_bdevs": 2, 00:25:36.060 "num_base_bdevs_discovered": 2, 00:25:36.060 "num_base_bdevs_operational": 2, 00:25:36.060 "base_bdevs_list": [ 00:25:36.060 { 00:25:36.060 "name": "spare", 00:25:36.060 "uuid": "af87042d-f5ff-5e5c-af69-0ad6a78b8ebd", 00:25:36.060 "is_configured": true, 00:25:36.060 "data_offset": 0, 00:25:36.060 "data_size": 65536 00:25:36.060 }, 00:25:36.060 { 00:25:36.060 "name": "BaseBdev2", 00:25:36.060 "uuid": "652423bd-e741-5eb0-99d0-87955370a7c4", 00:25:36.060 "is_configured": true, 00:25:36.060 "data_offset": 0, 00:25:36.060 "data_size": 65536 00:25:36.060 } 00:25:36.060 ] 00:25:36.060 }' 00:25:36.060 00:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:36.060 00:09:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:36.625 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:36.626 [2024-07-25 00:09:32.464309] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:36.626 [2024-07-25 00:09:32.464352] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:36.884 00:25:36.884 Latency(us) 00:25:36.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.884 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:36.884 raid_bdev1 : 10.46 100.20 300.59 0.00 0.00 13302.92 268.10 110577.11 00:25:36.884 =================================================================================================================== 00:25:36.884 Total : 100.20 300.59 0.00 0.00 13302.92 268.10 110577.11 00:25:36.884 0 00:25:36.884 [2024-07-25 00:09:32.542350] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.884 [2024-07-25 00:09:32.542404] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:36.884 [2024-07-25 00:09:32.542501] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:36.884 [2024-07-25 00:09:32.542521] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:25:36.884 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:25:36.884 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.142 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:37.142 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:37.142 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:25:37.142 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:37.142 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:37.142 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:37.143 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:37.143 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:37.143 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:37.143 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:37.143 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:37.143 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:37.143 00:09:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:37.401 /dev/nbd0 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:37.401 1+0 records in 00:25:37.401 1+0 records out 00:25:37.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224631 s, 18.2 MB/s 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:37.401 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:25:37.660 /dev/nbd1 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:37.660 1+0 records in 00:25:37.660 1+0 records out 00:25:37.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229179 s, 17.9 MB/s 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:37.660 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # 
break 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:37.918 00:09:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:38.175 00:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:38.175 00:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:38.175 00:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:38.175 00:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:38.175 00:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:38.175 00:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 98216 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 98216 ']' 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 98216 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98216 00:25:38.433 killing process with pid 98216 00:25:38.433 Received shutdown signal, test time was about 12.006244 seconds 00:25:38.433 00:25:38.433 Latency(us) 00:25:38.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.433 =================================================================================================================== 00:25:38.433 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98216' 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 98216 00:25:38.433 [2024-07-25 00:09:34.071020] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:38.433 00:09:34 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@974 -- # wait 98216 00:25:38.433 [2024-07-25 00:09:34.244057] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:25:39.833 00:25:39.833 real 0m17.345s 00:25:39.833 user 0m25.212s 00:25:39.833 sys 0m2.076s 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:39.833 ************************************ 00:25:39.833 END TEST raid_rebuild_test_io 00:25:39.833 ************************************ 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:39.833 00:09:35 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:25:39.833 00:09:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:39.833 00:09:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:39.833 00:09:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:39.833 ************************************ 00:25:39.833 START TEST raid_rebuild_test_sb_io 00:25:39.833 ************************************ 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:25:39.833 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=98658 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 98658 /var/tmp/spdk-raid.sock 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:39.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 98658 ']' 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:39.834 00:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:39.834 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:39.834 Zero copy mechanism will not be used. 00:25:39.834 [2024-07-25 00:09:35.473659] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
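A quick gloss on the bdevperf invocation captured above, since its flags map onto the Job line printed later in the run: -t 60 runs the workload for 60 seconds, -w randrw -M 50 is a 50/50 random read/write mix, -o 3M -q 2 means 3 MiB I/Os at queue depth 2, and -z holds the app idle until a perform_tests RPC arrives on the -r socket. A minimal sketch of driving it by hand (not part of the captured output; paths mirror the trace, and the flag readings above are my own):

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock
    # Start bdevperf idle (-z), targeting raid_bdev1 and tracing the raid module (-L bdev_raid)
    "$spdk"/build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    # Once the raid stack is configured over the same socket, kick off the workload
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests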
00:25:39.834 [2024-07-25 00:09:35.473884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98658 ] 00:25:39.834 [2024-07-25 00:09:35.646088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.092 [2024-07-25 00:09:35.814541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.351 [2024-07-25 00:09:35.978518] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.608 00:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.608 00:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:25:40.608 00:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:40.608 00:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:40.867 BaseBdev1_malloc 00:25:40.867 00:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:41.125 [2024-07-25 00:09:36.947946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:41.125 [2024-07-25 00:09:36.948050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.125 [2024-07-25 00:09:36.948085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:25:41.125 [2024-07-25 00:09:36.948103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.125 [2024-07-25 00:09:36.950524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.125 [2024-07-25 00:09:36.950588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:41.125 BaseBdev1 00:25:41.125 00:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:41.125 00:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:41.383 BaseBdev2_malloc 00:25:41.383 00:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:41.642 [2024-07-25 00:09:37.438781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:41.642 [2024-07-25 00:09:37.438898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.642 [2024-07-25 00:09:37.438955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:25:41.642 [2024-07-25 00:09:37.439010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.642 [2024-07-25 00:09:37.441366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.642 [2024-07-25 00:09:37.441428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:41.642 BaseBdev2 00:25:41.642 00:09:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:41.901 spare_malloc 00:25:41.901 00:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:42.159 spare_delay 00:25:42.159 00:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:42.417 [2024-07-25 00:09:38.115834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:42.417 [2024-07-25 00:09:38.115920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:42.417 [2024-07-25 00:09:38.115955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:25:42.417 [2024-07-25 00:09:38.115973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:42.417 [2024-07-25 00:09:38.118620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:42.417 [2024-07-25 00:09:38.118671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:42.417 spare 00:25:42.417 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:25:42.675 [2024-07-25 00:09:38.335965] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:42.675 [2024-07-25 00:09:38.337949] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:42.675 [2024-07-25 00:09:38.338181] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:25:42.675 [2024-07-25 00:09:38.338202] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:42.675 [2024-07-25 00:09:38.338391] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:25:42.675 [2024-07-25 00:09:38.338927] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:25:42.675 [2024-07-25 00:09:38.338996] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:25:42.675 [2024-07-25 00:09:38.339192] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
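The trace above builds the "spare" device as a small stack — a malloc bdev wrapped by a delay bdev and then a passthru bdev — and creates the raid1 array over the two bases with an on-disk superblock (-s). A minimal sketch of the equivalent RPC calls, replayed standalone (same socket and arguments as the trace; my reading of the delay flags is zero added read latency and 100 ms average/p99 write latency, in microseconds):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 32 MB backing store with 512-byte blocks
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    # Delay layer: reads pass through untouched, writes get 100 ms latency
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare
    # raid1 over the two base bdevs, persisting a superblock (-s)
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1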
00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.675 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.933 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.933 "name": "raid_bdev1", 00:25:42.933 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:42.933 "strip_size_kb": 0, 00:25:42.933 "state": "online", 00:25:42.933 "raid_level": "raid1", 00:25:42.933 "superblock": true, 00:25:42.933 "num_base_bdevs": 2, 00:25:42.933 "num_base_bdevs_discovered": 2, 00:25:42.933 "num_base_bdevs_operational": 2, 00:25:42.933 "base_bdevs_list": [ 00:25:42.933 { 00:25:42.933 "name": "BaseBdev1", 00:25:42.933 "uuid": "7ce67c5a-9974-5a30-ad5e-93d98f8c10d6", 00:25:42.933 "is_configured": true, 00:25:42.933 "data_offset": 2048, 00:25:42.933 "data_size": 63488 00:25:42.933 }, 00:25:42.933 { 00:25:42.933 "name": "BaseBdev2", 00:25:42.933 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:42.933 "is_configured": true, 00:25:42.933 "data_offset": 2048, 00:25:42.933 "data_size": 63488 00:25:42.933 } 00:25:42.933 ] 00:25:42.933 }' 00:25:42.933 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.933 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:43.190 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:25:43.190 00:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:43.448 [2024-07-25 00:09:39.112430] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:43.448 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:25:43.448 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.448 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:43.707 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:25:43.707 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:25:43.707 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:43.707 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:43.707 [2024-07-25 00:09:39.498714] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:25:43.707 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:43.707 Zero copy mechanism will not be used. 00:25:43.707 Running I/O for 60 seconds... 
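With I/O already running, the harness pulls BaseBdev1 out of the array (the bdev_raid_remove_base_bdev call above) and then expects raid_bdev1 to stay online in degraded mode with a single discovered member, which the verify helpers below confirm. A minimal sketch of that check, reusing the jq filters visible in the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_remove_base_bdev BaseBdev1
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # raid1 must survive losing one mirror: still online, one base bdev left
    [[ $(jq -r '.state' <<<"$info") == online ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 1 ]]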
00:25:43.965 [2024-07-25 00:09:39.600792] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:43.965 [2024-07-25 00:09:39.607956] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005ba0 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.965 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.224 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:44.224 "name": "raid_bdev1", 00:25:44.224 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:44.224 "strip_size_kb": 0, 00:25:44.224 "state": "online", 00:25:44.224 "raid_level": "raid1", 00:25:44.224 "superblock": true, 00:25:44.224 "num_base_bdevs": 2, 00:25:44.224 "num_base_bdevs_discovered": 1, 00:25:44.224 "num_base_bdevs_operational": 1, 00:25:44.224 "base_bdevs_list": [ 00:25:44.224 { 00:25:44.224 "name": null, 00:25:44.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.224 "is_configured": false, 00:25:44.224 "data_offset": 2048, 00:25:44.224 "data_size": 63488 00:25:44.224 }, 00:25:44.224 { 00:25:44.224 "name": "BaseBdev2", 00:25:44.224 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:44.224 "is_configured": true, 00:25:44.224 "data_offset": 2048, 00:25:44.224 "data_size": 63488 00:25:44.224 } 00:25:44.224 ] 00:25:44.224 }' 00:25:44.224 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:44.224 00:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:44.482 00:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:44.739 [2024-07-25 00:09:40.410743] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:44.740 [2024-07-25 00:09:40.442830] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005c70 00:25:44.740 [2024-07-25 00:09:40.444954] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:44.740 00:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:44.740 
[2024-07-25 00:09:40.569592] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:44.740 [2024-07-25 00:09:40.570156] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:44.997 [2024-07-25 00:09:40.788673] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:44.997 [2024-07-25 00:09:40.789005] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:45.563 [2024-07-25 00:09:41.141185] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:45.563 [2024-07-25 00:09:41.352222] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:45.821 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:45.821 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:45.821 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:45.821 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:45.821 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:45.821 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.821 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.079 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:46.079 "name": "raid_bdev1", 00:25:46.079 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:46.079 "strip_size_kb": 0, 00:25:46.079 "state": "online", 00:25:46.079 "raid_level": "raid1", 00:25:46.079 "superblock": true, 00:25:46.079 "num_base_bdevs": 2, 00:25:46.079 "num_base_bdevs_discovered": 2, 00:25:46.079 "num_base_bdevs_operational": 2, 00:25:46.079 "process": { 00:25:46.079 "type": "rebuild", 00:25:46.079 "target": "spare", 00:25:46.079 "progress": { 00:25:46.079 "blocks": 12288, 00:25:46.079 "percent": 19 00:25:46.079 } 00:25:46.079 }, 00:25:46.079 "base_bdevs_list": [ 00:25:46.079 { 00:25:46.079 "name": "spare", 00:25:46.079 "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5", 00:25:46.079 "is_configured": true, 00:25:46.079 "data_offset": 2048, 00:25:46.079 "data_size": 63488 00:25:46.079 }, 00:25:46.079 { 00:25:46.079 "name": "BaseBdev2", 00:25:46.079 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:46.079 "is_configured": true, 00:25:46.079 "data_offset": 2048, 00:25:46.079 "data_size": 63488 00:25:46.079 } 00:25:46.079 ] 00:25:46.079 }' 00:25:46.079 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:46.079 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:46.079 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:46.079 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:46.079 00:09:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:46.079 [2024-07-25 00:09:41.821706] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:46.337 [2024-07-25 00:09:41.989477] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:46.337 [2024-07-25 00:09:42.117915] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:46.337 [2024-07-25 00:09:42.119787] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:46.337 [2024-07-25 00:09:42.119867] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:46.337 [2024-07-25 00:09:42.119890] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:46.337 [2024-07-25 00:09:42.153346] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005ba0 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.337 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.595 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.595 "name": "raid_bdev1", 00:25:46.595 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:46.595 "strip_size_kb": 0, 00:25:46.595 "state": "online", 00:25:46.595 "raid_level": "raid1", 00:25:46.595 "superblock": true, 00:25:46.595 "num_base_bdevs": 2, 00:25:46.595 "num_base_bdevs_discovered": 1, 00:25:46.595 "num_base_bdevs_operational": 1, 00:25:46.595 "base_bdevs_list": [ 00:25:46.595 { 00:25:46.595 "name": null, 00:25:46.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.595 "is_configured": false, 00:25:46.595 "data_offset": 2048, 00:25:46.595 "data_size": 63488 00:25:46.595 }, 00:25:46.595 { 00:25:46.595 "name": "BaseBdev2", 00:25:46.595 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:46.595 "is_configured": true, 00:25:46.595 "data_offset": 2048, 00:25:46.595 "data_size": 63488 00:25:46.595 } 00:25:46.595 ] 00:25:46.595 }' 00:25:46.595 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.595 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:47.162 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:47.162 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:47.162 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:47.162 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:47.162 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:47.162 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.162 00:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.422 00:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:47.422 "name": "raid_bdev1", 00:25:47.422 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:47.422 "strip_size_kb": 0, 00:25:47.422 "state": "online", 00:25:47.422 "raid_level": "raid1", 00:25:47.422 "superblock": true, 00:25:47.422 "num_base_bdevs": 2, 00:25:47.422 "num_base_bdevs_discovered": 1, 00:25:47.422 "num_base_bdevs_operational": 1, 00:25:47.422 "base_bdevs_list": [ 00:25:47.422 { 00:25:47.422 "name": null, 00:25:47.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.422 "is_configured": false, 00:25:47.422 "data_offset": 2048, 00:25:47.422 "data_size": 63488 00:25:47.422 }, 00:25:47.422 { 00:25:47.422 "name": "BaseBdev2", 00:25:47.422 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:47.422 "is_configured": true, 00:25:47.422 "data_offset": 2048, 00:25:47.422 "data_size": 63488 00:25:47.422 } 00:25:47.422 ] 00:25:47.422 }' 00:25:47.422 00:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:47.422 00:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:47.422 00:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:47.422 00:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:47.422 00:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:47.681 [2024-07-25 00:09:43.338693] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:47.681 00:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:47.681 [2024-07-25 00:09:43.400661] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 00:25:47.681 [2024-07-25 00:09:43.402842] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:47.681 [2024-07-25 00:09:43.512885] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:47.681 [2024-07-25 00:09:43.513393] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:47.939 [2024-07-25 00:09:43.736968] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:47.939 [2024-07-25 00:09:43.737252] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:48.505 [2024-07-25 00:09:44.073611] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:48.505 [2024-07-25 00:09:44.081181] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:48.505 [2024-07-25 00:09:44.313094] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:48.764 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:48.764 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:48.764 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:48.764 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:48.764 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:48.764 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.764 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.764 [2024-07-25 00:09:44.526713] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:49.022 "name": "raid_bdev1", 00:25:49.022 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:49.022 "strip_size_kb": 0, 00:25:49.022 "state": "online", 00:25:49.022 "raid_level": "raid1", 00:25:49.022 "superblock": true, 00:25:49.022 "num_base_bdevs": 2, 00:25:49.022 "num_base_bdevs_discovered": 2, 00:25:49.022 "num_base_bdevs_operational": 2, 00:25:49.022 "process": { 00:25:49.022 "type": "rebuild", 00:25:49.022 "target": "spare", 00:25:49.022 "progress": { 00:25:49.022 "blocks": 14336, 00:25:49.022 "percent": 22 00:25:49.022 } 00:25:49.022 }, 00:25:49.022 "base_bdevs_list": [ 00:25:49.022 { 00:25:49.022 "name": "spare", 00:25:49.022 "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5", 00:25:49.022 "is_configured": true, 00:25:49.022 "data_offset": 2048, 00:25:49.022 "data_size": 63488 00:25:49.022 }, 00:25:49.022 { 00:25:49.022 "name": "BaseBdev2", 00:25:49.022 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:49.022 "is_configured": true, 00:25:49.022 "data_offset": 2048, 00:25:49.022 "data_size": 63488 00:25:49.022 } 00:25:49.022 ] 00:25:49.022 }' 00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:49.022 [2024-07-25 00:09:44.661409] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == 
\s\p\a\r\e ]]
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']'
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']'
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']'
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']'
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=774
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:49.022 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:49.280 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:25:49.280   "name": "raid_bdev1",
00:25:49.280   "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13",
00:25:49.280   "strip_size_kb": 0,
00:25:49.280   "state": "online",
00:25:49.280   "raid_level": "raid1",
00:25:49.280   "superblock": true,
00:25:49.280   "num_base_bdevs": 2,
00:25:49.280   "num_base_bdevs_discovered": 2,
00:25:49.280   "num_base_bdevs_operational": 2,
00:25:49.280   "process": {
00:25:49.280     "type": "rebuild",
00:25:49.280     "target": "spare",
00:25:49.280     "progress": {
00:25:49.280       "blocks": 18432,
00:25:49.280       "percent": 29
00:25:49.280     }
00:25:49.280   },
00:25:49.280   "base_bdevs_list": [
00:25:49.280     {
00:25:49.280       "name": "spare",
00:25:49.280       "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5",
00:25:49.280       "is_configured": true,
00:25:49.280       "data_offset": 2048,
00:25:49.280       "data_size": 63488
00:25:49.280     },
00:25:49.280     {
00:25:49.280       "name": "BaseBdev2",
00:25:49.280       "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58",
00:25:49.280       "is_configured": true,
00:25:49.280       "data_offset": 2048,
00:25:49.280       "data_size": 63488
00:25:49.280     }
00:25:49.280   ]
00:25:49.280 }'
00:25:49.280 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:25:49.280 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:25:49.280 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:25:49.280 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:25:49.280 00:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1
00:25:49.280 [2024-07-25 00:09:45.132705] 
bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:49.280 [2024-07-25 00:09:45.133036] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:49.846 [2024-07-25 00:09:45.446695] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:49.846 [2024-07-25 00:09:45.573770] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:25:50.104 00:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:50.104 00:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.363 00:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:50.363 00:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:50.363 00:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:50.363 00:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:50.363 00:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.363 00:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.363 [2024-07-25 00:09:46.034634] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:50.363 00:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:50.363 "name": "raid_bdev1", 00:25:50.363 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:50.363 "strip_size_kb": 0, 00:25:50.363 "state": "online", 00:25:50.363 "raid_level": "raid1", 00:25:50.363 "superblock": true, 00:25:50.363 "num_base_bdevs": 2, 00:25:50.363 "num_base_bdevs_discovered": 2, 00:25:50.363 "num_base_bdevs_operational": 2, 00:25:50.363 "process": { 00:25:50.363 "type": "rebuild", 00:25:50.363 "target": "spare", 00:25:50.363 "progress": { 00:25:50.363 "blocks": 36864, 00:25:50.363 "percent": 58 00:25:50.363 } 00:25:50.363 }, 00:25:50.363 "base_bdevs_list": [ 00:25:50.363 { 00:25:50.363 "name": "spare", 00:25:50.363 "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5", 00:25:50.363 "is_configured": true, 00:25:50.363 "data_offset": 2048, 00:25:50.363 "data_size": 63488 00:25:50.363 }, 00:25:50.363 { 00:25:50.363 "name": "BaseBdev2", 00:25:50.363 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:50.363 "is_configured": true, 00:25:50.363 "data_offset": 2048, 00:25:50.363 "data_size": 63488 00:25:50.363 } 00:25:50.363 ] 00:25:50.363 }' 00:25:50.621 00:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:50.621 00:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:50.621 00:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:50.621 00:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:50.621 00:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:51.188 [2024-07-25 
00:09:46.945293] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:25:51.188 [2024-07-25 00:09:47.054492] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:51.188 [2024-07-25 00:09:47.054812] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:51.446 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:51.446 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:51.446 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:51.446 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:51.446 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:51.446 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:51.446 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.446 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.705 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:51.705 "name": "raid_bdev1", 00:25:51.705 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:51.705 "strip_size_kb": 0, 00:25:51.705 "state": "online", 00:25:51.705 "raid_level": "raid1", 00:25:51.705 "superblock": true, 00:25:51.705 "num_base_bdevs": 2, 00:25:51.705 "num_base_bdevs_discovered": 2, 00:25:51.705 "num_base_bdevs_operational": 2, 00:25:51.705 "process": { 00:25:51.705 "type": "rebuild", 00:25:51.705 "target": "spare", 00:25:51.705 "progress": { 00:25:51.705 "blocks": 59392, 00:25:51.705 "percent": 93 00:25:51.705 } 00:25:51.705 }, 00:25:51.705 "base_bdevs_list": [ 00:25:51.705 { 00:25:51.705 "name": "spare", 00:25:51.705 "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5", 00:25:51.705 "is_configured": true, 00:25:51.705 "data_offset": 2048, 00:25:51.705 "data_size": 63488 00:25:51.705 }, 00:25:51.705 { 00:25:51.705 "name": "BaseBdev2", 00:25:51.705 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:51.705 "is_configured": true, 00:25:51.705 "data_offset": 2048, 00:25:51.705 "data_size": 63488 00:25:51.705 } 00:25:51.705 ] 00:25:51.705 }' 00:25:51.705 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:51.705 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:51.705 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:51.705 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:51.705 00:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:51.964 [2024-07-25 00:09:47.702856] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:51.964 [2024-07-25 00:09:47.809417] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:51.964 [2024-07-25 00:09:47.811441] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:52.900 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:52.900 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:52.900 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:52.900 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:52.900 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:52.900 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:52.900 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.900 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.900 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:52.900 "name": "raid_bdev1", 00:25:52.900 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:52.900 "strip_size_kb": 0, 00:25:52.900 "state": "online", 00:25:52.900 "raid_level": "raid1", 00:25:52.900 "superblock": true, 00:25:52.900 "num_base_bdevs": 2, 00:25:52.900 "num_base_bdevs_discovered": 2, 00:25:52.900 "num_base_bdevs_operational": 2, 00:25:52.900 "base_bdevs_list": [ 00:25:52.900 { 00:25:52.900 "name": "spare", 00:25:52.900 "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5", 00:25:52.900 "is_configured": true, 00:25:52.900 "data_offset": 2048, 00:25:52.900 "data_size": 63488 00:25:52.900 }, 00:25:52.900 { 00:25:52.900 "name": "BaseBdev2", 00:25:52.900 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:52.900 "is_configured": true, 00:25:52.900 "data_offset": 2048, 00:25:52.900 "data_size": 63488 00:25:52.900 } 00:25:52.900 ] 00:25:52.900 }' 00:25:52.900 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:53.159 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:53.159 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:53.159 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:53.159 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:25:53.159 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:53.159 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:53.159 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:53.159 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:53.159 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:53.159 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.159 00:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:25:53.417 "name": "raid_bdev1", 00:25:53.417 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:53.417 "strip_size_kb": 0, 00:25:53.417 "state": "online", 00:25:53.417 "raid_level": "raid1", 00:25:53.417 "superblock": true, 00:25:53.417 "num_base_bdevs": 2, 00:25:53.417 "num_base_bdevs_discovered": 2, 00:25:53.417 "num_base_bdevs_operational": 2, 00:25:53.417 "base_bdevs_list": [ 00:25:53.417 { 00:25:53.417 "name": "spare", 00:25:53.417 "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5", 00:25:53.417 "is_configured": true, 00:25:53.417 "data_offset": 2048, 00:25:53.417 "data_size": 63488 00:25:53.417 }, 00:25:53.417 { 00:25:53.417 "name": "BaseBdev2", 00:25:53.417 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:53.417 "is_configured": true, 00:25:53.417 "data_offset": 2048, 00:25:53.417 "data_size": 63488 00:25:53.417 } 00:25:53.417 ] 00:25:53.417 }' 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.417 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.675 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.675 "name": "raid_bdev1", 00:25:53.675 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:53.675 "strip_size_kb": 0, 00:25:53.675 "state": "online", 00:25:53.675 "raid_level": "raid1", 00:25:53.675 "superblock": true, 00:25:53.675 "num_base_bdevs": 2, 00:25:53.675 "num_base_bdevs_discovered": 2, 00:25:53.675 "num_base_bdevs_operational": 2, 00:25:53.675 "base_bdevs_list": [ 00:25:53.675 { 00:25:53.675 "name": "spare", 00:25:53.675 "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5", 00:25:53.675 "is_configured": true, 00:25:53.675 "data_offset": 2048, 00:25:53.675 "data_size": 63488 00:25:53.675 }, 00:25:53.675 { 00:25:53.675 "name": "BaseBdev2", 
00:25:53.675 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:53.675 "is_configured": true, 00:25:53.675 "data_offset": 2048, 00:25:53.675 "data_size": 63488 00:25:53.675 } 00:25:53.675 ] 00:25:53.675 }' 00:25:53.675 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.675 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:53.934 00:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:54.192 [2024-07-25 00:09:49.928895] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:54.192 [2024-07-25 00:09:49.928954] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:54.192 00:25:54.192 Latency(us) 00:25:54.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.192 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:54.192 raid_bdev1 : 10.50 99.24 297.72 0.00 0.00 13557.54 262.52 116296.61 00:25:54.192 =================================================================================================================== 00:25:54.192 Total : 99.24 297.72 0.00 0.00 13557.54 262.52 116296.61 00:25:54.192 [2024-07-25 00:09:50.020336] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:54.192 [2024-07-25 00:09:50.020415] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:54.192 0 00:25:54.192 [2024-07-25 00:09:50.020523] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:54.192 [2024-07-25 00:09:50.020546] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:25:54.192 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.192 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:54.451 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:54.709 /dev/nbd0 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:54.709 1+0 records in 00:25:54.709 1+0 records out 00:25:54.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265046 s, 15.5 MB/s 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:54.709 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:54.710 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:25:54.710 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:25:54.710 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:54.710 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:25:54.710 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:54.710 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:54.710 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:54.710 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:54.710 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:54.710 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
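For readers reconstructing this step from the trace: the harness is exporting the array's base bdevs over NBD so their contents can be compared from the host side; /dev/nbd0 ("spare") is attached at this point, and the trace continues just below with /dev/nbd1 (BaseBdev2) and a cmp of the two. A minimal standalone sketch of the same check, using only RPC methods that appear in this trace (script and socket paths as above; the error handling is simplified and is not the real nbd_common.sh helper):

  #!/usr/bin/env bash
  # Hedged sketch: export two SPDK bdevs over NBD and compare their data regions.
  set -euo pipefail
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk BaseBdev2 /dev/nbd1
  # cmp -i skips the first 1048576 bytes on both devices: data_offset is
  # 2048 blocks x 512 B = 1 MiB of superblock/metadata that may differ,
  # while the data region must be byte-identical after a clean rebuild.
  cmp -i 1048576 /dev/nbd0 /dev/nbd1
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1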
00:25:54.710 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:25:54.968 /dev/nbd1 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:54.968 1+0 records in 00:25:54.968 1+0 records out 00:25:54.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028306 s, 14.5 MB/s 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:54.968 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:55.236 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:55.236 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:55.236 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:55.236 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:55.236 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:55.236 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.236 00:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:55.525 00:09:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.525 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:55.783 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:55.783 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:55.783 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:55.783 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.783 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.783 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:55.783 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:55.783 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.783 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:25:55.783 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:56.041 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:56.300 [2024-07-25 00:09:51.940610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:56.300 [2024-07-25 00:09:51.940696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.300 [2024-07-25 00:09:51.940730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:25:56.300 [2024-07-25 00:09:51.940746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.300 [2024-07-25 00:09:51.943339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:56.300 [2024-07-25 00:09:51.943413] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:56.300 [2024-07-25 00:09:51.943525] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:56.300 [2024-07-25 00:09:51.943592] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:56.300 [2024-07-25 00:09:51.943773] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:56.300 spare 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.300 00:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.300 [2024-07-25 00:09:52.043909] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:25:56.300 [2024-07-25 00:09:52.043949] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:56.300 [2024-07-25 00:09:52.044158] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002abf0 00:25:56.300 [2024-07-25 00:09:52.044616] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:25:56.300 [2024-07-25 00:09:52.044657] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:25:56.300 [2024-07-25 00:09:52.044854] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:56.558 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.558 "name": "raid_bdev1", 00:25:56.558 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:56.558 "strip_size_kb": 0, 00:25:56.558 "state": "online", 00:25:56.558 "raid_level": "raid1", 00:25:56.558 "superblock": true, 00:25:56.558 "num_base_bdevs": 2, 00:25:56.558 "num_base_bdevs_discovered": 2, 00:25:56.558 "num_base_bdevs_operational": 2, 00:25:56.558 "base_bdevs_list": [ 00:25:56.558 { 00:25:56.558 "name": "spare", 00:25:56.558 "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5", 00:25:56.558 "is_configured": true, 00:25:56.558 "data_offset": 2048, 00:25:56.558 "data_size": 63488 00:25:56.558 }, 00:25:56.558 { 00:25:56.558 "name": "BaseBdev2", 00:25:56.558 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:56.558 "is_configured": true, 00:25:56.558 
"data_offset": 2048, 00:25:56.558 "data_size": 63488 00:25:56.558 } 00:25:56.558 ] 00:25:56.558 }' 00:25:56.558 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.558 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:56.816 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:56.816 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:56.816 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:56.816 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:56.816 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:56.816 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.816 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.075 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:57.075 "name": "raid_bdev1", 00:25:57.075 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:57.075 "strip_size_kb": 0, 00:25:57.075 "state": "online", 00:25:57.075 "raid_level": "raid1", 00:25:57.075 "superblock": true, 00:25:57.075 "num_base_bdevs": 2, 00:25:57.075 "num_base_bdevs_discovered": 2, 00:25:57.075 "num_base_bdevs_operational": 2, 00:25:57.075 "base_bdevs_list": [ 00:25:57.075 { 00:25:57.075 "name": "spare", 00:25:57.075 "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5", 00:25:57.075 "is_configured": true, 00:25:57.075 "data_offset": 2048, 00:25:57.075 "data_size": 63488 00:25:57.075 }, 00:25:57.075 { 00:25:57.075 "name": "BaseBdev2", 00:25:57.075 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:57.075 "is_configured": true, 00:25:57.075 "data_offset": 2048, 00:25:57.075 "data_size": 63488 00:25:57.075 } 00:25:57.075 ] 00:25:57.075 }' 00:25:57.075 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:57.075 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:57.075 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:57.075 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:57.075 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.075 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:57.333 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:25:57.333 00:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:57.592 [2024-07-25 00:09:53.245429] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.592 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.849 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.849 "name": "raid_bdev1", 00:25:57.849 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:57.849 "strip_size_kb": 0, 00:25:57.849 "state": "online", 00:25:57.849 "raid_level": "raid1", 00:25:57.849 "superblock": true, 00:25:57.849 "num_base_bdevs": 2, 00:25:57.849 "num_base_bdevs_discovered": 1, 00:25:57.849 "num_base_bdevs_operational": 1, 00:25:57.849 "base_bdevs_list": [ 00:25:57.849 { 00:25:57.849 "name": null, 00:25:57.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.849 "is_configured": false, 00:25:57.849 "data_offset": 2048, 00:25:57.849 "data_size": 63488 00:25:57.849 }, 00:25:57.849 { 00:25:57.849 "name": "BaseBdev2", 00:25:57.849 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:57.849 "is_configured": true, 00:25:57.849 "data_offset": 2048, 00:25:57.849 "data_size": 63488 00:25:57.849 } 00:25:57.849 ] 00:25:57.849 }' 00:25:57.849 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.849 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:58.106 00:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:58.364 [2024-07-25 00:09:54.057748] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:58.364 [2024-07-25 00:09:54.058006] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:58.364 [2024-07-25 00:09:54.058028] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
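The messages just above record the interesting part of this test: the superblock on the returning base bdev carries sequence number 4 while the array is at 5, so the raid module re-adds "spare" as a rebuild target instead of trusting its stale contents. The remove/re-add cycle can be driven in isolation with the same RPCs; a hedged sketch reconstructed from the commands visible in this trace (not copied from bdev_raid.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Drop a base bdev; a raid1 with a superblock stays online but degraded.
  "$rpc" -s "$sock" bdev_raid_remove_base_bdev spare
  # Give the bdev back; its stale superblock marks it for a resync.
  "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare
  # Poll until the rebuild process disappears from the bdev info.
  until [ "$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
             jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')" = none ]; do
    sleep 1
  done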
00:25:58.364 [2024-07-25 00:09:54.058069] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:58.364 [2024-07-25 00:09:54.070565] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002acc0 00:25:58.364 [2024-07-25 00:09:54.072656] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:58.364 00:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:25:59.298 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:59.298 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:59.298 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:59.298 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:59.298 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:59.298 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.298 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.556 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:59.556 "name": "raid_bdev1", 00:25:59.556 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:25:59.556 "strip_size_kb": 0, 00:25:59.556 "state": "online", 00:25:59.556 "raid_level": "raid1", 00:25:59.556 "superblock": true, 00:25:59.556 "num_base_bdevs": 2, 00:25:59.556 "num_base_bdevs_discovered": 2, 00:25:59.556 "num_base_bdevs_operational": 2, 00:25:59.556 "process": { 00:25:59.556 "type": "rebuild", 00:25:59.556 "target": "spare", 00:25:59.556 "progress": { 00:25:59.556 "blocks": 24576, 00:25:59.556 "percent": 38 00:25:59.556 } 00:25:59.556 }, 00:25:59.556 "base_bdevs_list": [ 00:25:59.556 { 00:25:59.556 "name": "spare", 00:25:59.556 "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5", 00:25:59.556 "is_configured": true, 00:25:59.556 "data_offset": 2048, 00:25:59.556 "data_size": 63488 00:25:59.556 }, 00:25:59.556 { 00:25:59.556 "name": "BaseBdev2", 00:25:59.556 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:25:59.556 "is_configured": true, 00:25:59.556 "data_offset": 2048, 00:25:59.556 "data_size": 63488 00:25:59.556 } 00:25:59.556 ] 00:25:59.556 }' 00:25:59.556 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:59.556 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:59.556 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:59.556 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:59.556 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:59.815 [2024-07-25 00:09:55.539607] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:59.815 [2024-07-25 00:09:55.580474] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:59.815 [2024-07-25 00:09:55.580583] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:25:59.815 [2024-07-25 00:09:55.580611] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:59.815 [2024-07-25 00:09:55.580622] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.815 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.073 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.073 "name": "raid_bdev1", 00:26:00.073 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:26:00.073 "strip_size_kb": 0, 00:26:00.073 "state": "online", 00:26:00.073 "raid_level": "raid1", 00:26:00.073 "superblock": true, 00:26:00.073 "num_base_bdevs": 2, 00:26:00.073 "num_base_bdevs_discovered": 1, 00:26:00.073 "num_base_bdevs_operational": 1, 00:26:00.073 "base_bdevs_list": [ 00:26:00.073 { 00:26:00.073 "name": null, 00:26:00.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.073 "is_configured": false, 00:26:00.073 "data_offset": 2048, 00:26:00.073 "data_size": 63488 00:26:00.073 }, 00:26:00.073 { 00:26:00.073 "name": "BaseBdev2", 00:26:00.073 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:26:00.073 "is_configured": true, 00:26:00.073 "data_offset": 2048, 00:26:00.073 "data_size": 63488 00:26:00.073 } 00:26:00.073 ] 00:26:00.073 }' 00:26:00.073 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.073 00:09:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.330 00:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:00.589 [2024-07-25 00:09:56.430210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:00.589 [2024-07-25 00:09:56.430573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.589 [2024-07-25 00:09:56.430623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:26:00.589 [2024-07-25 00:09:56.430639] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.589 [2024-07-25 00:09:56.431362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.589 [2024-07-25 00:09:56.431401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:00.589 [2024-07-25 00:09:56.431510] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:00.589 [2024-07-25 00:09:56.431526] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:00.589 [2024-07-25 00:09:56.431542] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:00.589 [2024-07-25 00:09:56.431566] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:00.589 spare 00:26:00.589 [2024-07-25 00:09:56.443938] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002ad90 00:26:00.589 [2024-07-25 00:09:56.445928] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:00.847 00:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:26:01.783 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:01.783 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:01.783 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:01.783 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:01.783 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:01.783 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.783 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.041 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:02.041 "name": "raid_bdev1", 00:26:02.041 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:26:02.041 "strip_size_kb": 0, 00:26:02.041 "state": "online", 00:26:02.041 "raid_level": "raid1", 00:26:02.041 "superblock": true, 00:26:02.041 "num_base_bdevs": 2, 00:26:02.041 "num_base_bdevs_discovered": 2, 00:26:02.041 "num_base_bdevs_operational": 2, 00:26:02.041 "process": { 00:26:02.041 "type": "rebuild", 00:26:02.041 "target": "spare", 00:26:02.041 "progress": { 00:26:02.041 "blocks": 24576, 00:26:02.041 "percent": 38 00:26:02.041 } 00:26:02.041 }, 00:26:02.041 "base_bdevs_list": [ 00:26:02.041 { 00:26:02.041 "name": "spare", 00:26:02.041 "uuid": "08e81dbb-1f20-5452-b69b-69a169759ed5", 00:26:02.041 "is_configured": true, 00:26:02.041 "data_offset": 2048, 00:26:02.041 "data_size": 63488 00:26:02.041 }, 00:26:02.041 { 00:26:02.041 "name": "BaseBdev2", 00:26:02.041 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:26:02.041 "is_configured": true, 00:26:02.041 "data_offset": 2048, 00:26:02.041 "data_size": 63488 00:26:02.041 } 00:26:02.041 ] 00:26:02.041 }' 00:26:02.041 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:02.041 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
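The jq probe above and the target probe just below are the two halves of the harness's verify_raid_bdev_process idiom, which this log repeats at every stage of the run. A self-contained reconstruction of it, inferred from the xtrace rather than copied from test/bdev/bdev_raid.sh:

  # Hedged sketch of the helper whose expansion is traced here.
  verify_raid_bdev_process() {
    local name=$1 want_type=$2 want_target=$3 info
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
             bdev_raid_get_bdevs all | jq --arg n "$name" '.[] | select(.name == $n)')
    # '// "none"' maps a missing .process to the string "none", so the
    # same check works before, during, and after a rebuild.
    [ "$(jq -r '.process.type // "none"' <<<"$info")" = "$want_type" ] &&
      [ "$(jq -r '.process.target // "none"' <<<"$info")" = "$want_target" ]
  }
  verify_raid_bdev_process raid_bdev1 rebuild spare   # while the resync runs
  verify_raid_bdev_process raid_bdev1 none none       # once it has finished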
00:26:02.041 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:02.041 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:02.041 00:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:02.299 [2024-07-25 00:09:58.012375] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:02.299 [2024-07-25 00:09:58.054395] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:02.299 [2024-07-25 00:09:58.054478] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.299 [2024-07-25 00:09:58.054499] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:02.299 [2024-07-25 00:09:58.054516] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.299 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.557 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:02.557 "name": "raid_bdev1", 00:26:02.557 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:26:02.557 "strip_size_kb": 0, 00:26:02.557 "state": "online", 00:26:02.557 "raid_level": "raid1", 00:26:02.557 "superblock": true, 00:26:02.557 "num_base_bdevs": 2, 00:26:02.557 "num_base_bdevs_discovered": 1, 00:26:02.557 "num_base_bdevs_operational": 1, 00:26:02.557 "base_bdevs_list": [ 00:26:02.557 { 00:26:02.557 "name": null, 00:26:02.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.557 "is_configured": false, 00:26:02.557 "data_offset": 2048, 00:26:02.557 "data_size": 63488 00:26:02.557 }, 00:26:02.557 { 00:26:02.557 "name": "BaseBdev2", 00:26:02.557 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:26:02.557 "is_configured": true, 00:26:02.557 "data_offset": 2048, 00:26:02.557 "data_size": 63488 00:26:02.557 } 00:26:02.557 ] 00:26:02.557 }' 00:26:02.557 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:26:02.557 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:02.816 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:02.816 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:02.816 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:02.816 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:02.816 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:02.816 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.816 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.074 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:03.074 "name": "raid_bdev1", 00:26:03.074 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:26:03.074 "strip_size_kb": 0, 00:26:03.074 "state": "online", 00:26:03.074 "raid_level": "raid1", 00:26:03.074 "superblock": true, 00:26:03.074 "num_base_bdevs": 2, 00:26:03.074 "num_base_bdevs_discovered": 1, 00:26:03.074 "num_base_bdevs_operational": 1, 00:26:03.074 "base_bdevs_list": [ 00:26:03.074 { 00:26:03.074 "name": null, 00:26:03.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.074 "is_configured": false, 00:26:03.074 "data_offset": 2048, 00:26:03.074 "data_size": 63488 00:26:03.074 }, 00:26:03.074 { 00:26:03.074 "name": "BaseBdev2", 00:26:03.074 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:26:03.074 "is_configured": true, 00:26:03.074 "data_offset": 2048, 00:26:03.074 "data_size": 63488 00:26:03.074 } 00:26:03.074 ] 00:26:03.074 }' 00:26:03.074 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:03.074 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:03.074 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:03.074 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:03.074 00:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:03.384 00:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:03.641 [2024-07-25 00:09:59.381615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:03.641 [2024-07-25 00:09:59.381729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.641 [2024-07-25 00:09:59.381782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:26:03.641 [2024-07-25 00:09:59.381800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.641 [2024-07-25 00:09:59.382387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.641 [2024-07-25 00:09:59.382423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:26:03.641 [2024-07-25 00:09:59.382524] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:03.641 [2024-07-25 00:09:59.382545] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:03.641 [2024-07-25 00:09:59.382556] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:03.641 BaseBdev1 00:26:03.641 00:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.577 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.835 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.835 "name": "raid_bdev1", 00:26:04.835 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:26:04.835 "strip_size_kb": 0, 00:26:04.835 "state": "online", 00:26:04.835 "raid_level": "raid1", 00:26:04.835 "superblock": true, 00:26:04.835 "num_base_bdevs": 2, 00:26:04.835 "num_base_bdevs_discovered": 1, 00:26:04.835 "num_base_bdevs_operational": 1, 00:26:04.835 "base_bdevs_list": [ 00:26:04.835 { 00:26:04.835 "name": null, 00:26:04.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.835 "is_configured": false, 00:26:04.835 "data_offset": 2048, 00:26:04.835 "data_size": 63488 00:26:04.835 }, 00:26:04.835 { 00:26:04.835 "name": "BaseBdev2", 00:26:04.835 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:26:04.835 "is_configured": true, 00:26:04.835 "data_offset": 2048, 00:26:04.835 "data_size": 63488 00:26:04.835 } 00:26:04.836 ] 00:26:04.836 }' 00:26:04.836 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.836 00:10:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:05.401 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:05.401 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:05.401 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:26:05.401 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:05.401 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:05.401 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.401 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:05.659 "name": "raid_bdev1", 00:26:05.659 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:26:05.659 "strip_size_kb": 0, 00:26:05.659 "state": "online", 00:26:05.659 "raid_level": "raid1", 00:26:05.659 "superblock": true, 00:26:05.659 "num_base_bdevs": 2, 00:26:05.659 "num_base_bdevs_discovered": 1, 00:26:05.659 "num_base_bdevs_operational": 1, 00:26:05.659 "base_bdevs_list": [ 00:26:05.659 { 00:26:05.659 "name": null, 00:26:05.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.659 "is_configured": false, 00:26:05.659 "data_offset": 2048, 00:26:05.659 "data_size": 63488 00:26:05.659 }, 00:26:05.659 { 00:26:05.659 "name": "BaseBdev2", 00:26:05.659 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:26:05.659 "is_configured": true, 00:26:05.659 "data_offset": 2048, 00:26:05.659 "data_size": 63488 00:26:05.659 } 00:26:05.659 ] 00:26:05.659 }' 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:05.659 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:05.659 [2024-07-25 00:10:01.510563] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:05.659 [2024-07-25 00:10:01.510744] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:05.659 [2024-07-25 00:10:01.510767] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:05.659 request: 00:26:05.659 { 00:26:05.659 "base_bdev": "BaseBdev1", 00:26:05.659 "raid_bdev": "raid_bdev1", 00:26:05.659 "method": "bdev_raid_add_base_bdev", 00:26:05.659 "req_id": 1 00:26:05.659 } 00:26:05.659 Got JSON-RPC error response 00:26:05.659 response: 00:26:05.659 { 00:26:05.659 "code": -22, 00:26:05.659 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:05.659 } 00:26:05.916 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:26:05.916 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:05.916 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:05.916 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:05.916 00:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.851 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.109 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.109 "name": "raid_bdev1", 00:26:07.109 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:26:07.109 "strip_size_kb": 0, 00:26:07.109 "state": "online", 00:26:07.109 "raid_level": "raid1", 00:26:07.109 "superblock": true, 00:26:07.109 "num_base_bdevs": 2, 00:26:07.109 "num_base_bdevs_discovered": 1, 00:26:07.109 "num_base_bdevs_operational": 1, 00:26:07.109 
"base_bdevs_list": [ 00:26:07.109 { 00:26:07.109 "name": null, 00:26:07.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.109 "is_configured": false, 00:26:07.109 "data_offset": 2048, 00:26:07.109 "data_size": 63488 00:26:07.109 }, 00:26:07.109 { 00:26:07.109 "name": "BaseBdev2", 00:26:07.109 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:26:07.109 "is_configured": true, 00:26:07.109 "data_offset": 2048, 00:26:07.109 "data_size": 63488 00:26:07.109 } 00:26:07.109 ] 00:26:07.109 }' 00:26:07.109 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.109 00:10:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:07.368 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:07.368 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:07.368 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:07.368 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:07.368 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:07.368 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.368 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:07.627 "name": "raid_bdev1", 00:26:07.627 "uuid": "3882e982-ea7b-4a86-a0cb-169dda41dc13", 00:26:07.627 "strip_size_kb": 0, 00:26:07.627 "state": "online", 00:26:07.627 "raid_level": "raid1", 00:26:07.627 "superblock": true, 00:26:07.627 "num_base_bdevs": 2, 00:26:07.627 "num_base_bdevs_discovered": 1, 00:26:07.627 "num_base_bdevs_operational": 1, 00:26:07.627 "base_bdevs_list": [ 00:26:07.627 { 00:26:07.627 "name": null, 00:26:07.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.627 "is_configured": false, 00:26:07.627 "data_offset": 2048, 00:26:07.627 "data_size": 63488 00:26:07.627 }, 00:26:07.627 { 00:26:07.627 "name": "BaseBdev2", 00:26:07.627 "uuid": "1a6ac9cc-286b-52e0-954f-92d2f3331f58", 00:26:07.627 "is_configured": true, 00:26:07.627 "data_offset": 2048, 00:26:07.627 "data_size": 63488 00:26:07.627 } 00:26:07.627 ] 00:26:07.627 }' 00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 98658 00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 98658 ']' 00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 98658 00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98658 killing process with pid 98658 Received shutdown signal, test time was about 23.929177 seconds
00:26:07.627
00:26:07.627 Latency(us)
00:26:07.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:07.627 ===================================================================================================================
00:26:07.627 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98658'
00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 98658
00:26:07.627 00:10:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 98658
00:26:07.627 [2024-07-25 00:10:03.430001] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:26:07.627 [2024-07-25 00:10:03.430161] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:26:07.627 [2024-07-25 00:10:03.430274] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:26:07.627 [2024-07-25 00:10:03.430305] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline
00:26:07.885 [2024-07-25 00:10:03.603130] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0
00:26:09.263
00:26:09.263 real 0m29.291s
00:26:09.263 user 0m43.574s
00:26:09.263 sys 0m3.493s
00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:09.263 ************************************
00:26:09.263 END TEST raid_rebuild_test_sb_io ************************************
00:26:09.263 00:10:04 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4
00:26:09.263 00:10:04 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true
00:26:09.263 00:10:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:26:09.263 00:10:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:09.263 00:10:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:26:09.263 ************************************
00:26:09.263 START TEST raid_rebuild_test ************************************
00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true
00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1
00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4
00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false
00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false
00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true
00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=99457 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 99457 /var/tmp/spdk-raid.sock 00:26:09.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 99457 ']' 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:09.263 00:10:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.263 [2024-07-25 00:10:04.826373] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:26:09.263 [2024-07-25 00:10:04.826734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99457 ] 00:26:09.263 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:09.263 Zero copy mechanism will not be used. 00:26:09.263 [2024-07-25 00:10:04.994977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.522 [2024-07-25 00:10:05.170311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.522 [2024-07-25 00:10:05.334077] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.089 00:10:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:10.089 00:10:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:26:10.089 00:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:10.089 00:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:10.348 BaseBdev1_malloc 00:26:10.348 00:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:10.607 [2024-07-25 00:10:06.222463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:10.607 [2024-07-25 00:10:06.222560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:10.607 [2024-07-25 00:10:06.222592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:26:10.607 [2024-07-25 00:10:06.222607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:10.607 [2024-07-25 00:10:06.225139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:10.607 [2024-07-25 00:10:06.225203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:10.607 BaseBdev1 00:26:10.607 00:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:10.607 00:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:10.866 BaseBdev2_malloc 00:26:10.866 00:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:11.124 [2024-07-25 00:10:06.745153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:11.124 [2024-07-25 00:10:06.745478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.124 [2024-07-25 00:10:06.745628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:26:11.124 [2024-07-25 00:10:06.745764] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.124 [2024-07-25 00:10:06.748456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.124 [2024-07-25 00:10:06.748647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:11.124 BaseBdev2 00:26:11.124 00:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:11.124 00:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:11.124 BaseBdev3_malloc 00:26:11.389 00:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:11.389 [2024-07-25 00:10:07.192780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:11.389 [2024-07-25 00:10:07.192899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.389 [2024-07-25 00:10:07.192931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:26:11.389 [2024-07-25 00:10:07.192947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.389 [2024-07-25 00:10:07.195563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.389 [2024-07-25 00:10:07.195624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:11.389 BaseBdev3 00:26:11.389 00:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:11.389 00:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:11.649 BaseBdev4_malloc 00:26:11.649 00:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:11.907 [2024-07-25 00:10:07.652854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:11.907 [2024-07-25 00:10:07.652960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.907 [2024-07-25 00:10:07.652994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:26:11.907 [2024-07-25 00:10:07.653010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.907 [2024-07-25 00:10:07.655453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.907 BaseBdev4 00:26:11.907 [2024-07-25 00:10:07.655663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:11.907 00:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:12.166 spare_malloc 00:26:12.166 00:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:12.424 spare_delay 00:26:12.424 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:12.683 [2024-07-25 00:10:08.326979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:12.683 [2024-07-25 00:10:08.327084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.683 [2024-07-25 00:10:08.327117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:26:12.683 [2024-07-25 00:10:08.327134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.683 [2024-07-25 00:10:08.329537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.683 [2024-07-25 00:10:08.329599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:12.683 spare 00:26:12.683 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:12.683 [2024-07-25 00:10:08.539040] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:12.683 [2024-07-25 00:10:08.541077] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:12.683 [2024-07-25 00:10:08.541158] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:12.683 [2024-07-25 00:10:08.541228] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:12.683 [2024-07-25 00:10:08.541359] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:26:12.683 [2024-07-25 00:10:08.541375] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:12.683 [2024-07-25 00:10:08.541507] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:26:12.683 [2024-07-25 00:10:08.541901] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:26:12.683 [2024-07-25 00:10:08.541917] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:26:12.683 [2024-07-25 00:10:08.542084] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:26:12.942 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.201 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.201 "name": "raid_bdev1", 00:26:13.201 "uuid": "f0d2afb8-4503-4755-b497-68943f61f60d", 00:26:13.201 "strip_size_kb": 0, 00:26:13.201 "state": "online", 00:26:13.201 "raid_level": "raid1", 00:26:13.201 "superblock": false, 00:26:13.201 "num_base_bdevs": 4, 00:26:13.201 "num_base_bdevs_discovered": 4, 00:26:13.201 "num_base_bdevs_operational": 4, 00:26:13.201 "base_bdevs_list": [ 00:26:13.201 { 00:26:13.201 "name": "BaseBdev1", 00:26:13.201 "uuid": "b0cd38d2-4efc-5dab-9862-f48cc4fdb19c", 00:26:13.201 "is_configured": true, 00:26:13.201 "data_offset": 0, 00:26:13.201 "data_size": 65536 00:26:13.201 }, 00:26:13.201 { 00:26:13.201 "name": "BaseBdev2", 00:26:13.201 "uuid": "07f5e77c-0c38-5043-a456-c7f2f8f806bd", 00:26:13.201 "is_configured": true, 00:26:13.201 "data_offset": 0, 00:26:13.201 "data_size": 65536 00:26:13.201 }, 00:26:13.201 { 00:26:13.201 "name": "BaseBdev3", 00:26:13.201 "uuid": "6353e62a-5780-5ff5-b92b-eb39eb95414a", 00:26:13.201 "is_configured": true, 00:26:13.201 "data_offset": 0, 00:26:13.201 "data_size": 65536 00:26:13.201 }, 00:26:13.201 { 00:26:13.201 "name": "BaseBdev4", 00:26:13.201 "uuid": "acdb0157-fff0-510b-a2c8-c09da5dc2a92", 00:26:13.201 "is_configured": true, 00:26:13.201 "data_offset": 0, 00:26:13.201 "data_size": 65536 00:26:13.201 } 00:26:13.201 ] 00:26:13.201 }' 00:26:13.201 00:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.201 00:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.460 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:13.460 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:26:13.718 [2024-07-25 00:10:09.423593] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:13.718 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:26:13.718 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.718 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:13.995 00:10:09 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:13.995 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:14.266 [2024-07-25 00:10:09.855496] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005c70 00:26:14.266 /dev/nbd0 00:26:14.266 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:14.266 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:14.266 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:14.266 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:26:14.266 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:14.266 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:14.266 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:14.266 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:26:14.266 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:14.267 1+0 records in 00:26:14.267 1+0 records out 00:26:14.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472347 s, 8.7 MB/s 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:26:14.267 00:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:26:22.377 65536+0 records in 00:26:22.377 65536+0 records out 00:26:22.377 33554432 bytes (34 MB, 32 MiB) copied, 7.72862 s, 4.3 MB/s 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:22.377 00:10:17 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:22.377 [2024-07-25 00:10:17.893238] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:22.377 00:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:22.377 [2024-07-25 00:10:18.166541] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.377 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.634 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:22.634 "name": "raid_bdev1", 00:26:22.634 "uuid": "f0d2afb8-4503-4755-b497-68943f61f60d", 00:26:22.634 "strip_size_kb": 0, 00:26:22.634 "state": "online", 00:26:22.634 "raid_level": "raid1", 00:26:22.634 "superblock": false, 00:26:22.634 "num_base_bdevs": 4, 00:26:22.634 "num_base_bdevs_discovered": 3, 00:26:22.634 "num_base_bdevs_operational": 3, 
00:26:22.634 "base_bdevs_list": [ 00:26:22.634 { 00:26:22.634 "name": null, 00:26:22.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.634 "is_configured": false, 00:26:22.634 "data_offset": 0, 00:26:22.634 "data_size": 65536 00:26:22.634 }, 00:26:22.634 { 00:26:22.634 "name": "BaseBdev2", 00:26:22.634 "uuid": "07f5e77c-0c38-5043-a456-c7f2f8f806bd", 00:26:22.634 "is_configured": true, 00:26:22.634 "data_offset": 0, 00:26:22.634 "data_size": 65536 00:26:22.634 }, 00:26:22.634 { 00:26:22.634 "name": "BaseBdev3", 00:26:22.634 "uuid": "6353e62a-5780-5ff5-b92b-eb39eb95414a", 00:26:22.634 "is_configured": true, 00:26:22.634 "data_offset": 0, 00:26:22.634 "data_size": 65536 00:26:22.634 }, 00:26:22.634 { 00:26:22.634 "name": "BaseBdev4", 00:26:22.634 "uuid": "acdb0157-fff0-510b-a2c8-c09da5dc2a92", 00:26:22.634 "is_configured": true, 00:26:22.634 "data_offset": 0, 00:26:22.634 "data_size": 65536 00:26:22.634 } 00:26:22.634 ] 00:26:22.634 }' 00:26:22.634 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:22.634 00:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.892 00:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:23.150 [2024-07-25 00:10:18.982784] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:23.150 [2024-07-25 00:10:18.995016] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09890 00:26:23.150 [2024-07-25 00:10:18.997212] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:23.150 00:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:24.523 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:24.523 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:24.523 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:24.523 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:24.523 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:24.523 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.523 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.524 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:24.524 "name": "raid_bdev1", 00:26:24.524 "uuid": "f0d2afb8-4503-4755-b497-68943f61f60d", 00:26:24.524 "strip_size_kb": 0, 00:26:24.524 "state": "online", 00:26:24.524 "raid_level": "raid1", 00:26:24.524 "superblock": false, 00:26:24.524 "num_base_bdevs": 4, 00:26:24.524 "num_base_bdevs_discovered": 4, 00:26:24.524 "num_base_bdevs_operational": 4, 00:26:24.524 "process": { 00:26:24.524 "type": "rebuild", 00:26:24.524 "target": "spare", 00:26:24.524 "progress": { 00:26:24.524 "blocks": 24576, 00:26:24.524 "percent": 37 00:26:24.524 } 00:26:24.524 }, 00:26:24.524 "base_bdevs_list": [ 00:26:24.524 { 00:26:24.524 "name": "spare", 00:26:24.524 "uuid": "ac340c59-4c5c-5a23-88b7-087739e1f6b9", 00:26:24.524 "is_configured": true, 00:26:24.524 "data_offset": 0, 00:26:24.524 "data_size": 
65536 00:26:24.524 }, 00:26:24.524 { 00:26:24.524 "name": "BaseBdev2", 00:26:24.524 "uuid": "07f5e77c-0c38-5043-a456-c7f2f8f806bd", 00:26:24.524 "is_configured": true, 00:26:24.524 "data_offset": 0, 00:26:24.524 "data_size": 65536 00:26:24.524 }, 00:26:24.524 { 00:26:24.524 "name": "BaseBdev3", 00:26:24.524 "uuid": "6353e62a-5780-5ff5-b92b-eb39eb95414a", 00:26:24.524 "is_configured": true, 00:26:24.524 "data_offset": 0, 00:26:24.524 "data_size": 65536 00:26:24.524 }, 00:26:24.524 { 00:26:24.524 "name": "BaseBdev4", 00:26:24.524 "uuid": "acdb0157-fff0-510b-a2c8-c09da5dc2a92", 00:26:24.524 "is_configured": true, 00:26:24.524 "data_offset": 0, 00:26:24.524 "data_size": 65536 00:26:24.524 } 00:26:24.524 ] 00:26:24.524 }' 00:26:24.524 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:24.524 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:24.524 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:24.524 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:24.524 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:24.782 [2024-07-25 00:10:20.511900] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:24.782 [2024-07-25 00:10:20.605192] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:24.782 [2024-07-25 00:10:20.605293] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.782 [2024-07-25 00:10:20.605320] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:24.782 [2024-07-25 00:10:20.605346] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.782 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.040 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.040 "name": "raid_bdev1", 00:26:25.040 "uuid": "f0d2afb8-4503-4755-b497-68943f61f60d", 
00:26:25.040 "strip_size_kb": 0, 00:26:25.040 "state": "online", 00:26:25.040 "raid_level": "raid1", 00:26:25.040 "superblock": false, 00:26:25.040 "num_base_bdevs": 4, 00:26:25.040 "num_base_bdevs_discovered": 3, 00:26:25.041 "num_base_bdevs_operational": 3, 00:26:25.041 "base_bdevs_list": [ 00:26:25.041 { 00:26:25.041 "name": null, 00:26:25.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.041 "is_configured": false, 00:26:25.041 "data_offset": 0, 00:26:25.041 "data_size": 65536 00:26:25.041 }, 00:26:25.041 { 00:26:25.041 "name": "BaseBdev2", 00:26:25.041 "uuid": "07f5e77c-0c38-5043-a456-c7f2f8f806bd", 00:26:25.041 "is_configured": true, 00:26:25.041 "data_offset": 0, 00:26:25.041 "data_size": 65536 00:26:25.041 }, 00:26:25.041 { 00:26:25.041 "name": "BaseBdev3", 00:26:25.041 "uuid": "6353e62a-5780-5ff5-b92b-eb39eb95414a", 00:26:25.041 "is_configured": true, 00:26:25.041 "data_offset": 0, 00:26:25.041 "data_size": 65536 00:26:25.041 }, 00:26:25.041 { 00:26:25.041 "name": "BaseBdev4", 00:26:25.041 "uuid": "acdb0157-fff0-510b-a2c8-c09da5dc2a92", 00:26:25.041 "is_configured": true, 00:26:25.041 "data_offset": 0, 00:26:25.041 "data_size": 65536 00:26:25.041 } 00:26:25.041 ] 00:26:25.041 }' 00:26:25.041 00:10:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.041 00:10:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.298 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:25.298 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:25.298 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:25.298 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:25.298 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:25.298 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.556 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.814 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:25.814 "name": "raid_bdev1", 00:26:25.814 "uuid": "f0d2afb8-4503-4755-b497-68943f61f60d", 00:26:25.814 "strip_size_kb": 0, 00:26:25.814 "state": "online", 00:26:25.814 "raid_level": "raid1", 00:26:25.814 "superblock": false, 00:26:25.814 "num_base_bdevs": 4, 00:26:25.814 "num_base_bdevs_discovered": 3, 00:26:25.814 "num_base_bdevs_operational": 3, 00:26:25.814 "base_bdevs_list": [ 00:26:25.814 { 00:26:25.814 "name": null, 00:26:25.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.814 "is_configured": false, 00:26:25.814 "data_offset": 0, 00:26:25.814 "data_size": 65536 00:26:25.814 }, 00:26:25.814 { 00:26:25.814 "name": "BaseBdev2", 00:26:25.814 "uuid": "07f5e77c-0c38-5043-a456-c7f2f8f806bd", 00:26:25.814 "is_configured": true, 00:26:25.814 "data_offset": 0, 00:26:25.814 "data_size": 65536 00:26:25.814 }, 00:26:25.814 { 00:26:25.814 "name": "BaseBdev3", 00:26:25.814 "uuid": "6353e62a-5780-5ff5-b92b-eb39eb95414a", 00:26:25.814 "is_configured": true, 00:26:25.814 "data_offset": 0, 00:26:25.814 "data_size": 65536 00:26:25.814 }, 00:26:25.814 { 00:26:25.814 "name": "BaseBdev4", 00:26:25.814 "uuid": "acdb0157-fff0-510b-a2c8-c09da5dc2a92", 00:26:25.814 "is_configured": true, 
00:26:25.814 "data_offset": 0, 00:26:25.814 "data_size": 65536 00:26:25.814 } 00:26:25.814 ] 00:26:25.814 }' 00:26:25.814 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:25.814 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:25.814 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:25.814 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:25.814 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:25.814 [2024-07-25 00:10:21.654263] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:25.814 [2024-07-25 00:10:21.665663] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09960 00:26:25.814 [2024-07-25 00:10:21.668000] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:26.071 00:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:26:27.004 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:27.004 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:27.004 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:27.004 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:27.004 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:27.004 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.004 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.262 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:27.262 "name": "raid_bdev1", 00:26:27.262 "uuid": "f0d2afb8-4503-4755-b497-68943f61f60d", 00:26:27.262 "strip_size_kb": 0, 00:26:27.262 "state": "online", 00:26:27.262 "raid_level": "raid1", 00:26:27.262 "superblock": false, 00:26:27.262 "num_base_bdevs": 4, 00:26:27.262 "num_base_bdevs_discovered": 4, 00:26:27.262 "num_base_bdevs_operational": 4, 00:26:27.262 "process": { 00:26:27.262 "type": "rebuild", 00:26:27.262 "target": "spare", 00:26:27.262 "progress": { 00:26:27.262 "blocks": 24576, 00:26:27.262 "percent": 37 00:26:27.262 } 00:26:27.262 }, 00:26:27.262 "base_bdevs_list": [ 00:26:27.262 { 00:26:27.262 "name": "spare", 00:26:27.262 "uuid": "ac340c59-4c5c-5a23-88b7-087739e1f6b9", 00:26:27.262 "is_configured": true, 00:26:27.262 "data_offset": 0, 00:26:27.262 "data_size": 65536 00:26:27.262 }, 00:26:27.262 { 00:26:27.262 "name": "BaseBdev2", 00:26:27.262 "uuid": "07f5e77c-0c38-5043-a456-c7f2f8f806bd", 00:26:27.262 "is_configured": true, 00:26:27.262 "data_offset": 0, 00:26:27.262 "data_size": 65536 00:26:27.262 }, 00:26:27.262 { 00:26:27.262 "name": "BaseBdev3", 00:26:27.262 "uuid": "6353e62a-5780-5ff5-b92b-eb39eb95414a", 00:26:27.262 "is_configured": true, 00:26:27.262 "data_offset": 0, 00:26:27.262 "data_size": 65536 00:26:27.262 }, 00:26:27.262 { 00:26:27.262 "name": "BaseBdev4", 00:26:27.262 "uuid": "acdb0157-fff0-510b-a2c8-c09da5dc2a92", 00:26:27.262 
"is_configured": true, 00:26:27.262 "data_offset": 0, 00:26:27.262 "data_size": 65536 00:26:27.262 } 00:26:27.262 ] 00:26:27.262 }' 00:26:27.262 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:27.262 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:27.262 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:27.262 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:27.262 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:26:27.262 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:26:27.262 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:26:27.262 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:26:27.262 00:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:27.519 [2024-07-25 00:10:23.202240] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:27.519 [2024-07-25 00:10:23.276209] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000d09960 00:26:27.519 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:26:27.519 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:26:27.519 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:27.519 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:27.519 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:27.519 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:27.519 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:27.519 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.519 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:27.777 "name": "raid_bdev1", 00:26:27.777 "uuid": "f0d2afb8-4503-4755-b497-68943f61f60d", 00:26:27.777 "strip_size_kb": 0, 00:26:27.777 "state": "online", 00:26:27.777 "raid_level": "raid1", 00:26:27.777 "superblock": false, 00:26:27.777 "num_base_bdevs": 4, 00:26:27.777 "num_base_bdevs_discovered": 3, 00:26:27.777 "num_base_bdevs_operational": 3, 00:26:27.777 "process": { 00:26:27.777 "type": "rebuild", 00:26:27.777 "target": "spare", 00:26:27.777 "progress": { 00:26:27.777 "blocks": 36864, 00:26:27.777 "percent": 56 00:26:27.777 } 00:26:27.777 }, 00:26:27.777 "base_bdevs_list": [ 00:26:27.777 { 00:26:27.777 "name": "spare", 00:26:27.777 "uuid": "ac340c59-4c5c-5a23-88b7-087739e1f6b9", 00:26:27.777 "is_configured": true, 00:26:27.777 "data_offset": 0, 00:26:27.777 "data_size": 65536 00:26:27.777 }, 00:26:27.777 { 00:26:27.777 "name": null, 00:26:27.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.777 "is_configured": false, 00:26:27.777 
"data_offset": 0, 00:26:27.777 "data_size": 65536 00:26:27.777 }, 00:26:27.777 { 00:26:27.777 "name": "BaseBdev3", 00:26:27.777 "uuid": "6353e62a-5780-5ff5-b92b-eb39eb95414a", 00:26:27.777 "is_configured": true, 00:26:27.777 "data_offset": 0, 00:26:27.777 "data_size": 65536 00:26:27.777 }, 00:26:27.777 { 00:26:27.777 "name": "BaseBdev4", 00:26:27.777 "uuid": "acdb0157-fff0-510b-a2c8-c09da5dc2a92", 00:26:27.777 "is_configured": true, 00:26:27.777 "data_offset": 0, 00:26:27.777 "data_size": 65536 00:26:27.777 } 00:26:27.777 ] 00:26:27.777 }' 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=813 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.777 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.035 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:28.035 "name": "raid_bdev1", 00:26:28.035 "uuid": "f0d2afb8-4503-4755-b497-68943f61f60d", 00:26:28.035 "strip_size_kb": 0, 00:26:28.035 "state": "online", 00:26:28.035 "raid_level": "raid1", 00:26:28.035 "superblock": false, 00:26:28.035 "num_base_bdevs": 4, 00:26:28.035 "num_base_bdevs_discovered": 3, 00:26:28.035 "num_base_bdevs_operational": 3, 00:26:28.035 "process": { 00:26:28.035 "type": "rebuild", 00:26:28.035 "target": "spare", 00:26:28.035 "progress": { 00:26:28.035 "blocks": 43008, 00:26:28.035 "percent": 65 00:26:28.035 } 00:26:28.035 }, 00:26:28.035 "base_bdevs_list": [ 00:26:28.035 { 00:26:28.035 "name": "spare", 00:26:28.035 "uuid": "ac340c59-4c5c-5a23-88b7-087739e1f6b9", 00:26:28.035 "is_configured": true, 00:26:28.035 "data_offset": 0, 00:26:28.035 "data_size": 65536 00:26:28.035 }, 00:26:28.035 { 00:26:28.035 "name": null, 00:26:28.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.035 "is_configured": false, 00:26:28.035 "data_offset": 0, 00:26:28.035 "data_size": 65536 00:26:28.035 }, 00:26:28.035 { 00:26:28.035 "name": "BaseBdev3", 00:26:28.035 "uuid": "6353e62a-5780-5ff5-b92b-eb39eb95414a", 00:26:28.035 "is_configured": true, 00:26:28.035 "data_offset": 0, 00:26:28.035 "data_size": 65536 00:26:28.035 }, 00:26:28.035 { 00:26:28.035 "name": "BaseBdev4", 00:26:28.035 "uuid": "acdb0157-fff0-510b-a2c8-c09da5dc2a92", 00:26:28.035 "is_configured": true, 
00:26:28.035 "data_offset": 0, 00:26:28.035 "data_size": 65536 00:26:28.035 } 00:26:28.035 ] 00:26:28.035 }' 00:26:28.035 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:28.035 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:28.035 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:28.292 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:28.292 00:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:29.224 [2024-07-25 00:10:24.884311] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:29.224 [2024-07-25 00:10:24.884394] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:29.224 [2024-07-25 00:10:24.884464] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.224 00:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:29.224 00:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:29.224 00:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:29.224 00:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:29.224 00:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:29.224 00:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:29.224 00:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.224 00:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:29.530 "name": "raid_bdev1", 00:26:29.530 "uuid": "f0d2afb8-4503-4755-b497-68943f61f60d", 00:26:29.530 "strip_size_kb": 0, 00:26:29.530 "state": "online", 00:26:29.530 "raid_level": "raid1", 00:26:29.530 "superblock": false, 00:26:29.530 "num_base_bdevs": 4, 00:26:29.530 "num_base_bdevs_discovered": 3, 00:26:29.530 "num_base_bdevs_operational": 3, 00:26:29.530 "base_bdevs_list": [ 00:26:29.530 { 00:26:29.530 "name": "spare", 00:26:29.530 "uuid": "ac340c59-4c5c-5a23-88b7-087739e1f6b9", 00:26:29.530 "is_configured": true, 00:26:29.530 "data_offset": 0, 00:26:29.530 "data_size": 65536 00:26:29.530 }, 00:26:29.530 { 00:26:29.530 "name": null, 00:26:29.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.530 "is_configured": false, 00:26:29.530 "data_offset": 0, 00:26:29.530 "data_size": 65536 00:26:29.530 }, 00:26:29.530 { 00:26:29.530 "name": "BaseBdev3", 00:26:29.530 "uuid": "6353e62a-5780-5ff5-b92b-eb39eb95414a", 00:26:29.530 "is_configured": true, 00:26:29.530 "data_offset": 0, 00:26:29.530 "data_size": 65536 00:26:29.530 }, 00:26:29.530 { 00:26:29.530 "name": "BaseBdev4", 00:26:29.530 "uuid": "acdb0157-fff0-510b-a2c8-c09da5dc2a92", 00:26:29.530 "is_configured": true, 00:26:29.530 "data_offset": 0, 00:26:29.530 "data_size": 65536 00:26:29.530 } 00:26:29.530 ] 00:26:29.530 }' 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.530 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.787 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:29.787 "name": "raid_bdev1", 00:26:29.787 "uuid": "f0d2afb8-4503-4755-b497-68943f61f60d", 00:26:29.787 "strip_size_kb": 0, 00:26:29.787 "state": "online", 00:26:29.787 "raid_level": "raid1", 00:26:29.787 "superblock": false, 00:26:29.787 "num_base_bdevs": 4, 00:26:29.787 "num_base_bdevs_discovered": 3, 00:26:29.787 "num_base_bdevs_operational": 3, 00:26:29.787 "base_bdevs_list": [ 00:26:29.787 { 00:26:29.787 "name": "spare", 00:26:29.787 "uuid": "ac340c59-4c5c-5a23-88b7-087739e1f6b9", 00:26:29.787 "is_configured": true, 00:26:29.787 "data_offset": 0, 00:26:29.788 "data_size": 65536 00:26:29.788 }, 00:26:29.788 { 00:26:29.788 "name": null, 00:26:29.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.788 "is_configured": false, 00:26:29.788 "data_offset": 0, 00:26:29.788 "data_size": 65536 00:26:29.788 }, 00:26:29.788 { 00:26:29.788 "name": "BaseBdev3", 00:26:29.788 "uuid": "6353e62a-5780-5ff5-b92b-eb39eb95414a", 00:26:29.788 "is_configured": true, 00:26:29.788 "data_offset": 0, 00:26:29.788 "data_size": 65536 00:26:29.788 }, 00:26:29.788 { 00:26:29.788 "name": "BaseBdev4", 00:26:29.788 "uuid": "acdb0157-fff0-510b-a2c8-c09da5dc2a92", 00:26:29.788 "is_configured": true, 00:26:29.788 "data_offset": 0, 00:26:29.788 "data_size": 65536 00:26:29.788 } 00:26:29.788 ] 00:26:29.788 }' 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.788 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.046 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:30.046 "name": "raid_bdev1", 00:26:30.046 "uuid": "f0d2afb8-4503-4755-b497-68943f61f60d", 00:26:30.046 "strip_size_kb": 0, 00:26:30.046 "state": "online", 00:26:30.046 "raid_level": "raid1", 00:26:30.046 "superblock": false, 00:26:30.046 "num_base_bdevs": 4, 00:26:30.046 "num_base_bdevs_discovered": 3, 00:26:30.046 "num_base_bdevs_operational": 3, 00:26:30.046 "base_bdevs_list": [ 00:26:30.046 { 00:26:30.046 "name": "spare", 00:26:30.046 "uuid": "ac340c59-4c5c-5a23-88b7-087739e1f6b9", 00:26:30.046 "is_configured": true, 00:26:30.046 "data_offset": 0, 00:26:30.046 "data_size": 65536 00:26:30.046 }, 00:26:30.046 { 00:26:30.046 "name": null, 00:26:30.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.046 "is_configured": false, 00:26:30.046 "data_offset": 0, 00:26:30.046 "data_size": 65536 00:26:30.046 }, 00:26:30.046 { 00:26:30.046 "name": "BaseBdev3", 00:26:30.046 "uuid": "6353e62a-5780-5ff5-b92b-eb39eb95414a", 00:26:30.046 "is_configured": true, 00:26:30.046 "data_offset": 0, 00:26:30.046 "data_size": 65536 00:26:30.046 }, 00:26:30.046 { 00:26:30.046 "name": "BaseBdev4", 00:26:30.046 "uuid": "acdb0157-fff0-510b-a2c8-c09da5dc2a92", 00:26:30.046 "is_configured": true, 00:26:30.046 "data_offset": 0, 00:26:30.046 "data_size": 65536 00:26:30.046 } 00:26:30.046 ] 00:26:30.046 }' 00:26:30.046 00:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:30.046 00:10:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.303 00:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:30.560 [2024-07-25 00:10:26.251772] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:30.560 [2024-07-25 00:10:26.252045] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:30.560 [2024-07-25 00:10:26.252153] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:30.560 [2024-07-25 00:10:26.252254] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:30.560 [2024-07-25 00:10:26.252275] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:26:30.560 00:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.560 00:10:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@735 -- # jq length 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:30.818 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:31.077 /dev/nbd0 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:31.077 1+0 records in 00:26:31.077 1+0 records out 00:26:31.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244738 s, 16.7 MB/s 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:31.077 00:10:26 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:31.077 00:10:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:31.336 /dev/nbd1 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:31.336 1+0 records in 00:26:31.336 1+0 records out 00:26:31.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623368 s, 6.6 MB/s 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:31.336 00:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:31.594 00:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:31.594 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:31.594 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:31.594 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:31.594 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:31.594 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:31.594 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:31.852 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:31.852 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:31.852 00:10:27 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:31.852 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:31.852 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:31.852 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:31.852 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:31.852 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:31.852 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:31.852 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 99457 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 99457 ']' 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 99457 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99457 00:26:32.110 killing process with pid 99457 00:26:32.110 Received shutdown signal, test time was about 60.000000 seconds 00:26:32.110 00:26:32.110 Latency(us) 00:26:32.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.110 =================================================================================================================== 00:26:32.110 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99457' 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 99457 00:26:32.110 [2024-07-25 00:10:27.810463] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:32.110 00:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 99457 00:26:32.368 [2024-07-25 00:10:28.187803] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:33.742 
************************************ 00:26:33.742 END TEST raid_rebuild_test 00:26:33.742 ************************************ 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:26:33.742 00:26:33.742 real 0m24.503s 00:26:33.742 user 0m31.503s 00:26:33.742 sys 0m4.077s 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.742 00:10:29 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:26:33.742 00:10:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:33.742 00:10:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.742 00:10:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:33.742 ************************************ 00:26:33.742 START TEST raid_rebuild_test_sb 00:26:33.742 ************************************ 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:26:33.742 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local 
strip_size 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=99990 00:26:33.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 99990 /var/tmp/spdk-raid.sock 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 99990 ']' 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:33.743 00:10:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.743 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:33.743 Zero copy mechanism will not be used. 00:26:33.743 [2024-07-25 00:10:29.393390] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:26:33.743 [2024-07-25 00:10:29.393565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99990 ] 00:26:33.743 [2024-07-25 00:10:29.564583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.001 [2024-07-25 00:10:29.746325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.259 [2024-07-25 00:10:29.927219] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:34.517 00:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:34.517 00:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:26:34.517 00:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:34.517 00:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:34.775 BaseBdev1_malloc 00:26:34.775 00:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:35.032 [2024-07-25 00:10:30.787215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:35.032 [2024-07-25 00:10:30.787607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.032 [2024-07-25 00:10:30.787653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:26:35.032 [2024-07-25 00:10:30.787672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.032 [2024-07-25 00:10:30.790329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.032 [2024-07-25 00:10:30.790389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:35.032 BaseBdev1 00:26:35.032 00:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:35.032 00:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:35.290 BaseBdev2_malloc 00:26:35.290 00:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:35.548 [2024-07-25 00:10:31.270694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:35.548 [2024-07-25 00:10:31.270779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.548 [2024-07-25 00:10:31.270876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:26:35.548 [2024-07-25 00:10:31.270905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.548 [2024-07-25 00:10:31.273262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.548 [2024-07-25 00:10:31.273308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:35.548 BaseBdev2 00:26:35.548 00:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # 
for bdev in "${base_bdevs[@]}" 00:26:35.548 00:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:35.806 BaseBdev3_malloc 00:26:35.806 00:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:36.065 [2024-07-25 00:10:31.774475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:36.065 [2024-07-25 00:10:31.774581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.065 [2024-07-25 00:10:31.774614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:26:36.065 [2024-07-25 00:10:31.774631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.065 [2024-07-25 00:10:31.777069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.065 [2024-07-25 00:10:31.777116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:36.065 BaseBdev3 00:26:36.065 00:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:36.065 00:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:36.322 BaseBdev4_malloc 00:26:36.322 00:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:36.580 [2024-07-25 00:10:32.211696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:36.580 [2024-07-25 00:10:32.211795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.580 [2024-07-25 00:10:32.211865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:26:36.580 [2024-07-25 00:10:32.211886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.580 [2024-07-25 00:10:32.214079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.580 [2024-07-25 00:10:32.214125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:36.580 BaseBdev4 00:26:36.580 00:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:36.838 spare_malloc 00:26:36.838 00:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:36.838 spare_delay 00:26:37.096 00:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:37.096 [2024-07-25 00:10:32.959995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:37.096 [2024-07-25 00:10:32.960298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.096 [2024-07-25 00:10:32.960381] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x51600000a280 00:26:37.096 [2024-07-25 00:10:32.960618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.096 [2024-07-25 00:10:32.963287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.096 [2024-07-25 00:10:32.963461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:37.096 spare 00:26:37.353 00:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:37.353 [2024-07-25 00:10:33.192089] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:37.353 [2024-07-25 00:10:33.194480] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:37.353 [2024-07-25 00:10:33.194708] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:37.353 [2024-07-25 00:10:33.194947] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:37.353 [2024-07-25 00:10:33.195369] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:26:37.353 [2024-07-25 00:10:33.195400] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:37.353 [2024-07-25 00:10:33.195553] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:26:37.353 [2024-07-25 00:10:33.195988] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:26:37.353 [2024-07-25 00:10:33.196007] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:26:37.353 [2024-07-25 00:10:33.196232] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.353 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.615 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:37.615 "name": "raid_bdev1", 00:26:37.615 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 
00:26:37.615 "strip_size_kb": 0, 00:26:37.615 "state": "online", 00:26:37.615 "raid_level": "raid1", 00:26:37.615 "superblock": true, 00:26:37.615 "num_base_bdevs": 4, 00:26:37.615 "num_base_bdevs_discovered": 4, 00:26:37.615 "num_base_bdevs_operational": 4, 00:26:37.616 "base_bdevs_list": [ 00:26:37.616 { 00:26:37.616 "name": "BaseBdev1", 00:26:37.616 "uuid": "d0cc2d89-8d5f-5542-a0ef-7286356730c8", 00:26:37.616 "is_configured": true, 00:26:37.616 "data_offset": 2048, 00:26:37.616 "data_size": 63488 00:26:37.616 }, 00:26:37.616 { 00:26:37.616 "name": "BaseBdev2", 00:26:37.616 "uuid": "e5f3bd5e-971e-5caf-87cd-6550e7815eab", 00:26:37.616 "is_configured": true, 00:26:37.616 "data_offset": 2048, 00:26:37.616 "data_size": 63488 00:26:37.616 }, 00:26:37.616 { 00:26:37.616 "name": "BaseBdev3", 00:26:37.616 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:37.616 "is_configured": true, 00:26:37.616 "data_offset": 2048, 00:26:37.616 "data_size": 63488 00:26:37.616 }, 00:26:37.616 { 00:26:37.616 "name": "BaseBdev4", 00:26:37.616 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:37.616 "is_configured": true, 00:26:37.616 "data_offset": 2048, 00:26:37.616 "data_size": 63488 00:26:37.616 } 00:26:37.616 ] 00:26:37.616 }' 00:26:37.616 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:37.616 00:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:38.181 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:38.181 00:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:26:38.181 [2024-07-25 00:10:33.984741] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:38.181 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:26:38.181 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:38.181 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:38.438 00:10:34 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:38.438 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:38.696 [2024-07-25 00:10:34.492576] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005c70 00:26:38.696 /dev/nbd0 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.696 1+0 records in 00:26:38.696 1+0 records out 00:26:38.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281548 s, 14.5 MB/s 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:26:38.696 00:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:26:46.828 63488+0 records in 00:26:46.828 63488+0 records out 00:26:46.828 32505856 bytes (33 MB, 31 MiB) copied, 8.14815 s, 4.0 MB/s 00:26:46.828 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:46.828 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:46.828 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:46.828 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:46.828 00:10:42 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:46.828 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:46.828 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:47.395 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:47.395 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:47.395 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:47.395 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:47.395 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:47.395 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:47.395 [2024-07-25 00:10:42.963139] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:47.395 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:47.395 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:47.395 00:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:47.395 [2024-07-25 00:10:43.155340] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:47.395 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:47.396 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:47.396 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:47.396 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:47.396 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:47.396 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:47.396 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:47.396 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:47.396 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:47.396 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:47.396 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.396 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.654 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:47.654 "name": "raid_bdev1", 00:26:47.654 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:47.654 "strip_size_kb": 0, 00:26:47.654 "state": "online", 00:26:47.654 "raid_level": "raid1", 00:26:47.654 "superblock": true, 00:26:47.654 "num_base_bdevs": 4, 00:26:47.654 "num_base_bdevs_discovered": 3, 00:26:47.654 "num_base_bdevs_operational": 3, 00:26:47.654 "base_bdevs_list": [ 00:26:47.654 { 00:26:47.654 "name": null, 00:26:47.654 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:47.654 "is_configured": false, 00:26:47.654 "data_offset": 2048, 00:26:47.654 "data_size": 63488 00:26:47.654 }, 00:26:47.654 { 00:26:47.654 "name": "BaseBdev2", 00:26:47.654 "uuid": "e5f3bd5e-971e-5caf-87cd-6550e7815eab", 00:26:47.654 "is_configured": true, 00:26:47.654 "data_offset": 2048, 00:26:47.654 "data_size": 63488 00:26:47.654 }, 00:26:47.654 { 00:26:47.654 "name": "BaseBdev3", 00:26:47.654 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:47.654 "is_configured": true, 00:26:47.654 "data_offset": 2048, 00:26:47.654 "data_size": 63488 00:26:47.654 }, 00:26:47.654 { 00:26:47.654 "name": "BaseBdev4", 00:26:47.654 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:47.654 "is_configured": true, 00:26:47.654 "data_offset": 2048, 00:26:47.654 "data_size": 63488 00:26:47.654 } 00:26:47.654 ] 00:26:47.654 }' 00:26:47.654 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:47.654 00:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.218 00:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:48.218 [2024-07-25 00:10:43.995586] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:48.218 [2024-07-25 00:10:44.010861] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca3020 00:26:48.218 [2024-07-25 00:10:44.012773] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:48.218 00:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:49.591 "name": "raid_bdev1", 00:26:49.591 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:49.591 "strip_size_kb": 0, 00:26:49.591 "state": "online", 00:26:49.591 "raid_level": "raid1", 00:26:49.591 "superblock": true, 00:26:49.591 "num_base_bdevs": 4, 00:26:49.591 "num_base_bdevs_discovered": 4, 00:26:49.591 "num_base_bdevs_operational": 4, 00:26:49.591 "process": { 00:26:49.591 "type": "rebuild", 00:26:49.591 "target": "spare", 00:26:49.591 "progress": { 00:26:49.591 "blocks": 24576, 00:26:49.591 "percent": 38 00:26:49.591 } 00:26:49.591 }, 00:26:49.591 "base_bdevs_list": [ 00:26:49.591 { 00:26:49.591 "name": "spare", 00:26:49.591 "uuid": "8ed0d28e-0bd9-5911-8f65-670de5dc6e8f", 00:26:49.591 "is_configured": true, 00:26:49.591 "data_offset": 2048, 00:26:49.591 "data_size": 63488 00:26:49.591 }, 00:26:49.591 { 
00:26:49.591 "name": "BaseBdev2", 00:26:49.591 "uuid": "e5f3bd5e-971e-5caf-87cd-6550e7815eab", 00:26:49.591 "is_configured": true, 00:26:49.591 "data_offset": 2048, 00:26:49.591 "data_size": 63488 00:26:49.591 }, 00:26:49.591 { 00:26:49.591 "name": "BaseBdev3", 00:26:49.591 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:49.591 "is_configured": true, 00:26:49.591 "data_offset": 2048, 00:26:49.591 "data_size": 63488 00:26:49.591 }, 00:26:49.591 { 00:26:49.591 "name": "BaseBdev4", 00:26:49.591 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:49.591 "is_configured": true, 00:26:49.591 "data_offset": 2048, 00:26:49.591 "data_size": 63488 00:26:49.591 } 00:26:49.591 ] 00:26:49.591 }' 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:49.591 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:49.850 [2024-07-25 00:10:45.560098] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:49.850 [2024-07-25 00:10:45.620952] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:49.850 [2024-07-25 00:10:45.621055] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:49.850 [2024-07-25 00:10:45.621082] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:49.850 [2024-07-25 00:10:45.621097] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.850 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.108 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:50.108 "name": "raid_bdev1", 00:26:50.108 "uuid": 
"ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:50.108 "strip_size_kb": 0, 00:26:50.108 "state": "online", 00:26:50.108 "raid_level": "raid1", 00:26:50.108 "superblock": true, 00:26:50.108 "num_base_bdevs": 4, 00:26:50.108 "num_base_bdevs_discovered": 3, 00:26:50.108 "num_base_bdevs_operational": 3, 00:26:50.108 "base_bdevs_list": [ 00:26:50.108 { 00:26:50.108 "name": null, 00:26:50.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.108 "is_configured": false, 00:26:50.108 "data_offset": 2048, 00:26:50.108 "data_size": 63488 00:26:50.108 }, 00:26:50.108 { 00:26:50.108 "name": "BaseBdev2", 00:26:50.108 "uuid": "e5f3bd5e-971e-5caf-87cd-6550e7815eab", 00:26:50.108 "is_configured": true, 00:26:50.108 "data_offset": 2048, 00:26:50.108 "data_size": 63488 00:26:50.108 }, 00:26:50.108 { 00:26:50.108 "name": "BaseBdev3", 00:26:50.108 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:50.108 "is_configured": true, 00:26:50.108 "data_offset": 2048, 00:26:50.108 "data_size": 63488 00:26:50.108 }, 00:26:50.108 { 00:26:50.108 "name": "BaseBdev4", 00:26:50.108 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:50.108 "is_configured": true, 00:26:50.108 "data_offset": 2048, 00:26:50.108 "data_size": 63488 00:26:50.108 } 00:26:50.108 ] 00:26:50.108 }' 00:26:50.108 00:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:50.108 00:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.366 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:50.366 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:50.366 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:50.366 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:50.366 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:50.366 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.366 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.625 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:50.625 "name": "raid_bdev1", 00:26:50.625 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:50.625 "strip_size_kb": 0, 00:26:50.625 "state": "online", 00:26:50.625 "raid_level": "raid1", 00:26:50.625 "superblock": true, 00:26:50.625 "num_base_bdevs": 4, 00:26:50.625 "num_base_bdevs_discovered": 3, 00:26:50.625 "num_base_bdevs_operational": 3, 00:26:50.625 "base_bdevs_list": [ 00:26:50.625 { 00:26:50.625 "name": null, 00:26:50.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.625 "is_configured": false, 00:26:50.625 "data_offset": 2048, 00:26:50.625 "data_size": 63488 00:26:50.625 }, 00:26:50.625 { 00:26:50.625 "name": "BaseBdev2", 00:26:50.625 "uuid": "e5f3bd5e-971e-5caf-87cd-6550e7815eab", 00:26:50.625 "is_configured": true, 00:26:50.625 "data_offset": 2048, 00:26:50.625 "data_size": 63488 00:26:50.625 }, 00:26:50.625 { 00:26:50.625 "name": "BaseBdev3", 00:26:50.625 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:50.625 "is_configured": true, 00:26:50.625 "data_offset": 2048, 00:26:50.625 "data_size": 63488 00:26:50.625 }, 00:26:50.625 { 00:26:50.625 "name": "BaseBdev4", 00:26:50.625 
"uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:50.625 "is_configured": true, 00:26:50.625 "data_offset": 2048, 00:26:50.625 "data_size": 63488 00:26:50.625 } 00:26:50.625 ] 00:26:50.625 }' 00:26:50.625 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:50.625 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:50.625 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:50.625 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:50.625 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:50.883 [2024-07-25 00:10:46.714179] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:50.883 [2024-07-25 00:10:46.726626] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca30f0 00:26:50.884 [2024-07-25 00:10:46.728919] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:50.884 00:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:26:52.259 00:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:52.259 00:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:52.259 00:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:52.259 00:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:52.259 00:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:52.259 00:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.259 00:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.259 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.259 "name": "raid_bdev1", 00:26:52.259 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:52.259 "strip_size_kb": 0, 00:26:52.259 "state": "online", 00:26:52.259 "raid_level": "raid1", 00:26:52.259 "superblock": true, 00:26:52.259 "num_base_bdevs": 4, 00:26:52.259 "num_base_bdevs_discovered": 4, 00:26:52.259 "num_base_bdevs_operational": 4, 00:26:52.259 "process": { 00:26:52.259 "type": "rebuild", 00:26:52.259 "target": "spare", 00:26:52.259 "progress": { 00:26:52.259 "blocks": 24576, 00:26:52.259 "percent": 38 00:26:52.259 } 00:26:52.259 }, 00:26:52.259 "base_bdevs_list": [ 00:26:52.259 { 00:26:52.259 "name": "spare", 00:26:52.259 "uuid": "8ed0d28e-0bd9-5911-8f65-670de5dc6e8f", 00:26:52.259 "is_configured": true, 00:26:52.259 "data_offset": 2048, 00:26:52.259 "data_size": 63488 00:26:52.259 }, 00:26:52.259 { 00:26:52.259 "name": "BaseBdev2", 00:26:52.259 "uuid": "e5f3bd5e-971e-5caf-87cd-6550e7815eab", 00:26:52.259 "is_configured": true, 00:26:52.259 "data_offset": 2048, 00:26:52.259 "data_size": 63488 00:26:52.259 }, 00:26:52.259 { 00:26:52.259 "name": "BaseBdev3", 00:26:52.259 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:52.259 "is_configured": true, 00:26:52.259 "data_offset": 2048, 00:26:52.259 "data_size": 63488 00:26:52.259 
}, 00:26:52.259 { 00:26:52.259 "name": "BaseBdev4", 00:26:52.259 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:52.259 "is_configured": true, 00:26:52.259 "data_offset": 2048, 00:26:52.259 "data_size": 63488 00:26:52.259 } 00:26:52.259 ] 00:26:52.259 }' 00:26:52.259 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.259 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:52.259 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.259 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:52.259 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:26:52.259 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:26:52.259 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:26:52.259 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:26:52.259 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:26:52.259 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:26:52.259 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:52.517 [2024-07-25 00:10:48.231252] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:52.517 [2024-07-25 00:10:48.336644] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000ca30f0 00:26:52.517 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:26:52.517 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:26:52.517 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:52.517 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:52.517 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:52.517 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:52.517 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:52.517 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.517 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.775 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.775 "name": "raid_bdev1", 00:26:52.775 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:52.775 "strip_size_kb": 0, 00:26:52.775 "state": "online", 00:26:52.775 "raid_level": "raid1", 00:26:52.775 "superblock": true, 00:26:52.775 "num_base_bdevs": 4, 00:26:52.775 "num_base_bdevs_discovered": 3, 00:26:52.775 "num_base_bdevs_operational": 3, 00:26:52.775 "process": { 00:26:52.775 "type": "rebuild", 00:26:52.775 "target": "spare", 00:26:52.775 "progress": { 00:26:52.775 "blocks": 34816, 00:26:52.775 "percent": 54 00:26:52.775 } 00:26:52.775 }, 00:26:52.775 
"base_bdevs_list": [ 00:26:52.775 { 00:26:52.775 "name": "spare", 00:26:52.775 "uuid": "8ed0d28e-0bd9-5911-8f65-670de5dc6e8f", 00:26:52.775 "is_configured": true, 00:26:52.775 "data_offset": 2048, 00:26:52.775 "data_size": 63488 00:26:52.775 }, 00:26:52.775 { 00:26:52.775 "name": null, 00:26:52.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.775 "is_configured": false, 00:26:52.775 "data_offset": 2048, 00:26:52.775 "data_size": 63488 00:26:52.775 }, 00:26:52.775 { 00:26:52.775 "name": "BaseBdev3", 00:26:52.775 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:52.775 "is_configured": true, 00:26:52.775 "data_offset": 2048, 00:26:52.775 "data_size": 63488 00:26:52.775 }, 00:26:52.775 { 00:26:52.775 "name": "BaseBdev4", 00:26:52.775 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:52.775 "is_configured": true, 00:26:52.775 "data_offset": 2048, 00:26:52.775 "data_size": 63488 00:26:52.775 } 00:26:52.775 ] 00:26:52.775 }' 00:26:52.775 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.775 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:53.033 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:53.033 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:53.033 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=838 00:26:53.033 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:53.033 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:53.033 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:53.033 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:53.033 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:53.033 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:53.033 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.033 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.291 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:53.291 "name": "raid_bdev1", 00:26:53.291 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:53.291 "strip_size_kb": 0, 00:26:53.291 "state": "online", 00:26:53.291 "raid_level": "raid1", 00:26:53.291 "superblock": true, 00:26:53.291 "num_base_bdevs": 4, 00:26:53.291 "num_base_bdevs_discovered": 3, 00:26:53.291 "num_base_bdevs_operational": 3, 00:26:53.291 "process": { 00:26:53.291 "type": "rebuild", 00:26:53.291 "target": "spare", 00:26:53.291 "progress": { 00:26:53.291 "blocks": 40960, 00:26:53.291 "percent": 64 00:26:53.291 } 00:26:53.291 }, 00:26:53.291 "base_bdevs_list": [ 00:26:53.291 { 00:26:53.291 "name": "spare", 00:26:53.291 "uuid": "8ed0d28e-0bd9-5911-8f65-670de5dc6e8f", 00:26:53.291 "is_configured": true, 00:26:53.291 "data_offset": 2048, 00:26:53.291 "data_size": 63488 00:26:53.291 }, 00:26:53.291 { 00:26:53.291 "name": null, 00:26:53.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.291 "is_configured": false, 
00:26:53.291 "data_offset": 2048, 00:26:53.291 "data_size": 63488 00:26:53.291 }, 00:26:53.291 { 00:26:53.291 "name": "BaseBdev3", 00:26:53.291 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:53.291 "is_configured": true, 00:26:53.291 "data_offset": 2048, 00:26:53.291 "data_size": 63488 00:26:53.291 }, 00:26:53.291 { 00:26:53.291 "name": "BaseBdev4", 00:26:53.291 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:53.291 "is_configured": true, 00:26:53.291 "data_offset": 2048, 00:26:53.291 "data_size": 63488 00:26:53.291 } 00:26:53.291 ] 00:26:53.291 }' 00:26:53.291 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:53.291 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:53.291 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:53.291 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:53.291 00:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:54.223 [2024-07-25 00:10:49.945140] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:54.223 [2024-07-25 00:10:49.945269] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:54.223 [2024-07-25 00:10:49.945437] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:54.223 00:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:54.223 00:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:54.223 00:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:54.223 00:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:54.223 00:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:54.223 00:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:54.223 00:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.223 00:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:54.482 "name": "raid_bdev1", 00:26:54.482 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:54.482 "strip_size_kb": 0, 00:26:54.482 "state": "online", 00:26:54.482 "raid_level": "raid1", 00:26:54.482 "superblock": true, 00:26:54.482 "num_base_bdevs": 4, 00:26:54.482 "num_base_bdevs_discovered": 3, 00:26:54.482 "num_base_bdevs_operational": 3, 00:26:54.482 "base_bdevs_list": [ 00:26:54.482 { 00:26:54.482 "name": "spare", 00:26:54.482 "uuid": "8ed0d28e-0bd9-5911-8f65-670de5dc6e8f", 00:26:54.482 "is_configured": true, 00:26:54.482 "data_offset": 2048, 00:26:54.482 "data_size": 63488 00:26:54.482 }, 00:26:54.482 { 00:26:54.482 "name": null, 00:26:54.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.482 "is_configured": false, 00:26:54.482 "data_offset": 2048, 00:26:54.482 "data_size": 63488 00:26:54.482 }, 00:26:54.482 { 00:26:54.482 "name": "BaseBdev3", 00:26:54.482 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:54.482 "is_configured": true, 
00:26:54.482 "data_offset": 2048, 00:26:54.482 "data_size": 63488 00:26:54.482 }, 00:26:54.482 { 00:26:54.482 "name": "BaseBdev4", 00:26:54.482 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:54.482 "is_configured": true, 00:26:54.482 "data_offset": 2048, 00:26:54.482 "data_size": 63488 00:26:54.482 } 00:26:54.482 ] 00:26:54.482 }' 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.482 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.744 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:54.744 "name": "raid_bdev1", 00:26:54.744 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:54.744 "strip_size_kb": 0, 00:26:54.744 "state": "online", 00:26:54.744 "raid_level": "raid1", 00:26:54.744 "superblock": true, 00:26:54.744 "num_base_bdevs": 4, 00:26:54.744 "num_base_bdevs_discovered": 3, 00:26:54.744 "num_base_bdevs_operational": 3, 00:26:54.744 "base_bdevs_list": [ 00:26:54.744 { 00:26:54.744 "name": "spare", 00:26:54.744 "uuid": "8ed0d28e-0bd9-5911-8f65-670de5dc6e8f", 00:26:54.744 "is_configured": true, 00:26:54.744 "data_offset": 2048, 00:26:54.744 "data_size": 63488 00:26:54.744 }, 00:26:54.744 { 00:26:54.744 "name": null, 00:26:54.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.744 "is_configured": false, 00:26:54.744 "data_offset": 2048, 00:26:54.744 "data_size": 63488 00:26:54.745 }, 00:26:54.745 { 00:26:54.745 "name": "BaseBdev3", 00:26:54.745 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:54.745 "is_configured": true, 00:26:54.745 "data_offset": 2048, 00:26:54.745 "data_size": 63488 00:26:54.745 }, 00:26:54.745 { 00:26:54.745 "name": "BaseBdev4", 00:26:54.745 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:54.745 "is_configured": true, 00:26:54.745 "data_offset": 2048, 00:26:54.745 "data_size": 63488 00:26:54.745 } 00:26:54.745 ] 00:26:54.745 }' 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.745 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.012 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:55.012 "name": "raid_bdev1", 00:26:55.012 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:55.012 "strip_size_kb": 0, 00:26:55.012 "state": "online", 00:26:55.012 "raid_level": "raid1", 00:26:55.012 "superblock": true, 00:26:55.012 "num_base_bdevs": 4, 00:26:55.012 "num_base_bdevs_discovered": 3, 00:26:55.012 "num_base_bdevs_operational": 3, 00:26:55.012 "base_bdevs_list": [ 00:26:55.012 { 00:26:55.012 "name": "spare", 00:26:55.012 "uuid": "8ed0d28e-0bd9-5911-8f65-670de5dc6e8f", 00:26:55.012 "is_configured": true, 00:26:55.012 "data_offset": 2048, 00:26:55.012 "data_size": 63488 00:26:55.012 }, 00:26:55.012 { 00:26:55.012 "name": null, 00:26:55.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.012 "is_configured": false, 00:26:55.012 "data_offset": 2048, 00:26:55.012 "data_size": 63488 00:26:55.012 }, 00:26:55.012 { 00:26:55.012 "name": "BaseBdev3", 00:26:55.013 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:55.013 "is_configured": true, 00:26:55.013 "data_offset": 2048, 00:26:55.013 "data_size": 63488 00:26:55.013 }, 00:26:55.013 { 00:26:55.013 "name": "BaseBdev4", 00:26:55.013 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:55.013 "is_configured": true, 00:26:55.013 "data_offset": 2048, 00:26:55.013 "data_size": 63488 00:26:55.013 } 00:26:55.013 ] 00:26:55.013 }' 00:26:55.013 00:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:55.013 00:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.271 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:55.528 [2024-07-25 00:10:51.262492] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:55.528 [2024-07-25 00:10:51.262738] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:55.528 [2024-07-25 00:10:51.262892] 
bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:55.528 [2024-07-25 00:10:51.262993] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:55.528 [2024-07-25 00:10:51.263010] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:26:55.528 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.528 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:55.786 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:56.044 /dev/nbd0 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:56.044 1+0 records in 00:26:56.044 1+0 records out 00:26:56.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323565 s, 12.7 MB/s 00:26:56.044 00:10:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.044 00:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:56.302 /dev/nbd1 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:56.302 1+0 records in 00:26:56.302 1+0 records out 00:26:56.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300387 s, 13.6 MB/s 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:56.302 00:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:26:56.303 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.303 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.303 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:56.561 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:56.561 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:56.561 00:10:52 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:56.561 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:56.561 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:56.561 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:56.561 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:56.819 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:56.819 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:56.819 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:56.819 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:56.819 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:56.819 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:56.819 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:56.819 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:56.819 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:56.819 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:57.078 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:57.078 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:57.078 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:57.078 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.078 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.078 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:57.078 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:57.078 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.078 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:26:57.078 00:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:57.336 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:57.594 [2024-07-25 00:10:53.311934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:57.594 [2024-07-25 00:10:53.312026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:57.594 [2024-07-25 00:10:53.312068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:26:57.594 [2024-07-25 00:10:53.312084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:57.595 [2024-07-25 00:10:53.314697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:57.595 
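The passthru delete/re-create cycle traced just above is what drives the spare's re-add: the moment a passthru bdev reappears on top of spare_delay, the raid module's examine path finds the raid superblock left on it and claims the device back into raid_bdev1. A minimal sketch of that step, assuming the same rpc.py path, socket, and bdev names shown in this log (an illustration, not the verbatim bdev_raid.sh helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Drop the passthru so raid_bdev1 loses its "spare" member.
    "$rpc" -s "$sock" bdev_passthru_delete spare
    # Re-create it over the same delay bdev; examine finds the stale raid
    # superblock and claims the device back into raid_bdev1, which is what
    # the vbdev_passthru and bdev_raid NOTICE lines around this point report.
    "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare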
[2024-07-25 00:10:53.314742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:57.595 [2024-07-25 00:10:53.314906] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:57.595 [2024-07-25 00:10:53.314962] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:57.595 [2024-07-25 00:10:53.315173] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:57.595 [2024-07-25 00:10:53.315315] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:57.595 spare 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.595 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.595 [2024-07-25 00:10:53.415445] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:26:57.595 [2024-07-25 00:10:53.415687] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:57.595 [2024-07-25 00:10:53.415927] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc17a0 00:26:57.595 [2024-07-25 00:10:53.416386] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:26:57.595 [2024-07-25 00:10:53.416407] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:26:57.595 [2024-07-25 00:10:53.416578] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:57.852 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:57.852 "name": "raid_bdev1", 00:26:57.852 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:57.852 "strip_size_kb": 0, 00:26:57.852 "state": "online", 00:26:57.852 "raid_level": "raid1", 00:26:57.852 "superblock": true, 00:26:57.852 "num_base_bdevs": 4, 00:26:57.852 "num_base_bdevs_discovered": 3, 00:26:57.852 "num_base_bdevs_operational": 3, 00:26:57.852 "base_bdevs_list": [ 00:26:57.852 { 00:26:57.852 "name": "spare", 00:26:57.852 "uuid": "8ed0d28e-0bd9-5911-8f65-670de5dc6e8f", 00:26:57.852 "is_configured": true, 00:26:57.852 "data_offset": 2048, 00:26:57.852 "data_size": 63488 00:26:57.852 }, 00:26:57.852 { 
00:26:57.852 "name": null, 00:26:57.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.852 "is_configured": false, 00:26:57.852 "data_offset": 2048, 00:26:57.852 "data_size": 63488 00:26:57.852 }, 00:26:57.852 { 00:26:57.852 "name": "BaseBdev3", 00:26:57.852 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:57.852 "is_configured": true, 00:26:57.852 "data_offset": 2048, 00:26:57.852 "data_size": 63488 00:26:57.852 }, 00:26:57.852 { 00:26:57.852 "name": "BaseBdev4", 00:26:57.852 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:57.852 "is_configured": true, 00:26:57.852 "data_offset": 2048, 00:26:57.852 "data_size": 63488 00:26:57.852 } 00:26:57.852 ] 00:26:57.852 }' 00:26:57.852 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:57.852 00:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.110 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:58.110 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:58.110 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:58.110 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:58.110 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:58.110 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.110 00:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.368 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:58.368 "name": "raid_bdev1", 00:26:58.368 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:58.368 "strip_size_kb": 0, 00:26:58.368 "state": "online", 00:26:58.368 "raid_level": "raid1", 00:26:58.368 "superblock": true, 00:26:58.368 "num_base_bdevs": 4, 00:26:58.368 "num_base_bdevs_discovered": 3, 00:26:58.369 "num_base_bdevs_operational": 3, 00:26:58.369 "base_bdevs_list": [ 00:26:58.369 { 00:26:58.369 "name": "spare", 00:26:58.369 "uuid": "8ed0d28e-0bd9-5911-8f65-670de5dc6e8f", 00:26:58.369 "is_configured": true, 00:26:58.369 "data_offset": 2048, 00:26:58.369 "data_size": 63488 00:26:58.369 }, 00:26:58.369 { 00:26:58.369 "name": null, 00:26:58.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.369 "is_configured": false, 00:26:58.369 "data_offset": 2048, 00:26:58.369 "data_size": 63488 00:26:58.369 }, 00:26:58.369 { 00:26:58.369 "name": "BaseBdev3", 00:26:58.369 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:58.369 "is_configured": true, 00:26:58.369 "data_offset": 2048, 00:26:58.369 "data_size": 63488 00:26:58.369 }, 00:26:58.369 { 00:26:58.369 "name": "BaseBdev4", 00:26:58.369 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:58.369 "is_configured": true, 00:26:58.369 "data_offset": 2048, 00:26:58.369 "data_size": 63488 00:26:58.369 } 00:26:58.369 ] 00:26:58.369 }' 00:26:58.369 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:58.369 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:58.369 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:58.369 00:10:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:58.369 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.369 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:58.627 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:26:58.627 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:58.899 [2024-07-25 00:10:54.648970] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.899 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.157 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.157 "name": "raid_bdev1", 00:26:59.157 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:26:59.157 "strip_size_kb": 0, 00:26:59.157 "state": "online", 00:26:59.157 "raid_level": "raid1", 00:26:59.157 "superblock": true, 00:26:59.157 "num_base_bdevs": 4, 00:26:59.157 "num_base_bdevs_discovered": 2, 00:26:59.157 "num_base_bdevs_operational": 2, 00:26:59.157 "base_bdevs_list": [ 00:26:59.157 { 00:26:59.157 "name": null, 00:26:59.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.157 "is_configured": false, 00:26:59.157 "data_offset": 2048, 00:26:59.157 "data_size": 63488 00:26:59.157 }, 00:26:59.157 { 00:26:59.157 "name": null, 00:26:59.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.157 "is_configured": false, 00:26:59.157 "data_offset": 2048, 00:26:59.157 "data_size": 63488 00:26:59.157 }, 00:26:59.157 { 00:26:59.157 "name": "BaseBdev3", 00:26:59.157 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:26:59.157 "is_configured": true, 00:26:59.157 "data_offset": 2048, 00:26:59.157 "data_size": 63488 00:26:59.157 }, 00:26:59.157 { 00:26:59.157 "name": "BaseBdev4", 00:26:59.157 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:26:59.157 "is_configured": true, 00:26:59.157 "data_offset": 2048, 00:26:59.157 "data_size": 
63488 00:26:59.157 } 00:26:59.157 ] 00:26:59.157 }' 00:26:59.157 00:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.157 00:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.415 00:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:59.673 [2024-07-25 00:10:55.489250] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:59.673 [2024-07-25 00:10:55.489471] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:26:59.673 [2024-07-25 00:10:55.489493] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:59.673 [2024-07-25 00:10:55.489556] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:59.673 [2024-07-25 00:10:55.501209] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1870 00:26:59.673 [2024-07-25 00:10:55.503499] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:59.673 00:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:01.047 "name": "raid_bdev1", 00:27:01.047 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:27:01.047 "strip_size_kb": 0, 00:27:01.047 "state": "online", 00:27:01.047 "raid_level": "raid1", 00:27:01.047 "superblock": true, 00:27:01.047 "num_base_bdevs": 4, 00:27:01.047 "num_base_bdevs_discovered": 3, 00:27:01.047 "num_base_bdevs_operational": 3, 00:27:01.047 "process": { 00:27:01.047 "type": "rebuild", 00:27:01.047 "target": "spare", 00:27:01.047 "progress": { 00:27:01.047 "blocks": 24576, 00:27:01.047 "percent": 38 00:27:01.047 } 00:27:01.047 }, 00:27:01.047 "base_bdevs_list": [ 00:27:01.047 { 00:27:01.047 "name": "spare", 00:27:01.047 "uuid": "8ed0d28e-0bd9-5911-8f65-670de5dc6e8f", 00:27:01.047 "is_configured": true, 00:27:01.047 "data_offset": 2048, 00:27:01.047 "data_size": 63488 00:27:01.047 }, 00:27:01.047 { 00:27:01.047 "name": null, 00:27:01.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.047 "is_configured": false, 00:27:01.047 "data_offset": 2048, 00:27:01.047 "data_size": 63488 00:27:01.047 }, 00:27:01.047 { 00:27:01.047 "name": "BaseBdev3", 00:27:01.047 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:27:01.047 "is_configured": true, 00:27:01.047 "data_offset": 2048, 
00:27:01.047 "data_size": 63488 00:27:01.047 }, 00:27:01.047 { 00:27:01.047 "name": "BaseBdev4", 00:27:01.047 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:27:01.047 "is_configured": true, 00:27:01.047 "data_offset": 2048, 00:27:01.047 "data_size": 63488 00:27:01.047 } 00:27:01.047 ] 00:27:01.047 }' 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:01.047 00:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:01.305 [2024-07-25 00:10:57.037971] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:01.305 [2024-07-25 00:10:57.111684] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:01.306 [2024-07-25 00:10:57.111969] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.306 [2024-07-25 00:10:57.112004] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:01.306 [2024-07-25 00:10:57.112017] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.306 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.564 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.564 "name": "raid_bdev1", 00:27:01.564 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:27:01.564 "strip_size_kb": 0, 00:27:01.564 "state": "online", 00:27:01.564 "raid_level": "raid1", 00:27:01.564 "superblock": true, 00:27:01.564 "num_base_bdevs": 4, 00:27:01.564 "num_base_bdevs_discovered": 2, 00:27:01.564 "num_base_bdevs_operational": 2, 00:27:01.564 "base_bdevs_list": [ 00:27:01.564 { 00:27:01.564 "name": null, 00:27:01.564 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:01.564 "is_configured": false, 00:27:01.564 "data_offset": 2048, 00:27:01.564 "data_size": 63488 00:27:01.564 }, 00:27:01.564 { 00:27:01.564 "name": null, 00:27:01.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.564 "is_configured": false, 00:27:01.564 "data_offset": 2048, 00:27:01.564 "data_size": 63488 00:27:01.564 }, 00:27:01.564 { 00:27:01.564 "name": "BaseBdev3", 00:27:01.564 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:27:01.564 "is_configured": true, 00:27:01.564 "data_offset": 2048, 00:27:01.564 "data_size": 63488 00:27:01.564 }, 00:27:01.564 { 00:27:01.564 "name": "BaseBdev4", 00:27:01.564 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:27:01.564 "is_configured": true, 00:27:01.564 "data_offset": 2048, 00:27:01.564 "data_size": 63488 00:27:01.564 } 00:27:01.564 ] 00:27:01.564 }' 00:27:01.564 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.564 00:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.129 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:02.129 [2024-07-25 00:10:57.920771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:02.129 [2024-07-25 00:10:57.920894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.129 [2024-07-25 00:10:57.920962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:27:02.129 [2024-07-25 00:10:57.920976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.129 [2024-07-25 00:10:57.921568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.129 [2024-07-25 00:10:57.921614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:02.129 [2024-07-25 00:10:57.921741] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:02.129 [2024-07-25 00:10:57.921757] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:27:02.129 [2024-07-25 00:10:57.921774] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:02.129 [2024-07-25 00:10:57.921802] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:02.129 spare 00:27:02.129 [2024-07-25 00:10:57.933236] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1940 00:27:02.129 [2024-07-25 00:10:57.935442] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:02.129 00:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:27:03.501 00:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:03.501 00:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:03.501 00:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:03.501 00:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:03.501 00:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:03.501 00:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.501 00:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.501 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:03.501 "name": "raid_bdev1", 00:27:03.501 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:27:03.501 "strip_size_kb": 0, 00:27:03.501 "state": "online", 00:27:03.501 "raid_level": "raid1", 00:27:03.501 "superblock": true, 00:27:03.501 "num_base_bdevs": 4, 00:27:03.501 "num_base_bdevs_discovered": 3, 00:27:03.501 "num_base_bdevs_operational": 3, 00:27:03.501 "process": { 00:27:03.501 "type": "rebuild", 00:27:03.501 "target": "spare", 00:27:03.501 "progress": { 00:27:03.501 "blocks": 24576, 00:27:03.501 "percent": 38 00:27:03.501 } 00:27:03.501 }, 00:27:03.501 "base_bdevs_list": [ 00:27:03.501 { 00:27:03.501 "name": "spare", 00:27:03.501 "uuid": "8ed0d28e-0bd9-5911-8f65-670de5dc6e8f", 00:27:03.501 "is_configured": true, 00:27:03.501 "data_offset": 2048, 00:27:03.501 "data_size": 63488 00:27:03.501 }, 00:27:03.501 { 00:27:03.501 "name": null, 00:27:03.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.501 "is_configured": false, 00:27:03.501 "data_offset": 2048, 00:27:03.501 "data_size": 63488 00:27:03.501 }, 00:27:03.501 { 00:27:03.501 "name": "BaseBdev3", 00:27:03.501 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:27:03.501 "is_configured": true, 00:27:03.501 "data_offset": 2048, 00:27:03.501 "data_size": 63488 00:27:03.501 }, 00:27:03.501 { 00:27:03.501 "name": "BaseBdev4", 00:27:03.501 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:27:03.501 "is_configured": true, 00:27:03.501 "data_offset": 2048, 00:27:03.501 "data_size": 63488 00:27:03.501 } 00:27:03.501 ] 00:27:03.501 }' 00:27:03.501 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:03.501 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:03.501 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:03.501 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:03.501 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:03.759 [2024-07-25 00:10:59.473745] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:03.759 [2024-07-25 00:10:59.543480] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:03.759 [2024-07-25 00:10:59.543754] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:03.759 [2024-07-25 00:10:59.543781] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:03.759 [2024-07-25 00:10:59.543796] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.759 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.017 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:04.017 "name": "raid_bdev1", 00:27:04.017 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:27:04.017 "strip_size_kb": 0, 00:27:04.017 "state": "online", 00:27:04.017 "raid_level": "raid1", 00:27:04.017 "superblock": true, 00:27:04.017 "num_base_bdevs": 4, 00:27:04.017 "num_base_bdevs_discovered": 2, 00:27:04.017 "num_base_bdevs_operational": 2, 00:27:04.017 "base_bdevs_list": [ 00:27:04.017 { 00:27:04.017 "name": null, 00:27:04.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.017 "is_configured": false, 00:27:04.017 "data_offset": 2048, 00:27:04.017 "data_size": 63488 00:27:04.017 }, 00:27:04.017 { 00:27:04.017 "name": null, 00:27:04.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.017 "is_configured": false, 00:27:04.017 "data_offset": 2048, 00:27:04.017 "data_size": 63488 00:27:04.017 }, 00:27:04.017 { 00:27:04.017 "name": "BaseBdev3", 00:27:04.017 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:27:04.017 "is_configured": true, 00:27:04.017 "data_offset": 2048, 00:27:04.017 "data_size": 63488 00:27:04.017 }, 00:27:04.017 { 00:27:04.017 "name": "BaseBdev4", 00:27:04.017 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:27:04.017 "is_configured": true, 00:27:04.017 "data_offset": 2048, 00:27:04.017 "data_size": 63488 00:27:04.017 } 00:27:04.017 ] 00:27:04.017 }' 
00:27:04.017 00:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:04.017 00:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.275 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:04.275 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:04.275 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:04.275 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:04.275 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:04.275 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.275 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.533 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:04.533 "name": "raid_bdev1", 00:27:04.533 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:27:04.533 "strip_size_kb": 0, 00:27:04.533 "state": "online", 00:27:04.533 "raid_level": "raid1", 00:27:04.533 "superblock": true, 00:27:04.533 "num_base_bdevs": 4, 00:27:04.533 "num_base_bdevs_discovered": 2, 00:27:04.533 "num_base_bdevs_operational": 2, 00:27:04.533 "base_bdevs_list": [ 00:27:04.533 { 00:27:04.533 "name": null, 00:27:04.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.533 "is_configured": false, 00:27:04.533 "data_offset": 2048, 00:27:04.533 "data_size": 63488 00:27:04.533 }, 00:27:04.533 { 00:27:04.533 "name": null, 00:27:04.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.533 "is_configured": false, 00:27:04.533 "data_offset": 2048, 00:27:04.533 "data_size": 63488 00:27:04.533 }, 00:27:04.533 { 00:27:04.533 "name": "BaseBdev3", 00:27:04.533 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:27:04.533 "is_configured": true, 00:27:04.533 "data_offset": 2048, 00:27:04.533 "data_size": 63488 00:27:04.533 }, 00:27:04.533 { 00:27:04.533 "name": "BaseBdev4", 00:27:04.533 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:27:04.533 "is_configured": true, 00:27:04.533 "data_offset": 2048, 00:27:04.533 "data_size": 63488 00:27:04.533 } 00:27:04.533 ] 00:27:04.533 }' 00:27:04.533 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:04.533 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:04.533 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:04.533 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:04.533 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:04.791 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:05.049 [2024-07-25 00:11:00.844127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:05.049 [2024-07-25 00:11:00.844211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:27:05.049 [2024-07-25 00:11:00.844261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:27:05.049 [2024-07-25 00:11:00.844294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.049 [2024-07-25 00:11:00.844764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.049 [2024-07-25 00:11:00.844792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:05.049 [2024-07-25 00:11:00.844928] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:05.049 [2024-07-25 00:11:00.844972] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:27:05.049 [2024-07-25 00:11:00.844984] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:05.049 BaseBdev1 00:27:05.049 00:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.019 00:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.277 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:06.277 "name": "raid_bdev1", 00:27:06.277 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:27:06.277 "strip_size_kb": 0, 00:27:06.277 "state": "online", 00:27:06.277 "raid_level": "raid1", 00:27:06.277 "superblock": true, 00:27:06.277 "num_base_bdevs": 4, 00:27:06.277 "num_base_bdevs_discovered": 2, 00:27:06.277 "num_base_bdevs_operational": 2, 00:27:06.277 "base_bdevs_list": [ 00:27:06.277 { 00:27:06.277 "name": null, 00:27:06.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.277 "is_configured": false, 00:27:06.277 "data_offset": 2048, 00:27:06.277 "data_size": 63488 00:27:06.277 }, 00:27:06.277 { 00:27:06.277 "name": null, 00:27:06.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.277 "is_configured": false, 00:27:06.277 "data_offset": 2048, 00:27:06.277 "data_size": 63488 00:27:06.277 }, 00:27:06.278 { 00:27:06.278 "name": "BaseBdev3", 00:27:06.278 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:27:06.278 "is_configured": 
true, 00:27:06.278 "data_offset": 2048, 00:27:06.278 "data_size": 63488 00:27:06.278 }, 00:27:06.278 { 00:27:06.278 "name": "BaseBdev4", 00:27:06.278 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:27:06.278 "is_configured": true, 00:27:06.278 "data_offset": 2048, 00:27:06.278 "data_size": 63488 00:27:06.278 } 00:27:06.278 ] 00:27:06.278 }' 00:27:06.278 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:06.278 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:06.844 "name": "raid_bdev1", 00:27:06.844 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:27:06.844 "strip_size_kb": 0, 00:27:06.844 "state": "online", 00:27:06.844 "raid_level": "raid1", 00:27:06.844 "superblock": true, 00:27:06.844 "num_base_bdevs": 4, 00:27:06.844 "num_base_bdevs_discovered": 2, 00:27:06.844 "num_base_bdevs_operational": 2, 00:27:06.844 "base_bdevs_list": [ 00:27:06.844 { 00:27:06.844 "name": null, 00:27:06.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.844 "is_configured": false, 00:27:06.844 "data_offset": 2048, 00:27:06.844 "data_size": 63488 00:27:06.844 }, 00:27:06.844 { 00:27:06.844 "name": null, 00:27:06.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.844 "is_configured": false, 00:27:06.844 "data_offset": 2048, 00:27:06.844 "data_size": 63488 00:27:06.844 }, 00:27:06.844 { 00:27:06.844 "name": "BaseBdev3", 00:27:06.844 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:27:06.844 "is_configured": true, 00:27:06.844 "data_offset": 2048, 00:27:06.844 "data_size": 63488 00:27:06.844 }, 00:27:06.844 { 00:27:06.844 "name": "BaseBdev4", 00:27:06.844 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:27:06.844 "is_configured": true, 00:27:06.844 "data_offset": 2048, 00:27:06.844 "data_size": 63488 00:27:06.844 } 00:27:06.844 ] 00:27:06.844 }' 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 
-- # local es=0 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:06.844 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:07.102 [2024-07-25 00:11:02.917195] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:07.102 [2024-07-25 00:11:02.917476] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:27:07.102 [2024-07-25 00:11:02.917516] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:07.102 request: 00:27:07.102 { 00:27:07.102 "base_bdev": "BaseBdev1", 00:27:07.102 "raid_bdev": "raid_bdev1", 00:27:07.102 "method": "bdev_raid_add_base_bdev", 00:27:07.102 "req_id": 1 00:27:07.102 } 00:27:07.102 Got JSON-RPC error response 00:27:07.102 response: 00:27:07.102 { 00:27:07.102 "code": -22, 00:27:07.102 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:07.102 } 00:27:07.102 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:27:07.102 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:07.102 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:07.102 00:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:07.102 00:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.478 00:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.479 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:08.479 "name": "raid_bdev1", 00:27:08.479 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:27:08.479 "strip_size_kb": 0, 00:27:08.479 "state": "online", 00:27:08.479 "raid_level": "raid1", 00:27:08.479 "superblock": true, 00:27:08.479 "num_base_bdevs": 4, 00:27:08.479 "num_base_bdevs_discovered": 2, 00:27:08.479 "num_base_bdevs_operational": 2, 00:27:08.479 "base_bdevs_list": [ 00:27:08.479 { 00:27:08.479 "name": null, 00:27:08.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.479 "is_configured": false, 00:27:08.479 "data_offset": 2048, 00:27:08.479 "data_size": 63488 00:27:08.479 }, 00:27:08.479 { 00:27:08.479 "name": null, 00:27:08.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.479 "is_configured": false, 00:27:08.479 "data_offset": 2048, 00:27:08.479 "data_size": 63488 00:27:08.479 }, 00:27:08.479 { 00:27:08.479 "name": "BaseBdev3", 00:27:08.479 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:27:08.479 "is_configured": true, 00:27:08.479 "data_offset": 2048, 00:27:08.479 "data_size": 63488 00:27:08.479 }, 00:27:08.479 { 00:27:08.479 "name": "BaseBdev4", 00:27:08.479 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:27:08.479 "is_configured": true, 00:27:08.479 "data_offset": 2048, 00:27:08.479 "data_size": 63488 00:27:08.479 } 00:27:08.479 ] 00:27:08.479 }' 00:27:08.479 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:08.479 00:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.736 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:08.736 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:08.736 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:08.736 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:08.736 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:08.736 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.736 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:08.995 "name": "raid_bdev1", 00:27:08.995 "uuid": "ceb246c3-566b-45d9-babb-5806eab76f10", 00:27:08.995 "strip_size_kb": 0, 00:27:08.995 "state": "online", 00:27:08.995 "raid_level": "raid1", 00:27:08.995 "superblock": 
true, 00:27:08.995 "num_base_bdevs": 4, 00:27:08.995 "num_base_bdevs_discovered": 2, 00:27:08.995 "num_base_bdevs_operational": 2, 00:27:08.995 "base_bdevs_list": [ 00:27:08.995 { 00:27:08.995 "name": null, 00:27:08.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.995 "is_configured": false, 00:27:08.995 "data_offset": 2048, 00:27:08.995 "data_size": 63488 00:27:08.995 }, 00:27:08.995 { 00:27:08.995 "name": null, 00:27:08.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.995 "is_configured": false, 00:27:08.995 "data_offset": 2048, 00:27:08.995 "data_size": 63488 00:27:08.995 }, 00:27:08.995 { 00:27:08.995 "name": "BaseBdev3", 00:27:08.995 "uuid": "d2d9e03c-2c65-52f5-a456-70dc93bef645", 00:27:08.995 "is_configured": true, 00:27:08.995 "data_offset": 2048, 00:27:08.995 "data_size": 63488 00:27:08.995 }, 00:27:08.995 { 00:27:08.995 "name": "BaseBdev4", 00:27:08.995 "uuid": "444273d2-e641-52ba-9953-839e91c39818", 00:27:08.995 "is_configured": true, 00:27:08.995 "data_offset": 2048, 00:27:08.995 "data_size": 63488 00:27:08.995 } 00:27:08.995 ] 00:27:08.995 }' 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 99990 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 99990 ']' 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 99990 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99990 00:27:08.995 killing process with pid 99990 00:27:08.995 Received shutdown signal, test time was about 60.000000 seconds 00:27:08.995 00:27:08.995 Latency(us) 00:27:08.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.995 =================================================================================================================== 00:27:08.995 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99990' 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 99990 00:27:08.995 [2024-07-25 00:11:04.819174] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:08.995 00:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 99990 00:27:08.995 [2024-07-25 00:11:04.819338] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:08.995 [2024-07-25 00:11:04.819464] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:27:08.995 [2024-07-25 00:11:04.819487] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:27:09.561 [2024-07-25 00:11:05.227079] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:10.494 00:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:27:10.494 00:27:10.494 real 0m37.023s 00:27:10.494 user 0m50.962s 00:27:10.494 sys 0m5.642s 00:27:10.494 00:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:10.494 ************************************ 00:27:10.494 END TEST raid_rebuild_test_sb 00:27:10.494 ************************************ 00:27:10.494 00:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.753 00:11:06 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:27:10.753 00:11:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:10.753 00:11:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:10.753 00:11:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:10.753 ************************************ 00:27:10.753 START TEST raid_rebuild_test_io 00:27:10.753 ************************************ 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=100887 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 100887 /var/tmp/spdk-raid.sock 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 100887 ']' 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:10.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:10.753 00:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:10.753 [2024-07-25 00:11:06.461608] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:27:10.753 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:10.753 Zero copy mechanism will not be used. 
00:27:10.753 [2024-07-25 00:11:06.461954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100887 ] 00:27:10.753 [2024-07-25 00:11:06.620493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.011 [2024-07-25 00:11:06.798577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.268 [2024-07-25 00:11:06.963366] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:11.834 00:11:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:11.834 00:11:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:27:11.834 00:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:11.834 00:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:11.834 BaseBdev1_malloc 00:27:12.093 00:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:12.093 [2024-07-25 00:11:07.954402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:12.093 [2024-07-25 00:11:07.954511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.093 [2024-07-25 00:11:07.954563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:27:12.093 [2024-07-25 00:11:07.954593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.093 [2024-07-25 00:11:07.957176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.093 [2024-07-25 00:11:07.957365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:12.093 BaseBdev1 00:27:12.351 00:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:12.351 00:11:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:12.609 BaseBdev2_malloc 00:27:12.609 00:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:12.867 [2024-07-25 00:11:08.488171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:12.867 [2024-07-25 00:11:08.488437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.867 [2024-07-25 00:11:08.488583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:27:12.867 [2024-07-25 00:11:08.488617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.867 [2024-07-25 00:11:08.491179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.867 [2024-07-25 00:11:08.491231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:12.867 BaseBdev2 00:27:12.867 00:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # 
for bdev in "${base_bdevs[@]}" 00:27:12.867 00:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:13.125 BaseBdev3_malloc 00:27:13.125 00:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:13.125 [2024-07-25 00:11:08.963950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:13.125 [2024-07-25 00:11:08.964251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.125 [2024-07-25 00:11:08.964327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:27:13.125 [2024-07-25 00:11:08.964559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.125 [2024-07-25 00:11:08.967295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.125 [2024-07-25 00:11:08.967346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:13.125 BaseBdev3 00:27:13.125 00:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:13.125 00:11:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:13.383 BaseBdev4_malloc 00:27:13.383 00:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:27:13.642 [2024-07-25 00:11:09.480244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:27:13.642 [2024-07-25 00:11:09.480334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.642 [2024-07-25 00:11:09.480373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:27:13.642 [2024-07-25 00:11:09.480400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.642 [2024-07-25 00:11:09.483099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.642 [2024-07-25 00:11:09.483159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:13.642 BaseBdev4 00:27:13.642 00:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:14.207 spare_malloc 00:27:14.207 00:11:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:14.207 spare_delay 00:27:14.207 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:14.480 [2024-07-25 00:11:10.224299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:14.480 [2024-07-25 00:11:10.224578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:14.480 [2024-07-25 00:11:10.224622] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x51600000a280 00:27:14.480 [2024-07-25 00:11:10.224640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:14.480 [2024-07-25 00:11:10.226972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:14.480 [2024-07-25 00:11:10.227020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:14.480 spare 00:27:14.480 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:27:14.751 [2024-07-25 00:11:10.448392] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:14.751 [2024-07-25 00:11:10.450837] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:14.751 [2024-07-25 00:11:10.450975] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:14.751 [2024-07-25 00:11:10.451051] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:14.751 [2024-07-25 00:11:10.451199] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:27:14.751 [2024-07-25 00:11:10.451218] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:14.751 [2024-07-25 00:11:10.451377] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:27:14.751 [2024-07-25 00:11:10.451776] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:27:14.751 [2024-07-25 00:11:10.451792] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:27:14.751 [2024-07-25 00:11:10.452197] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.751 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.009 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.009 "name": "raid_bdev1", 00:27:15.009 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 
00:27:15.009 "strip_size_kb": 0, 00:27:15.009 "state": "online", 00:27:15.009 "raid_level": "raid1", 00:27:15.009 "superblock": false, 00:27:15.009 "num_base_bdevs": 4, 00:27:15.009 "num_base_bdevs_discovered": 4, 00:27:15.009 "num_base_bdevs_operational": 4, 00:27:15.009 "base_bdevs_list": [ 00:27:15.009 { 00:27:15.009 "name": "BaseBdev1", 00:27:15.009 "uuid": "13984b5c-feb2-54b4-b859-6f0340b3266a", 00:27:15.009 "is_configured": true, 00:27:15.009 "data_offset": 0, 00:27:15.009 "data_size": 65536 00:27:15.009 }, 00:27:15.010 { 00:27:15.010 "name": "BaseBdev2", 00:27:15.010 "uuid": "6eb65cd3-7a14-528f-be00-ffa995e8c4ca", 00:27:15.010 "is_configured": true, 00:27:15.010 "data_offset": 0, 00:27:15.010 "data_size": 65536 00:27:15.010 }, 00:27:15.010 { 00:27:15.010 "name": "BaseBdev3", 00:27:15.010 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:15.010 "is_configured": true, 00:27:15.010 "data_offset": 0, 00:27:15.010 "data_size": 65536 00:27:15.010 }, 00:27:15.010 { 00:27:15.010 "name": "BaseBdev4", 00:27:15.010 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:15.010 "is_configured": true, 00:27:15.010 "data_offset": 0, 00:27:15.010 "data_size": 65536 00:27:15.010 } 00:27:15.010 ] 00:27:15.010 }' 00:27:15.010 00:11:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.010 00:11:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:15.268 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:15.268 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:27:15.527 [2024-07-25 00:11:11.317151] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:15.527 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:27:15.527 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.527 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:15.784 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:27:15.785 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:27:15.785 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:15.785 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:16.042 [2024-07-25 00:11:11.687911] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 00:27:16.042 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:16.042 Zero copy mechanism will not be used. 00:27:16.042 Running I/O for 60 seconds... 
00:27:16.042 [2024-07-25 00:11:11.759488] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:16.042 [2024-07-25 00:11:11.759981] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005d40 00:27:16.042 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:16.042 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:16.042 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:16.042 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:16.042 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:16.042 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:16.042 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:16.042 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:16.042 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:16.042 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:16.043 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.043 00:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.300 00:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:16.300 "name": "raid_bdev1", 00:27:16.300 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:16.300 "strip_size_kb": 0, 00:27:16.300 "state": "online", 00:27:16.300 "raid_level": "raid1", 00:27:16.300 "superblock": false, 00:27:16.301 "num_base_bdevs": 4, 00:27:16.301 "num_base_bdevs_discovered": 3, 00:27:16.301 "num_base_bdevs_operational": 3, 00:27:16.301 "base_bdevs_list": [ 00:27:16.301 { 00:27:16.301 "name": null, 00:27:16.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.301 "is_configured": false, 00:27:16.301 "data_offset": 0, 00:27:16.301 "data_size": 65536 00:27:16.301 }, 00:27:16.301 { 00:27:16.301 "name": "BaseBdev2", 00:27:16.301 "uuid": "6eb65cd3-7a14-528f-be00-ffa995e8c4ca", 00:27:16.301 "is_configured": true, 00:27:16.301 "data_offset": 0, 00:27:16.301 "data_size": 65536 00:27:16.301 }, 00:27:16.301 { 00:27:16.301 "name": "BaseBdev3", 00:27:16.301 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:16.301 "is_configured": true, 00:27:16.301 "data_offset": 0, 00:27:16.301 "data_size": 65536 00:27:16.301 }, 00:27:16.301 { 00:27:16.301 "name": "BaseBdev4", 00:27:16.301 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:16.301 "is_configured": true, 00:27:16.301 "data_offset": 0, 00:27:16.301 "data_size": 65536 00:27:16.301 } 00:27:16.301 ] 00:27:16.301 }' 00:27:16.301 00:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:16.301 00:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:16.867 00:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:16.867 [2024-07-25 00:11:12.671336] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:27:16.867 00:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:16.867 [2024-07-25 00:11:12.722470] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005e10 00:27:16.867 [2024-07-25 00:11:12.724868] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:17.125 [2024-07-25 00:11:12.835344] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:17.125 [2024-07-25 00:11:12.836242] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:17.125 [2024-07-25 00:11:12.948141] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:17.125 [2024-07-25 00:11:12.948648] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:17.383 [2024-07-25 00:11:13.183862] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:17.383 [2024-07-25 00:11:13.184584] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:17.639 [2024-07-25 00:11:13.315773] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:17.897 [2024-07-25 00:11:13.591344] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:17.897 00:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:17.897 00:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:17.897 00:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:17.897 00:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:17.897 00:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:17.897 00:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.897 00:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.155 [2024-07-25 00:11:13.821733] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:18.155 00:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:18.155 "name": "raid_bdev1", 00:27:18.155 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:18.155 "strip_size_kb": 0, 00:27:18.155 "state": "online", 00:27:18.155 "raid_level": "raid1", 00:27:18.155 "superblock": false, 00:27:18.155 "num_base_bdevs": 4, 00:27:18.155 "num_base_bdevs_discovered": 4, 00:27:18.155 "num_base_bdevs_operational": 4, 00:27:18.155 "process": { 00:27:18.155 "type": "rebuild", 00:27:18.155 "target": "spare", 00:27:18.155 "progress": { 00:27:18.155 "blocks": 18432, 00:27:18.155 "percent": 28 00:27:18.155 } 00:27:18.155 }, 00:27:18.155 "base_bdevs_list": [ 00:27:18.155 { 00:27:18.155 "name": "spare", 00:27:18.155 "uuid": "c8f0b11e-454a-5775-addc-37db43c66720", 00:27:18.155 "is_configured": true, 00:27:18.155 
"data_offset": 0, 00:27:18.155 "data_size": 65536 00:27:18.155 }, 00:27:18.155 { 00:27:18.155 "name": "BaseBdev2", 00:27:18.155 "uuid": "6eb65cd3-7a14-528f-be00-ffa995e8c4ca", 00:27:18.155 "is_configured": true, 00:27:18.155 "data_offset": 0, 00:27:18.155 "data_size": 65536 00:27:18.155 }, 00:27:18.155 { 00:27:18.155 "name": "BaseBdev3", 00:27:18.155 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:18.155 "is_configured": true, 00:27:18.155 "data_offset": 0, 00:27:18.155 "data_size": 65536 00:27:18.155 }, 00:27:18.155 { 00:27:18.155 "name": "BaseBdev4", 00:27:18.155 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:18.155 "is_configured": true, 00:27:18.155 "data_offset": 0, 00:27:18.155 "data_size": 65536 00:27:18.155 } 00:27:18.155 ] 00:27:18.155 }' 00:27:18.155 00:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:18.155 00:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:18.155 00:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:18.155 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:18.155 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:18.413 [2024-07-25 00:11:14.176543] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:18.413 [2024-07-25 00:11:14.176792] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:18.413 [2024-07-25 00:11:14.254814] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:18.671 [2024-07-25 00:11:14.412996] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:18.671 [2024-07-25 00:11:14.416983] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.671 [2024-07-25 00:11:14.417030] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:18.671 [2024-07-25 00:11:14.417048] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:18.671 [2024-07-25 00:11:14.451637] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005d40 00:27:18.671 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:18.671 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:18.671 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:18.671 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:18.671 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:18.671 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:18.671 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:18.671 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:18.671 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:18.671 00:11:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:18.671 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.671 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.929 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:18.929 "name": "raid_bdev1", 00:27:18.929 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:18.929 "strip_size_kb": 0, 00:27:18.929 "state": "online", 00:27:18.929 "raid_level": "raid1", 00:27:18.929 "superblock": false, 00:27:18.929 "num_base_bdevs": 4, 00:27:18.929 "num_base_bdevs_discovered": 3, 00:27:18.929 "num_base_bdevs_operational": 3, 00:27:18.929 "base_bdevs_list": [ 00:27:18.929 { 00:27:18.929 "name": null, 00:27:18.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.929 "is_configured": false, 00:27:18.929 "data_offset": 0, 00:27:18.929 "data_size": 65536 00:27:18.929 }, 00:27:18.929 { 00:27:18.929 "name": "BaseBdev2", 00:27:18.929 "uuid": "6eb65cd3-7a14-528f-be00-ffa995e8c4ca", 00:27:18.929 "is_configured": true, 00:27:18.929 "data_offset": 0, 00:27:18.929 "data_size": 65536 00:27:18.929 }, 00:27:18.929 { 00:27:18.929 "name": "BaseBdev3", 00:27:18.929 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:18.929 "is_configured": true, 00:27:18.929 "data_offset": 0, 00:27:18.929 "data_size": 65536 00:27:18.929 }, 00:27:18.929 { 00:27:18.929 "name": "BaseBdev4", 00:27:18.929 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:18.929 "is_configured": true, 00:27:18.929 "data_offset": 0, 00:27:18.929 "data_size": 65536 00:27:18.929 } 00:27:18.929 ] 00:27:18.929 }' 00:27:18.929 00:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:18.929 00:11:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:19.495 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:19.495 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:19.495 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:19.495 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:19.495 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:19.495 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.495 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.752 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:19.752 "name": "raid_bdev1", 00:27:19.752 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:19.752 "strip_size_kb": 0, 00:27:19.752 "state": "online", 00:27:19.752 "raid_level": "raid1", 00:27:19.752 "superblock": false, 00:27:19.752 "num_base_bdevs": 4, 00:27:19.752 "num_base_bdevs_discovered": 3, 00:27:19.752 "num_base_bdevs_operational": 3, 00:27:19.752 "base_bdevs_list": [ 00:27:19.752 { 00:27:19.752 "name": null, 00:27:19.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:19.752 "is_configured": false, 00:27:19.752 "data_offset": 0, 00:27:19.752 "data_size": 65536 00:27:19.752 
}, 00:27:19.752 { 00:27:19.752 "name": "BaseBdev2", 00:27:19.752 "uuid": "6eb65cd3-7a14-528f-be00-ffa995e8c4ca", 00:27:19.752 "is_configured": true, 00:27:19.752 "data_offset": 0, 00:27:19.752 "data_size": 65536 00:27:19.752 }, 00:27:19.752 { 00:27:19.752 "name": "BaseBdev3", 00:27:19.752 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:19.752 "is_configured": true, 00:27:19.752 "data_offset": 0, 00:27:19.752 "data_size": 65536 00:27:19.752 }, 00:27:19.752 { 00:27:19.752 "name": "BaseBdev4", 00:27:19.752 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:19.752 "is_configured": true, 00:27:19.752 "data_offset": 0, 00:27:19.752 "data_size": 65536 00:27:19.752 } 00:27:19.752 ] 00:27:19.752 }' 00:27:19.752 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:19.752 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:19.752 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:19.752 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:19.752 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:20.010 [2024-07-25 00:11:15.659733] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:20.010 00:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:20.010 [2024-07-25 00:11:15.720724] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ee0 00:27:20.010 [2024-07-25 00:11:15.723071] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:20.010 [2024-07-25 00:11:15.845021] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:20.267 [2024-07-25 00:11:15.976389] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:20.267 [2024-07-25 00:11:15.976973] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:20.524 [2024-07-25 00:11:16.319074] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:20.782 [2024-07-25 00:11:16.559465] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:21.040 00:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:21.040 00:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:21.040 00:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:21.040 00:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:21.040 00:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:21.040 00:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.040 00:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.040 [2024-07-25 00:11:16.889210] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:21.298 00:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:21.298 "name": "raid_bdev1", 00:27:21.298 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:21.298 "strip_size_kb": 0, 00:27:21.298 "state": "online", 00:27:21.298 "raid_level": "raid1", 00:27:21.298 "superblock": false, 00:27:21.298 "num_base_bdevs": 4, 00:27:21.298 "num_base_bdevs_discovered": 4, 00:27:21.298 "num_base_bdevs_operational": 4, 00:27:21.298 "process": { 00:27:21.298 "type": "rebuild", 00:27:21.298 "target": "spare", 00:27:21.298 "progress": { 00:27:21.298 "blocks": 14336, 00:27:21.298 "percent": 21 00:27:21.298 } 00:27:21.298 }, 00:27:21.298 "base_bdevs_list": [ 00:27:21.298 { 00:27:21.298 "name": "spare", 00:27:21.298 "uuid": "c8f0b11e-454a-5775-addc-37db43c66720", 00:27:21.298 "is_configured": true, 00:27:21.298 "data_offset": 0, 00:27:21.298 "data_size": 65536 00:27:21.298 }, 00:27:21.298 { 00:27:21.298 "name": "BaseBdev2", 00:27:21.298 "uuid": "6eb65cd3-7a14-528f-be00-ffa995e8c4ca", 00:27:21.298 "is_configured": true, 00:27:21.298 "data_offset": 0, 00:27:21.298 "data_size": 65536 00:27:21.298 }, 00:27:21.298 { 00:27:21.298 "name": "BaseBdev3", 00:27:21.298 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:21.298 "is_configured": true, 00:27:21.298 "data_offset": 0, 00:27:21.298 "data_size": 65536 00:27:21.298 }, 00:27:21.298 { 00:27:21.298 "name": "BaseBdev4", 00:27:21.298 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:21.298 "is_configured": true, 00:27:21.298 "data_offset": 0, 00:27:21.298 "data_size": 65536 00:27:21.298 } 00:27:21.298 ] 00:27:21.298 }' 00:27:21.298 00:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:21.298 00:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:21.298 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:21.298 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:21.298 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:27:21.298 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:27:21.298 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:21.298 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:27:21.298 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:21.298 [2024-07-25 00:11:17.101715] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:21.298 [2024-07-25 00:11:17.101977] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:21.556 [2024-07-25 00:11:17.264675] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:21.556 [2024-07-25 00:11:17.383708] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005d40 00:27:21.556 [2024-07-25 00:11:17.383926] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ee0 00:27:21.556 00:11:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:27:21.556 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:27:21.556 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:21.556 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:21.556 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:21.556 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:21.556 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:21.556 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.556 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.813 [2024-07-25 00:11:17.531906] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:22.071 "name": "raid_bdev1", 00:27:22.071 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:22.071 "strip_size_kb": 0, 00:27:22.071 "state": "online", 00:27:22.071 "raid_level": "raid1", 00:27:22.071 "superblock": false, 00:27:22.071 "num_base_bdevs": 4, 00:27:22.071 "num_base_bdevs_discovered": 3, 00:27:22.071 "num_base_bdevs_operational": 3, 00:27:22.071 "process": { 00:27:22.071 "type": "rebuild", 00:27:22.071 "target": "spare", 00:27:22.071 "progress": { 00:27:22.071 "blocks": 24576, 00:27:22.071 "percent": 37 00:27:22.071 } 00:27:22.071 }, 00:27:22.071 "base_bdevs_list": [ 00:27:22.071 { 00:27:22.071 "name": "spare", 00:27:22.071 "uuid": "c8f0b11e-454a-5775-addc-37db43c66720", 00:27:22.071 "is_configured": true, 00:27:22.071 "data_offset": 0, 00:27:22.071 "data_size": 65536 00:27:22.071 }, 00:27:22.071 { 00:27:22.071 "name": null, 00:27:22.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.071 "is_configured": false, 00:27:22.071 "data_offset": 0, 00:27:22.071 "data_size": 65536 00:27:22.071 }, 00:27:22.071 { 00:27:22.071 "name": "BaseBdev3", 00:27:22.071 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:22.071 "is_configured": true, 00:27:22.071 "data_offset": 0, 00:27:22.071 "data_size": 65536 00:27:22.071 }, 00:27:22.071 { 00:27:22.071 "name": "BaseBdev4", 00:27:22.071 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:22.071 "is_configured": true, 00:27:22.071 "data_offset": 0, 00:27:22.071 "data_size": 65536 00:27:22.071 } 00:27:22.071 ] 00:27:22.071 }' 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=867 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:22.071 00:11:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.071 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.071 [2024-07-25 00:11:17.764824] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:22.071 [2024-07-25 00:11:17.765349] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:22.329 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:22.329 "name": "raid_bdev1", 00:27:22.329 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:22.329 "strip_size_kb": 0, 00:27:22.329 "state": "online", 00:27:22.329 "raid_level": "raid1", 00:27:22.329 "superblock": false, 00:27:22.329 "num_base_bdevs": 4, 00:27:22.329 "num_base_bdevs_discovered": 3, 00:27:22.329 "num_base_bdevs_operational": 3, 00:27:22.329 "process": { 00:27:22.329 "type": "rebuild", 00:27:22.329 "target": "spare", 00:27:22.329 "progress": { 00:27:22.329 "blocks": 26624, 00:27:22.329 "percent": 40 00:27:22.329 } 00:27:22.329 }, 00:27:22.329 "base_bdevs_list": [ 00:27:22.329 { 00:27:22.329 "name": "spare", 00:27:22.329 "uuid": "c8f0b11e-454a-5775-addc-37db43c66720", 00:27:22.329 "is_configured": true, 00:27:22.329 "data_offset": 0, 00:27:22.329 "data_size": 65536 00:27:22.329 }, 00:27:22.329 { 00:27:22.329 "name": null, 00:27:22.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.329 "is_configured": false, 00:27:22.329 "data_offset": 0, 00:27:22.329 "data_size": 65536 00:27:22.329 }, 00:27:22.329 { 00:27:22.329 "name": "BaseBdev3", 00:27:22.329 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:22.329 "is_configured": true, 00:27:22.329 "data_offset": 0, 00:27:22.329 "data_size": 65536 00:27:22.329 }, 00:27:22.329 { 00:27:22.329 "name": "BaseBdev4", 00:27:22.329 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:22.329 "is_configured": true, 00:27:22.329 "data_offset": 0, 00:27:22.329 "data_size": 65536 00:27:22.329 } 00:27:22.329 ] 00:27:22.329 }' 00:27:22.329 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:22.329 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:22.329 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:22.329 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.329 00:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:22.329 [2024-07-25 00:11:18.001657] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:22.895 [2024-07-25 00:11:18.510558] 
bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:23.154 [2024-07-25 00:11:18.859296] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:27:23.154 [2024-07-25 00:11:18.860187] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:27:23.154 00:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:23.154 00:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:23.154 00:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:23.154 00:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:23.154 00:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:23.154 00:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:23.154 00:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.154 00:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.412 [2024-07-25 00:11:19.099589] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:23.412 00:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:23.412 "name": "raid_bdev1", 00:27:23.412 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:23.412 "strip_size_kb": 0, 00:27:23.412 "state": "online", 00:27:23.412 "raid_level": "raid1", 00:27:23.412 "superblock": false, 00:27:23.412 "num_base_bdevs": 4, 00:27:23.412 "num_base_bdevs_discovered": 3, 00:27:23.412 "num_base_bdevs_operational": 3, 00:27:23.412 "process": { 00:27:23.412 "type": "rebuild", 00:27:23.412 "target": "spare", 00:27:23.412 "progress": { 00:27:23.412 "blocks": 40960, 00:27:23.412 "percent": 62 00:27:23.412 } 00:27:23.412 }, 00:27:23.412 "base_bdevs_list": [ 00:27:23.412 { 00:27:23.412 "name": "spare", 00:27:23.412 "uuid": "c8f0b11e-454a-5775-addc-37db43c66720", 00:27:23.412 "is_configured": true, 00:27:23.412 "data_offset": 0, 00:27:23.412 "data_size": 65536 00:27:23.412 }, 00:27:23.412 { 00:27:23.412 "name": null, 00:27:23.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.412 "is_configured": false, 00:27:23.412 "data_offset": 0, 00:27:23.412 "data_size": 65536 00:27:23.412 }, 00:27:23.412 { 00:27:23.412 "name": "BaseBdev3", 00:27:23.412 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:23.412 "is_configured": true, 00:27:23.412 "data_offset": 0, 00:27:23.412 "data_size": 65536 00:27:23.412 }, 00:27:23.412 { 00:27:23.412 "name": "BaseBdev4", 00:27:23.412 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:23.412 "is_configured": true, 00:27:23.412 "data_offset": 0, 00:27:23.412 "data_size": 65536 00:27:23.412 } 00:27:23.412 ] 00:27:23.412 }' 00:27:23.412 00:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:23.412 00:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:23.412 00:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
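The trace above is bdev_raid.sh's rebuild-progress poll: once a second it re-reads the raid bdev over the RPC socket and stays in the loop while jq reports an active rebuild targeting the spare. A minimal standalone sketch of that idiom follows; the socket path, RPC name, and jq filters are copied from this trace, while the 60-iteration cap and the progress echo are illustrative additions, not part of the script.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for _ in $(seq 1 60); do
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    # leave the loop as soon as no rebuild targeting the spare is in flight
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
    echo "rebuild progress: $(jq -r '.process.progress.blocks' <<< "$info") blocks"
    sleep 1
done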
00:27:23.412 00:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:23.412 00:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:24.347 [2024-07-25 00:11:20.202952] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:27:24.605 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:24.605 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:24.605 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:24.605 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:24.605 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:24.605 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:24.605 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.605 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.864 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:24.864 "name": "raid_bdev1", 00:27:24.864 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:24.864 "strip_size_kb": 0, 00:27:24.864 "state": "online", 00:27:24.864 "raid_level": "raid1", 00:27:24.864 "superblock": false, 00:27:24.864 "num_base_bdevs": 4, 00:27:24.864 "num_base_bdevs_discovered": 3, 00:27:24.864 "num_base_bdevs_operational": 3, 00:27:24.864 "process": { 00:27:24.864 "type": "rebuild", 00:27:24.864 "target": "spare", 00:27:24.864 "progress": { 00:27:24.864 "blocks": 61440, 00:27:24.864 "percent": 93 00:27:24.864 } 00:27:24.864 }, 00:27:24.864 "base_bdevs_list": [ 00:27:24.864 { 00:27:24.864 "name": "spare", 00:27:24.864 "uuid": "c8f0b11e-454a-5775-addc-37db43c66720", 00:27:24.864 "is_configured": true, 00:27:24.864 "data_offset": 0, 00:27:24.864 "data_size": 65536 00:27:24.864 }, 00:27:24.864 { 00:27:24.864 "name": null, 00:27:24.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.864 "is_configured": false, 00:27:24.864 "data_offset": 0, 00:27:24.864 "data_size": 65536 00:27:24.864 }, 00:27:24.864 { 00:27:24.864 "name": "BaseBdev3", 00:27:24.864 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:24.864 "is_configured": true, 00:27:24.864 "data_offset": 0, 00:27:24.864 "data_size": 65536 00:27:24.864 }, 00:27:24.864 { 00:27:24.864 "name": "BaseBdev4", 00:27:24.864 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:24.864 "is_configured": true, 00:27:24.864 "data_offset": 0, 00:27:24.864 "data_size": 65536 00:27:24.864 } 00:27:24.864 ] 00:27:24.864 }' 00:27:24.864 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:24.864 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:24.864 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:24.864 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:24.864 00:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:24.864 [2024-07-25 00:11:20.650073] 
bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:25.122 [2024-07-25 00:11:20.750051] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:25.122 [2024-07-25 00:11:20.751799] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:25.688 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:25.688 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.688 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:25.688 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:25.688 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:25.689 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:25.689 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.689 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:26.255 "name": "raid_bdev1", 00:27:26.255 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:26.255 "strip_size_kb": 0, 00:27:26.255 "state": "online", 00:27:26.255 "raid_level": "raid1", 00:27:26.255 "superblock": false, 00:27:26.255 "num_base_bdevs": 4, 00:27:26.255 "num_base_bdevs_discovered": 3, 00:27:26.255 "num_base_bdevs_operational": 3, 00:27:26.255 "base_bdevs_list": [ 00:27:26.255 { 00:27:26.255 "name": "spare", 00:27:26.255 "uuid": "c8f0b11e-454a-5775-addc-37db43c66720", 00:27:26.255 "is_configured": true, 00:27:26.255 "data_offset": 0, 00:27:26.255 "data_size": 65536 00:27:26.255 }, 00:27:26.255 { 00:27:26.255 "name": null, 00:27:26.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.255 "is_configured": false, 00:27:26.255 "data_offset": 0, 00:27:26.255 "data_size": 65536 00:27:26.255 }, 00:27:26.255 { 00:27:26.255 "name": "BaseBdev3", 00:27:26.255 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:26.255 "is_configured": true, 00:27:26.255 "data_offset": 0, 00:27:26.255 "data_size": 65536 00:27:26.255 }, 00:27:26.255 { 00:27:26.255 "name": "BaseBdev4", 00:27:26.255 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:26.255 "is_configured": true, 00:27:26.255 "data_offset": 0, 00:27:26.255 "data_size": 65536 00:27:26.255 } 00:27:26.255 ] 00:27:26.255 }' 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:26.255 00:11:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.255 00:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.514 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:26.514 "name": "raid_bdev1", 00:27:26.515 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:26.515 "strip_size_kb": 0, 00:27:26.515 "state": "online", 00:27:26.515 "raid_level": "raid1", 00:27:26.515 "superblock": false, 00:27:26.515 "num_base_bdevs": 4, 00:27:26.515 "num_base_bdevs_discovered": 3, 00:27:26.515 "num_base_bdevs_operational": 3, 00:27:26.515 "base_bdevs_list": [ 00:27:26.515 { 00:27:26.515 "name": "spare", 00:27:26.515 "uuid": "c8f0b11e-454a-5775-addc-37db43c66720", 00:27:26.515 "is_configured": true, 00:27:26.515 "data_offset": 0, 00:27:26.515 "data_size": 65536 00:27:26.515 }, 00:27:26.515 { 00:27:26.515 "name": null, 00:27:26.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.515 "is_configured": false, 00:27:26.515 "data_offset": 0, 00:27:26.515 "data_size": 65536 00:27:26.515 }, 00:27:26.515 { 00:27:26.515 "name": "BaseBdev3", 00:27:26.515 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:26.515 "is_configured": true, 00:27:26.515 "data_offset": 0, 00:27:26.515 "data_size": 65536 00:27:26.515 }, 00:27:26.515 { 00:27:26.515 "name": "BaseBdev4", 00:27:26.515 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:26.515 "is_configured": true, 00:27:26.515 "data_offset": 0, 00:27:26.515 "data_size": 65536 00:27:26.515 } 00:27:26.515 ] 00:27:26.515 }' 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 
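Here the script asserts the steady state expected once the rebuild has finished with BaseBdev2 still removed: no background process, array online at raid1, and 3 of the 4 slots populated. A condensed sketch of that assertion, reusing the same RPC call and jq selector as above; the expected values mirror the `online raid1 0 3` arguments visible in the trace.

info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
       bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r '.process.type // "none"' <<< "$info") == none ]]      # no rebuild running
[[ $(jq -r '.state' <<< "$info") == online ]]
[[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 3 ]]
[[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq 3 ]]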
00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.515 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.774 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:26.774 "name": "raid_bdev1", 00:27:26.774 "uuid": "d4beb793-4a94-49a1-81c0-5e3b91aba994", 00:27:26.774 "strip_size_kb": 0, 00:27:26.774 "state": "online", 00:27:26.774 "raid_level": "raid1", 00:27:26.774 "superblock": false, 00:27:26.774 "num_base_bdevs": 4, 00:27:26.774 "num_base_bdevs_discovered": 3, 00:27:26.774 "num_base_bdevs_operational": 3, 00:27:26.774 "base_bdevs_list": [ 00:27:26.774 { 00:27:26.774 "name": "spare", 00:27:26.774 "uuid": "c8f0b11e-454a-5775-addc-37db43c66720", 00:27:26.774 "is_configured": true, 00:27:26.774 "data_offset": 0, 00:27:26.774 "data_size": 65536 00:27:26.774 }, 00:27:26.774 { 00:27:26.774 "name": null, 00:27:26.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.774 "is_configured": false, 00:27:26.774 "data_offset": 0, 00:27:26.774 "data_size": 65536 00:27:26.774 }, 00:27:26.774 { 00:27:26.774 "name": "BaseBdev3", 00:27:26.774 "uuid": "f0951636-a0c2-5419-b296-e85c581f6a6a", 00:27:26.774 "is_configured": true, 00:27:26.774 "data_offset": 0, 00:27:26.774 "data_size": 65536 00:27:26.774 }, 00:27:26.774 { 00:27:26.774 "name": "BaseBdev4", 00:27:26.774 "uuid": "39919fcb-21e6-586d-8947-144b62eba133", 00:27:26.774 "is_configured": true, 00:27:26.774 "data_offset": 0, 00:27:26.775 "data_size": 65536 00:27:26.775 } 00:27:26.775 ] 00:27:26.775 }' 00:27:26.775 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:26.775 00:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:27.033 00:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:27.293 [2024-07-25 00:11:23.053339] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:27.293 [2024-07-25 00:11:23.053615] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:27.293 00:27:27.293 Latency(us) 00:27:27.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.293 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:27.293 raid_bdev1 : 11.40 83.26 249.77 0.00 0.00 16517.93 281.13 118203.11 00:27:27.293 =================================================================================================================== 00:27:27.293 Total : 83.26 249.77 0.00 0.00 16517.93 281.13 118203.11 00:27:27.293 [2024-07-25 00:11:23.111232] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:27.293 [2024-07-25 00:11:23.111402] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:27.293 0 00:27:27.293 [2024-07-25 00:11:23.111622] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:27.293 [2024-07-25 00:11:23.111642] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:27:27.293 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.293 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:27:27.862 /dev/nbd0 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:27.862 1+0 records in 00:27:27.862 1+0 records out 00:27:27.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615761 s, 6.7 MB/s 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # return 0 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # continue 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:27.862 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:27:28.121 /dev/nbd1 00:27:28.380 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:28.380 00:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:28.380 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:28.380 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:27:28.380 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:28.380 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:28.380 00:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:28.380 1+0 records in 00:27:28.380 1+0 records out 00:27:28.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416544 s, 9.8 MB/s 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:27:28.380 
00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:28.380 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:28.639 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:27:28.898 /dev/nbd1 00:27:28.898 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:28.898 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:28.898 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:28.898 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:27:28.898 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:28.898 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:28.898 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:28.898 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:27:28.898 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:28.898 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:28.898 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:29.157 1+0 records in 00:27:29.157 1+0 records out 00:27:29.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435576 s, 9.4 MB/s 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:29.157 00:11:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
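This stretch of the trace is the data-integrity pass: the rebuilt spare is exported on /dev/nbd0 and compared byte-for-byte against each surviving base bdev on /dev/nbd1, one at a time. A condensed sketch of that flow is below; the RPC calls, the /proc/partitions readiness test, and the cmp invocation are taken verbatim from the trace, while the simple until-loop stands in for waitfornbd() and /dev/null replaces the scratch file (nbdtest) the script actually dd's into.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" nbd_start_disk spare /dev/nbd0
for bdev in BaseBdev3 BaseBdev4; do       # BaseBdev2 was removed, so its slot is skipped
    "$rpc" -s "$sock" nbd_start_disk "$bdev" /dev/nbd1
    until grep -q -w nbd1 /proc/partitions; do sleep 0.1; done   # wait for the device node
    dd if=/dev/nbd1 of=/dev/null bs=4096 count=1 iflag=direct    # probe one direct-I/O read
    cmp -i 0 /dev/nbd0 /dev/nbd1                                 # mirrors must match bytewise
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
done
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0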
00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:29.416 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 100887 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 100887 ']' 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 100887 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100887 00:27:29.675 killing process with pid 100887 00:27:29.675 Received shutdown signal, test time was about 13.816073 seconds 00:27:29.675 00:27:29.675 Latency(us) 00:27:29.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.675 =================================================================================================================== 00:27:29.675 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:29.675 
00:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100887' 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 100887 00:27:29.675 [2024-07-25 00:11:25.506672] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:29.675 00:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 100887 00:27:30.244 [2024-07-25 00:11:25.881224] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:27:31.622 00:27:31.622 real 0m20.757s 00:27:31.622 user 0m30.697s 00:27:31.622 sys 0m2.639s 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:31.622 ************************************ 00:27:31.622 END TEST raid_rebuild_test_io 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:31.622 ************************************ 00:27:31.622 00:11:27 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:27:31.622 00:11:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:31.622 00:11:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.622 00:11:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:31.622 ************************************ 00:27:31.622 START TEST raid_rebuild_test_sb_io 00:27:31.622 ************************************ 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:27:31.622 00:11:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=101394 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 101394 /var/tmp/spdk-raid.sock 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 101394 ']' 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:31.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:31.622 00:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:31.622 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:31.622 Zero copy mechanism will not be used. 00:27:31.622 [2024-07-25 00:11:27.305450] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:27:31.622 [2024-07-25 00:11:27.305755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101394 ] 00:27:31.881 [2024-07-25 00:11:27.509448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.153 [2024-07-25 00:11:27.761418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.153 [2024-07-25 00:11:27.991617] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:32.733 00:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:32.733 00:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:27:32.733 00:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:32.733 00:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:32.733 BaseBdev1_malloc 00:27:32.733 00:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:32.992 [2024-07-25 00:11:28.816734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:32.992 [2024-07-25 00:11:28.817074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:32.992 [2024-07-25 00:11:28.817122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:27:32.992 [2024-07-25 00:11:28.817142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:32.992 [2024-07-25 00:11:28.819790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:32.992 [2024-07-25 00:11:28.819868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:32.992 BaseBdev1 00:27:32.992 00:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:32.992 00:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:33.559 BaseBdev2_malloc 00:27:33.559 00:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:33.559 [2024-07-25 00:11:29.400759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:33.559 [2024-07-25 00:11:29.401062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.559 [2024-07-25 00:11:29.401142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:27:33.559 [2024-07-25 00:11:29.401394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.559 [2024-07-25 00:11:29.404432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.559 [2024-07-25 00:11:29.404658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:33.559 BaseBdev2 00:27:33.817 00:11:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:33.817 00:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:34.076 BaseBdev3_malloc 00:27:34.076 00:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:34.335 [2024-07-25 00:11:30.007019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:34.335 [2024-07-25 00:11:30.007103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.335 [2024-07-25 00:11:30.007137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:27:34.335 [2024-07-25 00:11:30.007156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.335 [2024-07-25 00:11:30.010028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.335 [2024-07-25 00:11:30.010127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:34.335 BaseBdev3 00:27:34.335 00:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:34.335 00:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:34.594 BaseBdev4_malloc 00:27:34.594 00:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:27:34.853 [2024-07-25 00:11:30.539586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:27:34.853 [2024-07-25 00:11:30.539746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.853 [2024-07-25 00:11:30.539781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:27:34.853 [2024-07-25 00:11:30.539799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.853 [2024-07-25 00:11:30.542302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.853 [2024-07-25 00:11:30.542380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:34.853 BaseBdev4 00:27:34.853 00:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:35.112 spare_malloc 00:27:35.112 00:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:35.370 spare_delay 00:27:35.370 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:35.629 [2024-07-25 00:11:31.317441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:35.629 [2024-07-25 00:11:31.317548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.629 [2024-07-25 00:11:31.317579] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:27:35.629 [2024-07-25 00:11:31.317594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.629 [2024-07-25 00:11:31.320731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.629 [2024-07-25 00:11:31.320809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:35.629 spare 00:27:35.629 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:27:35.888 [2024-07-25 00:11:31.533577] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:35.888 [2024-07-25 00:11:31.535692] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:35.888 [2024-07-25 00:11:31.535793] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:35.888 [2024-07-25 00:11:31.535883] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:35.888 [2024-07-25 00:11:31.536197] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:27:35.888 [2024-07-25 00:11:31.536233] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:35.888 [2024-07-25 00:11:31.536381] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:27:35.888 [2024-07-25 00:11:31.536793] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:27:35.888 [2024-07-25 00:11:31.536834] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:27:35.888 [2024-07-25 00:11:31.537045] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.888 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.147 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:36.147 
"name": "raid_bdev1", 00:27:36.147 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:36.147 "strip_size_kb": 0, 00:27:36.147 "state": "online", 00:27:36.147 "raid_level": "raid1", 00:27:36.147 "superblock": true, 00:27:36.147 "num_base_bdevs": 4, 00:27:36.147 "num_base_bdevs_discovered": 4, 00:27:36.147 "num_base_bdevs_operational": 4, 00:27:36.147 "base_bdevs_list": [ 00:27:36.147 { 00:27:36.147 "name": "BaseBdev1", 00:27:36.147 "uuid": "8e71480c-b680-530f-b9f9-d5922ec661d0", 00:27:36.147 "is_configured": true, 00:27:36.147 "data_offset": 2048, 00:27:36.147 "data_size": 63488 00:27:36.147 }, 00:27:36.147 { 00:27:36.147 "name": "BaseBdev2", 00:27:36.147 "uuid": "155b12ab-6f64-5452-b9e5-9e00cfa870b0", 00:27:36.147 "is_configured": true, 00:27:36.147 "data_offset": 2048, 00:27:36.147 "data_size": 63488 00:27:36.147 }, 00:27:36.147 { 00:27:36.147 "name": "BaseBdev3", 00:27:36.147 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:36.147 "is_configured": true, 00:27:36.147 "data_offset": 2048, 00:27:36.147 "data_size": 63488 00:27:36.147 }, 00:27:36.147 { 00:27:36.147 "name": "BaseBdev4", 00:27:36.147 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:36.147 "is_configured": true, 00:27:36.147 "data_offset": 2048, 00:27:36.147 "data_size": 63488 00:27:36.147 } 00:27:36.147 ] 00:27:36.147 }' 00:27:36.147 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:36.147 00:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:36.406 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:36.406 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:27:36.664 [2024-07-25 00:11:32.298176] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:36.664 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:27:36.664 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:36.664 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.923 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:27:36.923 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:27:36.923 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:36.923 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:36.923 [2024-07-25 00:11:32.709995] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 00:27:36.923 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:36.923 Zero copy mechanism will not be used. 00:27:36.923 Running I/O for 60 seconds... 
00:27:37.182 [2024-07-25 00:11:32.816968] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:37.183 [2024-07-25 00:11:32.817341] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005d40 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.183 00:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.442 00:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:37.442 "name": "raid_bdev1", 00:27:37.442 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:37.442 "strip_size_kb": 0, 00:27:37.442 "state": "online", 00:27:37.442 "raid_level": "raid1", 00:27:37.442 "superblock": true, 00:27:37.442 "num_base_bdevs": 4, 00:27:37.442 "num_base_bdevs_discovered": 3, 00:27:37.442 "num_base_bdevs_operational": 3, 00:27:37.442 "base_bdevs_list": [ 00:27:37.442 { 00:27:37.442 "name": null, 00:27:37.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.442 "is_configured": false, 00:27:37.442 "data_offset": 2048, 00:27:37.442 "data_size": 63488 00:27:37.442 }, 00:27:37.442 { 00:27:37.442 "name": "BaseBdev2", 00:27:37.442 "uuid": "155b12ab-6f64-5452-b9e5-9e00cfa870b0", 00:27:37.442 "is_configured": true, 00:27:37.442 "data_offset": 2048, 00:27:37.442 "data_size": 63488 00:27:37.442 }, 00:27:37.442 { 00:27:37.442 "name": "BaseBdev3", 00:27:37.442 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:37.442 "is_configured": true, 00:27:37.442 "data_offset": 2048, 00:27:37.442 "data_size": 63488 00:27:37.442 }, 00:27:37.442 { 00:27:37.442 "name": "BaseBdev4", 00:27:37.442 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:37.442 "is_configured": true, 00:27:37.442 "data_offset": 2048, 00:27:37.442 "data_size": 63488 00:27:37.442 } 00:27:37.442 ] 00:27:37.442 }' 00:27:37.442 00:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:37.442 00:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.701 00:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:37.960 [2024-07-25 
00:11:33.637682] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:37.960 00:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:37.960 [2024-07-25 00:11:33.706522] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005e10 00:27:37.960 [2024-07-25 00:11:33.708691] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:37.960 [2024-07-25 00:11:33.826553] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:38.219 [2024-07-25 00:11:33.827889] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:38.219 [2024-07-25 00:11:34.043474] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:38.219 [2024-07-25 00:11:34.044080] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:38.788 [2024-07-25 00:11:34.382375] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:38.788 [2024-07-25 00:11:34.383154] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:38.788 [2024-07-25 00:11:34.525246] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:39.046 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:39.046 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:39.046 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:39.046 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:39.046 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:39.046 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.046 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.046 [2024-07-25 00:11:34.763712] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:39.305 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:39.305 "name": "raid_bdev1", 00:27:39.305 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:39.305 "strip_size_kb": 0, 00:27:39.305 "state": "online", 00:27:39.305 "raid_level": "raid1", 00:27:39.305 "superblock": true, 00:27:39.305 "num_base_bdevs": 4, 00:27:39.305 "num_base_bdevs_discovered": 4, 00:27:39.305 "num_base_bdevs_operational": 4, 00:27:39.305 "process": { 00:27:39.305 "type": "rebuild", 00:27:39.305 "target": "spare", 00:27:39.305 "progress": { 00:27:39.305 "blocks": 14336, 00:27:39.305 "percent": 22 00:27:39.305 } 00:27:39.305 }, 00:27:39.305 "base_bdevs_list": [ 00:27:39.305 { 00:27:39.305 "name": "spare", 00:27:39.305 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:39.305 "is_configured": true, 00:27:39.305 "data_offset": 2048, 00:27:39.305 "data_size": 63488 00:27:39.305 }, 
00:27:39.305 { 00:27:39.305 "name": "BaseBdev2", 00:27:39.305 "uuid": "155b12ab-6f64-5452-b9e5-9e00cfa870b0", 00:27:39.305 "is_configured": true, 00:27:39.305 "data_offset": 2048, 00:27:39.305 "data_size": 63488 00:27:39.305 }, 00:27:39.305 { 00:27:39.305 "name": "BaseBdev3", 00:27:39.305 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:39.305 "is_configured": true, 00:27:39.305 "data_offset": 2048, 00:27:39.305 "data_size": 63488 00:27:39.305 }, 00:27:39.305 { 00:27:39.305 "name": "BaseBdev4", 00:27:39.305 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:39.305 "is_configured": true, 00:27:39.305 "data_offset": 2048, 00:27:39.305 "data_size": 63488 00:27:39.305 } 00:27:39.305 ] 00:27:39.305 }' 00:27:39.305 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:39.305 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:39.305 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:39.305 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:39.305 00:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:39.305 [2024-07-25 00:11:34.992290] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:39.563 [2024-07-25 00:11:35.218874] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:39.563 [2024-07-25 00:11:35.315136] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:39.563 [2024-07-25 00:11:35.423744] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:39.563 [2024-07-25 00:11:35.427133] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:39.563 [2024-07-25 00:11:35.427355] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:39.563 [2024-07-25 00:11:35.427382] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:39.823 [2024-07-25 00:11:35.448340] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005d40 00:27:39.823 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:39.823 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:39.823 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:39.823 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:39.823 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:39.823 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:39.823 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:39.823 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:39.823 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:39.823 00:11:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:39.823 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.823 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.085 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:40.085 "name": "raid_bdev1", 00:27:40.085 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:40.085 "strip_size_kb": 0, 00:27:40.085 "state": "online", 00:27:40.085 "raid_level": "raid1", 00:27:40.085 "superblock": true, 00:27:40.085 "num_base_bdevs": 4, 00:27:40.085 "num_base_bdevs_discovered": 3, 00:27:40.085 "num_base_bdevs_operational": 3, 00:27:40.085 "base_bdevs_list": [ 00:27:40.085 { 00:27:40.085 "name": null, 00:27:40.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.085 "is_configured": false, 00:27:40.085 "data_offset": 2048, 00:27:40.085 "data_size": 63488 00:27:40.085 }, 00:27:40.085 { 00:27:40.085 "name": "BaseBdev2", 00:27:40.085 "uuid": "155b12ab-6f64-5452-b9e5-9e00cfa870b0", 00:27:40.085 "is_configured": true, 00:27:40.085 "data_offset": 2048, 00:27:40.085 "data_size": 63488 00:27:40.085 }, 00:27:40.085 { 00:27:40.086 "name": "BaseBdev3", 00:27:40.086 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:40.086 "is_configured": true, 00:27:40.086 "data_offset": 2048, 00:27:40.086 "data_size": 63488 00:27:40.086 }, 00:27:40.086 { 00:27:40.086 "name": "BaseBdev4", 00:27:40.086 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:40.086 "is_configured": true, 00:27:40.086 "data_offset": 2048, 00:27:40.086 "data_size": 63488 00:27:40.086 } 00:27:40.086 ] 00:27:40.086 }' 00:27:40.086 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:40.086 00:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:40.345 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:40.345 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:40.345 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:40.345 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:40.345 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:40.345 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.345 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.604 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:40.604 "name": "raid_bdev1", 00:27:40.604 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:40.604 "strip_size_kb": 0, 00:27:40.604 "state": "online", 00:27:40.604 "raid_level": "raid1", 00:27:40.604 "superblock": true, 00:27:40.604 "num_base_bdevs": 4, 00:27:40.604 "num_base_bdevs_discovered": 3, 00:27:40.604 "num_base_bdevs_operational": 3, 00:27:40.604 "base_bdevs_list": [ 00:27:40.604 { 00:27:40.604 "name": null, 00:27:40.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.604 "is_configured": false, 00:27:40.604 
"data_offset": 2048, 00:27:40.604 "data_size": 63488 00:27:40.604 }, 00:27:40.604 { 00:27:40.604 "name": "BaseBdev2", 00:27:40.604 "uuid": "155b12ab-6f64-5452-b9e5-9e00cfa870b0", 00:27:40.604 "is_configured": true, 00:27:40.604 "data_offset": 2048, 00:27:40.604 "data_size": 63488 00:27:40.604 }, 00:27:40.604 { 00:27:40.604 "name": "BaseBdev3", 00:27:40.604 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:40.604 "is_configured": true, 00:27:40.604 "data_offset": 2048, 00:27:40.604 "data_size": 63488 00:27:40.604 }, 00:27:40.604 { 00:27:40.604 "name": "BaseBdev4", 00:27:40.604 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:40.604 "is_configured": true, 00:27:40.604 "data_offset": 2048, 00:27:40.604 "data_size": 63488 00:27:40.604 } 00:27:40.604 ] 00:27:40.604 }' 00:27:40.604 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:40.604 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:40.604 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:40.604 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:40.604 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:40.863 [2024-07-25 00:11:36.643822] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:40.863 00:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:40.863 [2024-07-25 00:11:36.713947] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ee0 00:27:40.863 [2024-07-25 00:11:36.716096] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:41.122 [2024-07-25 00:11:36.857065] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:41.379 [2024-07-25 00:11:37.075781] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:41.379 [2024-07-25 00:11:37.076349] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:41.679 [2024-07-25 00:11:37.410702] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:41.936 [2024-07-25 00:11:37.629681] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:41.936 [2024-07-25 00:11:37.629999] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:41.936 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:41.936 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:41.936 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:41.936 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:41.936 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:41.936 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.936 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.195 [2024-07-25 00:11:37.892178] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:42.195 [2024-07-25 00:11:37.893381] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:42.195 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.195 "name": "raid_bdev1", 00:27:42.195 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:42.195 "strip_size_kb": 0, 00:27:42.195 "state": "online", 00:27:42.195 "raid_level": "raid1", 00:27:42.195 "superblock": true, 00:27:42.195 "num_base_bdevs": 4, 00:27:42.195 "num_base_bdevs_discovered": 4, 00:27:42.195 "num_base_bdevs_operational": 4, 00:27:42.195 "process": { 00:27:42.195 "type": "rebuild", 00:27:42.195 "target": "spare", 00:27:42.195 "progress": { 00:27:42.195 "blocks": 14336, 00:27:42.195 "percent": 22 00:27:42.195 } 00:27:42.195 }, 00:27:42.195 "base_bdevs_list": [ 00:27:42.195 { 00:27:42.195 "name": "spare", 00:27:42.195 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:42.195 "is_configured": true, 00:27:42.195 "data_offset": 2048, 00:27:42.195 "data_size": 63488 00:27:42.195 }, 00:27:42.195 { 00:27:42.195 "name": "BaseBdev2", 00:27:42.195 "uuid": "155b12ab-6f64-5452-b9e5-9e00cfa870b0", 00:27:42.195 "is_configured": true, 00:27:42.195 "data_offset": 2048, 00:27:42.195 "data_size": 63488 00:27:42.195 }, 00:27:42.195 { 00:27:42.195 "name": "BaseBdev3", 00:27:42.195 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:42.195 "is_configured": true, 00:27:42.195 "data_offset": 2048, 00:27:42.195 "data_size": 63488 00:27:42.195 }, 00:27:42.195 { 00:27:42.195 "name": "BaseBdev4", 00:27:42.195 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:42.195 "is_configured": true, 00:27:42.195 "data_offset": 2048, 00:27:42.195 "data_size": 63488 00:27:42.195 } 00:27:42.195 ] 00:27:42.195 }' 00:27:42.195 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.195 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:42.195 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.195 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:42.195 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:27:42.195 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:27:42.195 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:27:42.195 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:27:42.195 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:42.195 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:27:42.195 00:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:42.453 [2024-07-25 00:11:38.104270] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:42.453 [2024-07-25 00:11:38.104527] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:42.453 [2024-07-25 00:11:38.171659] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:42.711 [2024-07-25 00:11:38.454383] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005d40 00:27:42.711 [2024-07-25 00:11:38.454438] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ee0 00:27:42.711 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:27:42.711 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:27:42.711 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:42.711 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.711 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:42.711 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:42.711 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.711 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.711 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.970 "name": "raid_bdev1", 00:27:42.970 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:42.970 "strip_size_kb": 0, 00:27:42.970 "state": "online", 00:27:42.970 "raid_level": "raid1", 00:27:42.970 "superblock": true, 00:27:42.970 "num_base_bdevs": 4, 00:27:42.970 "num_base_bdevs_discovered": 3, 00:27:42.970 "num_base_bdevs_operational": 3, 00:27:42.970 "process": { 00:27:42.970 "type": "rebuild", 00:27:42.970 "target": "spare", 00:27:42.970 "progress": { 00:27:42.970 "blocks": 20480, 00:27:42.970 "percent": 32 00:27:42.970 } 00:27:42.970 }, 00:27:42.970 "base_bdevs_list": [ 00:27:42.970 { 00:27:42.970 "name": "spare", 00:27:42.970 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:42.970 "is_configured": true, 00:27:42.970 "data_offset": 2048, 00:27:42.970 "data_size": 63488 00:27:42.970 }, 00:27:42.970 { 00:27:42.970 "name": null, 00:27:42.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.970 "is_configured": false, 00:27:42.970 "data_offset": 2048, 00:27:42.970 "data_size": 63488 00:27:42.970 }, 00:27:42.970 { 00:27:42.970 "name": "BaseBdev3", 00:27:42.970 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:42.970 "is_configured": true, 00:27:42.970 "data_offset": 2048, 00:27:42.970 "data_size": 63488 00:27:42.970 }, 00:27:42.970 { 00:27:42.970 "name": "BaseBdev4", 00:27:42.970 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:42.970 "is_configured": true, 00:27:42.970 "data_offset": 2048, 00:27:42.970 "data_size": 63488 00:27:42.970 } 00:27:42.970 ] 00:27:42.970 }' 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.970 00:11:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.970 [2024-07-25 00:11:38.714674] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:42.970 [2024-07-25 00:11:38.714985] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=888 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.970 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.228 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:43.228 "name": "raid_bdev1", 00:27:43.228 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:43.228 "strip_size_kb": 0, 00:27:43.228 "state": "online", 00:27:43.228 "raid_level": "raid1", 00:27:43.228 "superblock": true, 00:27:43.228 "num_base_bdevs": 4, 00:27:43.228 "num_base_bdevs_discovered": 3, 00:27:43.228 "num_base_bdevs_operational": 3, 00:27:43.228 "process": { 00:27:43.228 "type": "rebuild", 00:27:43.228 "target": "spare", 00:27:43.228 "progress": { 00:27:43.229 "blocks": 24576, 00:27:43.229 "percent": 38 00:27:43.229 } 00:27:43.229 }, 00:27:43.229 "base_bdevs_list": [ 00:27:43.229 { 00:27:43.229 "name": "spare", 00:27:43.229 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:43.229 "is_configured": true, 00:27:43.229 "data_offset": 2048, 00:27:43.229 "data_size": 63488 00:27:43.229 }, 00:27:43.229 { 00:27:43.229 "name": null, 00:27:43.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.229 "is_configured": false, 00:27:43.229 "data_offset": 2048, 00:27:43.229 "data_size": 63488 00:27:43.229 }, 00:27:43.229 { 00:27:43.229 "name": "BaseBdev3", 00:27:43.229 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:43.229 "is_configured": true, 00:27:43.229 "data_offset": 2048, 00:27:43.229 "data_size": 63488 00:27:43.229 }, 00:27:43.229 { 00:27:43.229 "name": "BaseBdev4", 00:27:43.229 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:43.229 "is_configured": true, 00:27:43.229 "data_offset": 2048, 00:27:43.229 "data_size": 63488 00:27:43.229 } 00:27:43.229 ] 00:27:43.229 }' 00:27:43.229 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:43.229 00:11:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:43.229 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:43.229 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:43.229 00:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:43.229 [2024-07-25 00:11:39.055799] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:43.487 [2024-07-25 00:11:39.181828] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:43.487 [2024-07-25 00:11:39.182579] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:43.746 [2024-07-25 00:11:39.519730] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:27:44.006 [2024-07-25 00:11:39.659358] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:44.265 [2024-07-25 00:11:39.992211] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:44.265 00:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:44.265 00:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:44.265 00:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:44.265 00:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:44.265 00:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:44.265 00:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:44.265 00:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.265 00:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.524 00:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:44.524 "name": "raid_bdev1", 00:27:44.524 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:44.524 "strip_size_kb": 0, 00:27:44.524 "state": "online", 00:27:44.524 "raid_level": "raid1", 00:27:44.524 "superblock": true, 00:27:44.524 "num_base_bdevs": 4, 00:27:44.524 "num_base_bdevs_discovered": 3, 00:27:44.524 "num_base_bdevs_operational": 3, 00:27:44.524 "process": { 00:27:44.524 "type": "rebuild", 00:27:44.524 "target": "spare", 00:27:44.524 "progress": { 00:27:44.524 "blocks": 43008, 00:27:44.524 "percent": 67 00:27:44.524 } 00:27:44.524 }, 00:27:44.524 "base_bdevs_list": [ 00:27:44.524 { 00:27:44.524 "name": "spare", 00:27:44.524 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:44.524 "is_configured": true, 00:27:44.524 "data_offset": 2048, 00:27:44.524 "data_size": 63488 00:27:44.524 }, 00:27:44.524 { 00:27:44.524 "name": null, 00:27:44.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.524 "is_configured": false, 00:27:44.524 "data_offset": 2048, 00:27:44.524 "data_size": 63488 00:27:44.524 }, 
00:27:44.524 { 00:27:44.524 "name": "BaseBdev3", 00:27:44.524 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:44.524 "is_configured": true, 00:27:44.524 "data_offset": 2048, 00:27:44.524 "data_size": 63488 00:27:44.524 }, 00:27:44.524 { 00:27:44.524 "name": "BaseBdev4", 00:27:44.524 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:44.524 "is_configured": true, 00:27:44.524 "data_offset": 2048, 00:27:44.524 "data_size": 63488 00:27:44.524 } 00:27:44.524 ] 00:27:44.524 }' 00:27:44.524 00:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:44.524 00:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:44.524 [2024-07-25 00:11:40.232090] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:27:44.524 00:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.524 00:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:44.524 00:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:44.782 [2024-07-25 00:11:40.575886] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:27:45.350 [2024-07-25 00:11:40.918092] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:27:45.350 [2024-07-25 00:11:41.143847] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:27:45.608 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:45.608 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.608 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.608 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:45.608 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:45.608 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.608 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.608 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.608 [2024-07-25 00:11:41.372706] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:45.608 [2024-07-25 00:11:41.472714] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:45.867 [2024-07-25 00:11:41.483675] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:45.867 "name": "raid_bdev1", 00:27:45.867 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:45.867 "strip_size_kb": 0, 00:27:45.867 "state": "online", 00:27:45.867 "raid_level": "raid1", 00:27:45.867 "superblock": true, 00:27:45.867 "num_base_bdevs": 4, 00:27:45.867 "num_base_bdevs_discovered": 3, 00:27:45.867 "num_base_bdevs_operational": 3, 
00:27:45.867 "base_bdevs_list": [ 00:27:45.867 { 00:27:45.867 "name": "spare", 00:27:45.867 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:45.867 "is_configured": true, 00:27:45.867 "data_offset": 2048, 00:27:45.867 "data_size": 63488 00:27:45.867 }, 00:27:45.867 { 00:27:45.867 "name": null, 00:27:45.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.867 "is_configured": false, 00:27:45.867 "data_offset": 2048, 00:27:45.867 "data_size": 63488 00:27:45.867 }, 00:27:45.867 { 00:27:45.867 "name": "BaseBdev3", 00:27:45.867 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:45.867 "is_configured": true, 00:27:45.867 "data_offset": 2048, 00:27:45.867 "data_size": 63488 00:27:45.867 }, 00:27:45.867 { 00:27:45.867 "name": "BaseBdev4", 00:27:45.867 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:45.867 "is_configured": true, 00:27:45.867 "data_offset": 2048, 00:27:45.867 "data_size": 63488 00:27:45.867 } 00:27:45.867 ] 00:27:45.867 }' 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.867 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:46.126 "name": "raid_bdev1", 00:27:46.126 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:46.126 "strip_size_kb": 0, 00:27:46.126 "state": "online", 00:27:46.126 "raid_level": "raid1", 00:27:46.126 "superblock": true, 00:27:46.126 "num_base_bdevs": 4, 00:27:46.126 "num_base_bdevs_discovered": 3, 00:27:46.126 "num_base_bdevs_operational": 3, 00:27:46.126 "base_bdevs_list": [ 00:27:46.126 { 00:27:46.126 "name": "spare", 00:27:46.126 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:46.126 "is_configured": true, 00:27:46.126 "data_offset": 2048, 00:27:46.126 "data_size": 63488 00:27:46.126 }, 00:27:46.126 { 00:27:46.126 "name": null, 00:27:46.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.126 "is_configured": false, 00:27:46.126 "data_offset": 2048, 00:27:46.126 "data_size": 63488 00:27:46.126 }, 00:27:46.126 { 00:27:46.126 "name": "BaseBdev3", 00:27:46.126 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:46.126 "is_configured": true, 00:27:46.126 "data_offset": 2048, 00:27:46.126 "data_size": 63488 
00:27:46.126 }, 00:27:46.126 { 00:27:46.126 "name": "BaseBdev4", 00:27:46.126 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:46.126 "is_configured": true, 00:27:46.126 "data_offset": 2048, 00:27:46.126 "data_size": 63488 00:27:46.126 } 00:27:46.126 ] 00:27:46.126 }' 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.126 00:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.386 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:46.386 "name": "raid_bdev1", 00:27:46.386 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:46.386 "strip_size_kb": 0, 00:27:46.386 "state": "online", 00:27:46.386 "raid_level": "raid1", 00:27:46.386 "superblock": true, 00:27:46.386 "num_base_bdevs": 4, 00:27:46.386 "num_base_bdevs_discovered": 3, 00:27:46.386 "num_base_bdevs_operational": 3, 00:27:46.386 "base_bdevs_list": [ 00:27:46.386 { 00:27:46.386 "name": "spare", 00:27:46.386 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:46.386 "is_configured": true, 00:27:46.386 "data_offset": 2048, 00:27:46.386 "data_size": 63488 00:27:46.386 }, 00:27:46.386 { 00:27:46.386 "name": null, 00:27:46.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.386 "is_configured": false, 00:27:46.386 "data_offset": 2048, 00:27:46.386 "data_size": 63488 00:27:46.386 }, 00:27:46.386 { 00:27:46.386 "name": "BaseBdev3", 00:27:46.386 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:46.386 "is_configured": true, 00:27:46.386 "data_offset": 2048, 00:27:46.386 "data_size": 63488 00:27:46.386 }, 00:27:46.386 { 00:27:46.386 "name": "BaseBdev4", 00:27:46.386 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:46.386 "is_configured": true, 00:27:46.386 "data_offset": 2048, 00:27:46.386 "data_size": 63488 
00:27:46.386 } 00:27:46.386 ] 00:27:46.386 }' 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:46.644 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:46.901 [2024-07-25 00:11:42.570955] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:46.901 [2024-07-25 00:11:42.571286] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:46.901
00:27:46.902                                                                                                Latency(us)
00:27:46.902     Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:27:46.902     Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:27:46.902     raid_bdev1         :       9.87     100.46     301.39      0.00     0.00   12476.25     283.00  117726.49
00:27:46.902 ===================================================================================================================
00:27:46.902     Total              :                 100.46     301.39      0.00     0.00   12476.25     283.00  117726.49
00:27:46.902 0
00:27:46.902 [2024-07-25 00:11:42.604239] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:46.902 [2024-07-25 00:11:42.604288] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:46.902 [2024-07-25 00:11:42.604401] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:46.902 [2024-07-25 00:11:42.604418] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:27:46.902 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.160 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:27:47.419 /dev/nbd0 00:27:47.419 00:11:43
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:47.419 1+0 records in 00:27:47.419 1+0 records out 00:27:47.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496491 s, 8.2 MB/s 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # continue 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:47.419 00:11:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:47.419 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:27:47.687 /dev/nbd1 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:47.687 1+0 records in 00:27:47.687 1+0 records out 00:27:47.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337291 s, 12.1 MB/s 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:47.687 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:47.944 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:47.944 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:47.944 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:47.945 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:47.945 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:47.945 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:47.945 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:48.202 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:48.203 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:48.203 00:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:27:48.461 /dev/nbd1 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:27:48.461 1+0 records in 00:27:48.461 1+0 records out 00:27:48.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374966 s, 10.9 MB/s 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:48.461 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:48.719 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:48.720 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:48.720 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:48.978 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:48.978 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:48.978 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:48.978 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:48.978 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:48.978 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:48.978 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:48.978 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:48.978 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:27:48.978 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:49.238 00:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:49.496 [2024-07-25 00:11:45.197274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:49.496 [2024-07-25 00:11:45.197363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:49.496 [2024-07-25 00:11:45.197399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:27:49.496 [2024-07-25 00:11:45.197414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:49.496 [2024-07-25 00:11:45.200152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:49.497 [2024-07-25 00:11:45.200194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:49.497 [2024-07-25 00:11:45.200325] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:49.497 [2024-07-25 00:11:45.200395] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:49.497 [2024-07-25 00:11:45.200573] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:49.497 [2024-07-25 00:11:45.200681] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:49.497 spare 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.497 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.497 [2024-07-25 00:11:45.300846] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c080 00:27:49.497 [2024-07-25 00:11:45.301098] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:49.497 [2024-07-25 00:11:45.301302] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000036c80 00:27:49.497 [2024-07-25 00:11:45.301802] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c080 00:27:49.497 [2024-07-25 00:11:45.301855] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c080 00:27:49.497 [2024-07-25 00:11:45.302104] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:49.755 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:49.755 "name": "raid_bdev1", 00:27:49.755 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:49.755 "strip_size_kb": 0, 00:27:49.755 "state": "online", 00:27:49.755 "raid_level": "raid1", 00:27:49.755 "superblock": true, 00:27:49.755 "num_base_bdevs": 4, 00:27:49.755 "num_base_bdevs_discovered": 3, 00:27:49.755 "num_base_bdevs_operational": 3, 00:27:49.755 "base_bdevs_list": [ 00:27:49.755 { 00:27:49.755 "name": "spare", 00:27:49.755 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:49.755 "is_configured": true, 00:27:49.755 "data_offset": 2048, 00:27:49.755 "data_size": 63488 00:27:49.755 }, 00:27:49.755 { 00:27:49.755 "name": null, 00:27:49.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.755 "is_configured": false, 00:27:49.755 "data_offset": 2048, 00:27:49.755 "data_size": 63488 00:27:49.755 }, 00:27:49.755 { 00:27:49.755 "name": "BaseBdev3", 00:27:49.755 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:49.755 "is_configured": true, 00:27:49.755 "data_offset": 2048, 00:27:49.755 "data_size": 63488 00:27:49.755 }, 00:27:49.755 { 00:27:49.755 "name": "BaseBdev4", 00:27:49.755 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:49.755 "is_configured": true, 00:27:49.755 "data_offset": 2048, 00:27:49.755 "data_size": 63488 00:27:49.755 } 00:27:49.755 ] 00:27:49.755 }' 00:27:49.755 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:49.755 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:50.013 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:50.013 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:50.013 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:50.013 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:50.013 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:50.013 00:11:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.013 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.272 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:50.272 "name": "raid_bdev1", 00:27:50.272 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:50.272 "strip_size_kb": 0, 00:27:50.272 "state": "online", 00:27:50.272 "raid_level": "raid1", 00:27:50.272 "superblock": true, 00:27:50.272 "num_base_bdevs": 4, 00:27:50.272 "num_base_bdevs_discovered": 3, 00:27:50.272 "num_base_bdevs_operational": 3, 00:27:50.272 "base_bdevs_list": [ 00:27:50.272 { 00:27:50.272 "name": "spare", 00:27:50.272 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:50.272 "is_configured": true, 00:27:50.272 "data_offset": 2048, 00:27:50.272 "data_size": 63488 00:27:50.272 }, 00:27:50.272 { 00:27:50.272 "name": null, 00:27:50.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.272 "is_configured": false, 00:27:50.272 "data_offset": 2048, 00:27:50.272 "data_size": 63488 00:27:50.272 }, 00:27:50.272 { 00:27:50.272 "name": "BaseBdev3", 00:27:50.272 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:50.272 "is_configured": true, 00:27:50.272 "data_offset": 2048, 00:27:50.272 "data_size": 63488 00:27:50.272 }, 00:27:50.272 { 00:27:50.272 "name": "BaseBdev4", 00:27:50.272 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:50.272 "is_configured": true, 00:27:50.272 "data_offset": 2048, 00:27:50.272 "data_size": 63488 00:27:50.272 } 00:27:50.272 ] 00:27:50.272 }' 00:27:50.272 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:50.272 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:50.272 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:50.272 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:50.272 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:50.272 00:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.531 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:27:50.531 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:50.791 [2024-07-25 00:11:46.406556] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:50.791 "name": "raid_bdev1", 00:27:50.791 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:50.791 "strip_size_kb": 0, 00:27:50.791 "state": "online", 00:27:50.791 "raid_level": "raid1", 00:27:50.791 "superblock": true, 00:27:50.791 "num_base_bdevs": 4, 00:27:50.791 "num_base_bdevs_discovered": 2, 00:27:50.791 "num_base_bdevs_operational": 2, 00:27:50.791 "base_bdevs_list": [ 00:27:50.791 { 00:27:50.791 "name": null, 00:27:50.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.791 "is_configured": false, 00:27:50.791 "data_offset": 2048, 00:27:50.791 "data_size": 63488 00:27:50.791 }, 00:27:50.791 { 00:27:50.791 "name": null, 00:27:50.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.791 "is_configured": false, 00:27:50.791 "data_offset": 2048, 00:27:50.791 "data_size": 63488 00:27:50.791 }, 00:27:50.791 { 00:27:50.791 "name": "BaseBdev3", 00:27:50.791 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:50.791 "is_configured": true, 00:27:50.791 "data_offset": 2048, 00:27:50.791 "data_size": 63488 00:27:50.791 }, 00:27:50.791 { 00:27:50.791 "name": "BaseBdev4", 00:27:50.791 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:50.791 "is_configured": true, 00:27:50.791 "data_offset": 2048, 00:27:50.791 "data_size": 63488 00:27:50.791 } 00:27:50.791 ] 00:27:50.791 }' 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:50.791 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:51.359 00:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:51.359 [2024-07-25 00:11:47.182858] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:51.359 [2024-07-25 00:11:47.183111] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:27:51.359 [2024-07-25 00:11:47.183133] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
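For context: the examine notice just above shows why the removed member comes back as a rebuild target rather than being trusted as-is. Its on-disk superblock sequence number (5) is behind the array's (6), so it is treated as stale data. The same round trip can be driven by hand with the RPCs seen in this trace; the sketch below is illustrative, not the test's exact code (rpc.py, the socket path, and the bdev names are taken from this run, while the polling loop replaces the script's fixed sleep-then-verify):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Re-attach the stale member; its superblock seq_number (5) is older than
    # the array's (6), so examine re-adds it as a rebuild target.
    "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare
    # Poll until the array reports an active rebuild targeting "spare"
    # (the test script itself sleeps 1s and then checks process.type/target once).
    until "$rpc" -s "$sock" bdev_raid_get_bdevs all |
          jq -e '.[] | select(.name == "raid_bdev1") | .process.target == "spare"' >/dev/null; do
        sleep 0.1
    done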
00:27:51.359 [2024-07-25 00:11:47.183205] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:51.359 [2024-07-25 00:11:47.195307] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000036d50 00:27:51.359 [2024-07-25 00:11:47.197559] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:51.359 00:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:27:52.736 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:52.736 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:52.736 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:52.736 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:52.736 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:52.736 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.736 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.736 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:52.736 "name": "raid_bdev1", 00:27:52.736 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:52.736 "strip_size_kb": 0, 00:27:52.736 "state": "online", 00:27:52.736 "raid_level": "raid1", 00:27:52.736 "superblock": true, 00:27:52.736 "num_base_bdevs": 4, 00:27:52.736 "num_base_bdevs_discovered": 3, 00:27:52.736 "num_base_bdevs_operational": 3, 00:27:52.737 "process": { 00:27:52.737 "type": "rebuild", 00:27:52.737 "target": "spare", 00:27:52.737 "progress": { 00:27:52.737 "blocks": 24576, 00:27:52.737 "percent": 38 00:27:52.737 } 00:27:52.737 }, 00:27:52.737 "base_bdevs_list": [ 00:27:52.737 { 00:27:52.737 "name": "spare", 00:27:52.737 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:52.737 "is_configured": true, 00:27:52.737 "data_offset": 2048, 00:27:52.737 "data_size": 63488 00:27:52.737 }, 00:27:52.737 { 00:27:52.737 "name": null, 00:27:52.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.737 "is_configured": false, 00:27:52.737 "data_offset": 2048, 00:27:52.737 "data_size": 63488 00:27:52.737 }, 00:27:52.737 { 00:27:52.737 "name": "BaseBdev3", 00:27:52.737 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:52.737 "is_configured": true, 00:27:52.737 "data_offset": 2048, 00:27:52.737 "data_size": 63488 00:27:52.737 }, 00:27:52.737 { 00:27:52.737 "name": "BaseBdev4", 00:27:52.737 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:52.737 "is_configured": true, 00:27:52.737 "data_offset": 2048, 00:27:52.737 "data_size": 63488 00:27:52.737 } 00:27:52.737 ] 00:27:52.737 }' 00:27:52.737 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:52.737 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:52.737 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:52.737 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:52.737 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:52.996 [2024-07-25 00:11:48.679849] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:52.996 [2024-07-25 00:11:48.705494] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:52.996 [2024-07-25 00:11:48.705579] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:52.996 [2024-07-25 00:11:48.705605] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:52.996 [2024-07-25 00:11:48.705616] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.996 00:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.255 00:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:53.255 "name": "raid_bdev1", 00:27:53.255 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:53.255 "strip_size_kb": 0, 00:27:53.255 "state": "online", 00:27:53.255 "raid_level": "raid1", 00:27:53.255 "superblock": true, 00:27:53.255 "num_base_bdevs": 4, 00:27:53.255 "num_base_bdevs_discovered": 2, 00:27:53.255 "num_base_bdevs_operational": 2, 00:27:53.255 "base_bdevs_list": [ 00:27:53.255 { 00:27:53.255 "name": null, 00:27:53.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.255 "is_configured": false, 00:27:53.255 "data_offset": 2048, 00:27:53.255 "data_size": 63488 00:27:53.255 }, 00:27:53.255 { 00:27:53.255 "name": null, 00:27:53.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.255 "is_configured": false, 00:27:53.255 "data_offset": 2048, 00:27:53.255 "data_size": 63488 00:27:53.255 }, 00:27:53.255 { 00:27:53.255 "name": "BaseBdev3", 00:27:53.255 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:53.255 "is_configured": true, 00:27:53.255 "data_offset": 2048, 00:27:53.255 "data_size": 63488 00:27:53.255 }, 00:27:53.255 { 00:27:53.255 "name": "BaseBdev4", 00:27:53.255 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:53.255 "is_configured": true, 00:27:53.255 "data_offset": 2048, 00:27:53.255 "data_size": 63488 
00:27:53.255 } 00:27:53.255 ] 00:27:53.255 }' 00:27:53.255 00:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:53.255 00:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:53.514 00:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:53.773 [2024-07-25 00:11:49.545632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:53.773 [2024-07-25 00:11:49.545719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.773 [2024-07-25 00:11:49.545755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:27:53.773 [2024-07-25 00:11:49.545768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.773 [2024-07-25 00:11:49.546365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.773 [2024-07-25 00:11:49.546397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:53.773 [2024-07-25 00:11:49.546512] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:53.773 [2024-07-25 00:11:49.546559] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:27:53.773 [2024-07-25 00:11:49.546578] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:53.773 [2024-07-25 00:11:49.546614] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:53.773 [2024-07-25 00:11:49.557490] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000036e20 00:27:53.773 spare 00:27:53.773 [2024-07-25 00:11:49.559635] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:53.773 00:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:27:54.724 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:54.724 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:54.724 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:54.724 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:54.724 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:54.724 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.991 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.991 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:54.991 "name": "raid_bdev1", 00:27:54.991 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:54.991 "strip_size_kb": 0, 00:27:54.991 "state": "online", 00:27:54.991 "raid_level": "raid1", 00:27:54.991 "superblock": true, 00:27:54.991 "num_base_bdevs": 4, 00:27:54.991 "num_base_bdevs_discovered": 3, 00:27:54.991 "num_base_bdevs_operational": 3, 00:27:54.991 "process": { 00:27:54.991 "type": "rebuild", 00:27:54.991 "target": 
"spare", 00:27:54.991 "progress": { 00:27:54.991 "blocks": 24576, 00:27:54.991 "percent": 38 00:27:54.991 } 00:27:54.991 }, 00:27:54.991 "base_bdevs_list": [ 00:27:54.991 { 00:27:54.991 "name": "spare", 00:27:54.991 "uuid": "5b89a951-4bc8-5a0d-b114-00a1b723c943", 00:27:54.991 "is_configured": true, 00:27:54.991 "data_offset": 2048, 00:27:54.991 "data_size": 63488 00:27:54.991 }, 00:27:54.991 { 00:27:54.991 "name": null, 00:27:54.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.991 "is_configured": false, 00:27:54.991 "data_offset": 2048, 00:27:54.991 "data_size": 63488 00:27:54.991 }, 00:27:54.991 { 00:27:54.991 "name": "BaseBdev3", 00:27:54.991 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:54.991 "is_configured": true, 00:27:54.991 "data_offset": 2048, 00:27:54.991 "data_size": 63488 00:27:54.991 }, 00:27:54.991 { 00:27:54.991 "name": "BaseBdev4", 00:27:54.991 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:54.991 "is_configured": true, 00:27:54.991 "data_offset": 2048, 00:27:54.991 "data_size": 63488 00:27:54.991 } 00:27:54.991 ] 00:27:54.991 }' 00:27:54.991 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:54.991 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:54.991 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:54.991 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:54.991 00:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:55.250 [2024-07-25 00:11:51.049968] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:55.250 [2024-07-25 00:11:51.067839] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:55.250 [2024-07-25 00:11:51.067951] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:55.250 [2024-07-25 00:11:51.067976] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:55.250 [2024-07-25 00:11:51.067989] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:55.250 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:55.250 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:55.250 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:55.250 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:55.250 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:55.250 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:55.250 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:55.250 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.250 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.250 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:55.250 00:11:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.250 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.509 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:55.509 "name": "raid_bdev1", 00:27:55.509 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:55.509 "strip_size_kb": 0, 00:27:55.509 "state": "online", 00:27:55.509 "raid_level": "raid1", 00:27:55.509 "superblock": true, 00:27:55.509 "num_base_bdevs": 4, 00:27:55.509 "num_base_bdevs_discovered": 2, 00:27:55.509 "num_base_bdevs_operational": 2, 00:27:55.509 "base_bdevs_list": [ 00:27:55.509 { 00:27:55.509 "name": null, 00:27:55.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.509 "is_configured": false, 00:27:55.509 "data_offset": 2048, 00:27:55.509 "data_size": 63488 00:27:55.509 }, 00:27:55.509 { 00:27:55.509 "name": null, 00:27:55.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.509 "is_configured": false, 00:27:55.509 "data_offset": 2048, 00:27:55.509 "data_size": 63488 00:27:55.509 }, 00:27:55.509 { 00:27:55.509 "name": "BaseBdev3", 00:27:55.509 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:55.509 "is_configured": true, 00:27:55.509 "data_offset": 2048, 00:27:55.509 "data_size": 63488 00:27:55.509 }, 00:27:55.509 { 00:27:55.509 "name": "BaseBdev4", 00:27:55.509 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:55.509 "is_configured": true, 00:27:55.509 "data_offset": 2048, 00:27:55.509 "data_size": 63488 00:27:55.509 } 00:27:55.509 ] 00:27:55.509 }' 00:27:55.509 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:55.509 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:55.767 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:55.767 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:55.767 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:55.767 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:55.767 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:55.767 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.767 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.026 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:56.026 "name": "raid_bdev1", 00:27:56.026 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:56.026 "strip_size_kb": 0, 00:27:56.026 "state": "online", 00:27:56.026 "raid_level": "raid1", 00:27:56.026 "superblock": true, 00:27:56.026 "num_base_bdevs": 4, 00:27:56.026 "num_base_bdevs_discovered": 2, 00:27:56.026 "num_base_bdevs_operational": 2, 00:27:56.026 "base_bdevs_list": [ 00:27:56.026 { 00:27:56.026 "name": null, 00:27:56.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.026 "is_configured": false, 00:27:56.026 "data_offset": 2048, 00:27:56.026 "data_size": 63488 00:27:56.026 }, 00:27:56.026 { 00:27:56.026 "name": null, 
00:27:56.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.026 "is_configured": false, 00:27:56.026 "data_offset": 2048, 00:27:56.026 "data_size": 63488 00:27:56.026 }, 00:27:56.026 { 00:27:56.026 "name": "BaseBdev3", 00:27:56.026 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:56.026 "is_configured": true, 00:27:56.026 "data_offset": 2048, 00:27:56.026 "data_size": 63488 00:27:56.026 }, 00:27:56.026 { 00:27:56.026 "name": "BaseBdev4", 00:27:56.026 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:56.026 "is_configured": true, 00:27:56.026 "data_offset": 2048, 00:27:56.026 "data_size": 63488 00:27:56.026 } 00:27:56.026 ] 00:27:56.026 }' 00:27:56.026 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:56.026 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:56.026 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:56.026 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:56.026 00:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:56.285 00:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:56.544 [2024-07-25 00:11:52.324195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:56.544 [2024-07-25 00:11:52.324321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:56.544 [2024-07-25 00:11:52.324353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000cc80 00:27:56.544 [2024-07-25 00:11:52.324369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:56.544 [2024-07-25 00:11:52.324797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:56.544 [2024-07-25 00:11:52.324878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:56.544 [2024-07-25 00:11:52.324980] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:56.544 [2024-07-25 00:11:52.325005] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:27:56.544 [2024-07-25 00:11:52.325015] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:56.544 BaseBdev1 00:27:56.544 00:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:27:57.480 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:57.480 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:57.480 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.480 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.480 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.480 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:57.480 
00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.480 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.480 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.480 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.738 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.738 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.996 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:57.996 "name": "raid_bdev1", 00:27:57.996 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:57.996 "strip_size_kb": 0, 00:27:57.996 "state": "online", 00:27:57.996 "raid_level": "raid1", 00:27:57.996 "superblock": true, 00:27:57.996 "num_base_bdevs": 4, 00:27:57.996 "num_base_bdevs_discovered": 2, 00:27:57.996 "num_base_bdevs_operational": 2, 00:27:57.996 "base_bdevs_list": [ 00:27:57.996 { 00:27:57.996 "name": null, 00:27:57.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.996 "is_configured": false, 00:27:57.996 "data_offset": 2048, 00:27:57.996 "data_size": 63488 00:27:57.996 }, 00:27:57.996 { 00:27:57.996 "name": null, 00:27:57.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.996 "is_configured": false, 00:27:57.996 "data_offset": 2048, 00:27:57.996 "data_size": 63488 00:27:57.996 }, 00:27:57.996 { 00:27:57.996 "name": "BaseBdev3", 00:27:57.997 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:57.997 "is_configured": true, 00:27:57.997 "data_offset": 2048, 00:27:57.997 "data_size": 63488 00:27:57.997 }, 00:27:57.997 { 00:27:57.997 "name": "BaseBdev4", 00:27:57.997 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:57.997 "is_configured": true, 00:27:57.997 "data_offset": 2048, 00:27:57.997 "data_size": 63488 00:27:57.997 } 00:27:57.997 ] 00:27:57.997 }' 00:27:57.997 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:57.997 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:58.255 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:58.255 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:58.255 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:58.255 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:58.255 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:58.255 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.255 00:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.513 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:58.513 "name": "raid_bdev1", 00:27:58.513 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:58.513 "strip_size_kb": 0, 00:27:58.513 "state": "online", 00:27:58.513 "raid_level": "raid1", 00:27:58.513 
"superblock": true, 00:27:58.513 "num_base_bdevs": 4, 00:27:58.513 "num_base_bdevs_discovered": 2, 00:27:58.513 "num_base_bdevs_operational": 2, 00:27:58.513 "base_bdevs_list": [ 00:27:58.513 { 00:27:58.513 "name": null, 00:27:58.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.513 "is_configured": false, 00:27:58.513 "data_offset": 2048, 00:27:58.513 "data_size": 63488 00:27:58.513 }, 00:27:58.513 { 00:27:58.513 "name": null, 00:27:58.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.514 "is_configured": false, 00:27:58.514 "data_offset": 2048, 00:27:58.514 "data_size": 63488 00:27:58.514 }, 00:27:58.514 { 00:27:58.514 "name": "BaseBdev3", 00:27:58.514 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:58.514 "is_configured": true, 00:27:58.514 "data_offset": 2048, 00:27:58.514 "data_size": 63488 00:27:58.514 }, 00:27:58.514 { 00:27:58.514 "name": "BaseBdev4", 00:27:58.514 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:58.514 "is_configured": true, 00:27:58.514 "data_offset": 2048, 00:27:58.514 "data_size": 63488 00:27:58.514 } 00:27:58.514 ] 00:27:58.514 }' 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:58.514 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:58.773 [2024-07-25 00:11:54.433006] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:58.773 
[2024-07-25 00:11:54.433394] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:27:58.773 [2024-07-25 00:11:54.433427] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:58.773 request: 00:27:58.773 { 00:27:58.773 "base_bdev": "BaseBdev1", 00:27:58.773 "raid_bdev": "raid_bdev1", 00:27:58.773 "method": "bdev_raid_add_base_bdev", 00:27:58.773 "req_id": 1 00:27:58.773 } 00:27:58.773 Got JSON-RPC error response 00:27:58.773 response: 00:27:58.773 { 00:27:58.773 "code": -22, 00:27:58.773 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:58.773 } 00:27:58.773 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:27:58.773 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:58.773 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:58.773 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:58.773 00:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.710 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.968 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:59.969 "name": "raid_bdev1", 00:27:59.969 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:27:59.969 "strip_size_kb": 0, 00:27:59.969 "state": "online", 00:27:59.969 "raid_level": "raid1", 00:27:59.969 "superblock": true, 00:27:59.969 "num_base_bdevs": 4, 00:27:59.969 "num_base_bdevs_discovered": 2, 00:27:59.969 "num_base_bdevs_operational": 2, 00:27:59.969 "base_bdevs_list": [ 00:27:59.969 { 00:27:59.969 "name": null, 00:27:59.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.969 "is_configured": false, 00:27:59.969 "data_offset": 2048, 00:27:59.969 "data_size": 63488 00:27:59.969 }, 00:27:59.969 { 00:27:59.969 "name": null, 00:27:59.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.969 "is_configured": false, 00:27:59.969 
"data_offset": 2048, 00:27:59.969 "data_size": 63488 00:27:59.969 }, 00:27:59.969 { 00:27:59.969 "name": "BaseBdev3", 00:27:59.969 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:27:59.969 "is_configured": true, 00:27:59.969 "data_offset": 2048, 00:27:59.969 "data_size": 63488 00:27:59.969 }, 00:27:59.969 { 00:27:59.969 "name": "BaseBdev4", 00:27:59.969 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:27:59.969 "is_configured": true, 00:27:59.969 "data_offset": 2048, 00:27:59.969 "data_size": 63488 00:27:59.969 } 00:27:59.969 ] 00:27:59.969 }' 00:27:59.969 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:59.969 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:00.227 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:00.228 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:00.228 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:00.228 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:00.228 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:00.228 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.228 00:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:00.487 "name": "raid_bdev1", 00:28:00.487 "uuid": "0e5433eb-aa4b-4206-85b1-660c30a96836", 00:28:00.487 "strip_size_kb": 0, 00:28:00.487 "state": "online", 00:28:00.487 "raid_level": "raid1", 00:28:00.487 "superblock": true, 00:28:00.487 "num_base_bdevs": 4, 00:28:00.487 "num_base_bdevs_discovered": 2, 00:28:00.487 "num_base_bdevs_operational": 2, 00:28:00.487 "base_bdevs_list": [ 00:28:00.487 { 00:28:00.487 "name": null, 00:28:00.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.487 "is_configured": false, 00:28:00.487 "data_offset": 2048, 00:28:00.487 "data_size": 63488 00:28:00.487 }, 00:28:00.487 { 00:28:00.487 "name": null, 00:28:00.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.487 "is_configured": false, 00:28:00.487 "data_offset": 2048, 00:28:00.487 "data_size": 63488 00:28:00.487 }, 00:28:00.487 { 00:28:00.487 "name": "BaseBdev3", 00:28:00.487 "uuid": "0a425485-7d58-5ba4-b285-de500de56f83", 00:28:00.487 "is_configured": true, 00:28:00.487 "data_offset": 2048, 00:28:00.487 "data_size": 63488 00:28:00.487 }, 00:28:00.487 { 00:28:00.487 "name": "BaseBdev4", 00:28:00.487 "uuid": "75a7350d-e266-5acd-af57-5a6e8fcd7b4f", 00:28:00.487 "is_configured": true, 00:28:00.487 "data_offset": 2048, 00:28:00.487 "data_size": 63488 00:28:00.487 } 00:28:00.487 ] 00:28:00.487 }' 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:00.487 00:11:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 101394 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 101394 ']' 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 101394 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101394 00:28:00.487 killing process with pid 101394 00:28:00.487 Received shutdown signal, test time was about 23.595623 seconds 00:28:00.487 00:28:00.487 Latency(us) 00:28:00.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.487 =================================================================================================================== 00:28:00.487 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101394' 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 101394 00:28:00.487 [2024-07-25 00:11:56.308191] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:00.487 00:11:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 101394 00:28:00.487 [2024-07-25 00:11:56.308334] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:00.487 [2024-07-25 00:11:56.308417] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:00.487 [2024-07-25 00:11:56.308472] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c080 name raid_bdev1, state offline 00:28:01.055 [2024-07-25 00:11:56.622113] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:02.007 ************************************ 00:28:02.007 END TEST raid_rebuild_test_sb_io 00:28:02.007 ************************************ 00:28:02.007 00:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:28:02.007 00:28:02.007 real 0m30.500s 00:28:02.007 user 0m45.986s 00:28:02.007 sys 0m3.852s 00:28:02.007 00:11:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:02.007 00:11:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:02.007 00:11:57 bdev_raid -- bdev/bdev_raid.sh@964 -- # '[' y == y ']' 00:28:02.007 00:11:57 bdev_raid -- bdev/bdev_raid.sh@965 -- # for n in {3..4} 00:28:02.007 00:11:57 bdev_raid -- bdev/bdev_raid.sh@966 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:28:02.007 00:11:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:02.007 00:11:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:02.007 00:11:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:02.007 ************************************ 00:28:02.007 START TEST raid5f_state_function_test 00:28:02.007 ************************************ 00:28:02.007 00:11:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:28:02.007 Process raid pid: 102229 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=102229 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 102229' 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 102229 /var/tmp/spdk-raid.sock 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 102229 ']' 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:02.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.007 00:11:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.007 [2024-07-25 00:11:57.842281] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:28:02.007 [2024-07-25 00:11:57.842448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.287 [2024-07-25 00:11:58.018231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.546 [2024-07-25 00:11:58.175228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.546 [2024-07-25 00:11:58.335242] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:03.113 [2024-07-25 00:11:58.959162] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:03.113 [2024-07-25 00:11:58.959461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:03.113 [2024-07-25 00:11:58.959487] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:03.113 [2024-07-25 00:11:58.959504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:03.113 [2024-07-25 00:11:58.959514] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:03.113 [2024-07-25 00:11:58.959535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:03.113 00:11:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.113 00:11:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:03.372 00:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.372 "name": "Existed_Raid", 00:28:03.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.372 "strip_size_kb": 64, 00:28:03.372 "state": "configuring", 00:28:03.372 "raid_level": "raid5f", 00:28:03.372 "superblock": false, 00:28:03.372 "num_base_bdevs": 3, 00:28:03.372 "num_base_bdevs_discovered": 0, 00:28:03.372 "num_base_bdevs_operational": 3, 00:28:03.372 "base_bdevs_list": [ 00:28:03.372 { 00:28:03.372 "name": "BaseBdev1", 00:28:03.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.372 "is_configured": false, 00:28:03.372 "data_offset": 0, 00:28:03.372 "data_size": 0 00:28:03.372 }, 00:28:03.372 { 00:28:03.372 "name": "BaseBdev2", 00:28:03.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.372 "is_configured": false, 00:28:03.372 "data_offset": 0, 00:28:03.372 "data_size": 0 00:28:03.372 }, 00:28:03.372 { 00:28:03.372 "name": "BaseBdev3", 00:28:03.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.372 "is_configured": false, 00:28:03.372 "data_offset": 0, 00:28:03.372 "data_size": 0 00:28:03.372 } 00:28:03.372 ] 00:28:03.372 }' 00:28:03.372 00:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.372 00:11:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.630 00:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:03.888 [2024-07-25 00:11:59.679242] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:03.888 [2024-07-25 00:11:59.679329] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:28:03.888 00:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:04.147 [2024-07-25 00:11:59.951382] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:04.147 [2024-07-25 00:11:59.951445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:04.147 [2024-07-25 00:11:59.951469] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:04.147 [2024-07-25 00:11:59.951490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:04.147 [2024-07-25 00:11:59.951501] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:04.147 [2024-07-25 00:11:59.951515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:04.147 00:11:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:04.406 [2024-07-25 00:12:00.215053] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:04.406 BaseBdev1 00:28:04.406 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:04.406 00:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:04.406 00:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:04.406 00:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:04.406 00:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:04.406 00:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:04.406 00:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:04.664 00:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:04.923 [ 00:28:04.923 { 00:28:04.923 "name": "BaseBdev1", 00:28:04.923 "aliases": [ 00:28:04.923 "5b74ddf6-15ee-4945-8133-d16497fe951f" 00:28:04.923 ], 00:28:04.923 "product_name": "Malloc disk", 00:28:04.923 "block_size": 512, 00:28:04.923 "num_blocks": 65536, 00:28:04.923 "uuid": "5b74ddf6-15ee-4945-8133-d16497fe951f", 00:28:04.923 "assigned_rate_limits": { 00:28:04.923 "rw_ios_per_sec": 0, 00:28:04.923 "rw_mbytes_per_sec": 0, 00:28:04.923 "r_mbytes_per_sec": 0, 00:28:04.923 "w_mbytes_per_sec": 0 00:28:04.923 }, 00:28:04.923 "claimed": true, 00:28:04.923 "claim_type": "exclusive_write", 00:28:04.923 "zoned": false, 00:28:04.923 "supported_io_types": { 00:28:04.923 "read": true, 00:28:04.923 "write": true, 00:28:04.923 "unmap": true, 00:28:04.923 "flush": true, 00:28:04.923 "reset": true, 00:28:04.923 "nvme_admin": false, 00:28:04.923 "nvme_io": false, 00:28:04.923 "nvme_io_md": false, 00:28:04.923 "write_zeroes": true, 00:28:04.923 "zcopy": true, 00:28:04.923 "get_zone_info": false, 00:28:04.923 "zone_management": false, 00:28:04.923 "zone_append": false, 00:28:04.923 "compare": false, 00:28:04.923 "compare_and_write": false, 00:28:04.923 "abort": true, 00:28:04.923 "seek_hole": false, 00:28:04.923 "seek_data": false, 00:28:04.923 "copy": true, 00:28:04.923 "nvme_iov_md": false 00:28:04.923 }, 00:28:04.923 "memory_domains": [ 00:28:04.923 { 00:28:04.923 "dma_device_id": "system", 00:28:04.923 "dma_device_type": 1 00:28:04.923 }, 00:28:04.923 { 00:28:04.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.923 "dma_device_type": 2 00:28:04.923 } 00:28:04.923 ], 00:28:04.923 "driver_specific": {} 00:28:04.923 } 00:28:04.923 ] 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 
3 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:04.923 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.181 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.181 "name": "Existed_Raid", 00:28:05.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.181 "strip_size_kb": 64, 00:28:05.181 "state": "configuring", 00:28:05.181 "raid_level": "raid5f", 00:28:05.181 "superblock": false, 00:28:05.181 "num_base_bdevs": 3, 00:28:05.181 "num_base_bdevs_discovered": 1, 00:28:05.181 "num_base_bdevs_operational": 3, 00:28:05.181 "base_bdevs_list": [ 00:28:05.181 { 00:28:05.181 "name": "BaseBdev1", 00:28:05.181 "uuid": "5b74ddf6-15ee-4945-8133-d16497fe951f", 00:28:05.181 "is_configured": true, 00:28:05.181 "data_offset": 0, 00:28:05.181 "data_size": 65536 00:28:05.181 }, 00:28:05.181 { 00:28:05.181 "name": "BaseBdev2", 00:28:05.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.181 "is_configured": false, 00:28:05.181 "data_offset": 0, 00:28:05.181 "data_size": 0 00:28:05.181 }, 00:28:05.181 { 00:28:05.181 "name": "BaseBdev3", 00:28:05.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.182 "is_configured": false, 00:28:05.182 "data_offset": 0, 00:28:05.182 "data_size": 0 00:28:05.182 } 00:28:05.182 ] 00:28:05.182 }' 00:28:05.182 00:12:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.182 00:12:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.439 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:05.697 [2024-07-25 00:12:01.467542] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:05.697 [2024-07-25 00:12:01.467606] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:28:05.697 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:05.955 [2024-07-25 00:12:01.695663] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:05.955 [2024-07-25 00:12:01.697589] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:05.955 [2024-07-25 00:12:01.697671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:05.955 [2024-07-25 00:12:01.697686] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:05.955 [2024-07-25 00:12:01.697700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.955 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.213 00:12:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:06.213 "name": "Existed_Raid", 00:28:06.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.213 "strip_size_kb": 64, 00:28:06.213 "state": "configuring", 00:28:06.213 "raid_level": "raid5f", 00:28:06.213 "superblock": false, 00:28:06.213 "num_base_bdevs": 3, 00:28:06.213 "num_base_bdevs_discovered": 1, 00:28:06.213 "num_base_bdevs_operational": 3, 00:28:06.213 "base_bdevs_list": [ 00:28:06.213 { 00:28:06.213 "name": "BaseBdev1", 00:28:06.213 "uuid": "5b74ddf6-15ee-4945-8133-d16497fe951f", 00:28:06.213 "is_configured": true, 00:28:06.213 "data_offset": 0, 00:28:06.213 "data_size": 65536 00:28:06.213 }, 00:28:06.213 { 00:28:06.213 "name": "BaseBdev2", 00:28:06.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.213 "is_configured": false, 00:28:06.213 "data_offset": 0, 00:28:06.213 "data_size": 0 00:28:06.213 }, 00:28:06.213 { 00:28:06.213 "name": "BaseBdev3", 00:28:06.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.213 "is_configured": false, 00:28:06.213 "data_offset": 0, 00:28:06.213 "data_size": 0 00:28:06.213 } 00:28:06.213 ] 00:28:06.213 }' 00:28:06.213 00:12:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:06.213 00:12:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.471 00:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:06.730 [2024-07-25 00:12:02.521312] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:06.730 BaseBdev2 00:28:06.730 00:12:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:06.730 00:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:06.730 00:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:06.730 00:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:06.730 00:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:06.730 00:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:06.730 00:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:06.988 00:12:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:07.247 [ 00:28:07.247 { 00:28:07.247 "name": "BaseBdev2", 00:28:07.247 "aliases": [ 00:28:07.248 "7c9ac4a4-4733-42f8-b6f4-260ab1925042" 00:28:07.248 ], 00:28:07.248 "product_name": "Malloc disk", 00:28:07.248 "block_size": 512, 00:28:07.248 "num_blocks": 65536, 00:28:07.248 "uuid": "7c9ac4a4-4733-42f8-b6f4-260ab1925042", 00:28:07.248 "assigned_rate_limits": { 00:28:07.248 "rw_ios_per_sec": 0, 00:28:07.248 "rw_mbytes_per_sec": 0, 00:28:07.248 "r_mbytes_per_sec": 0, 00:28:07.248 "w_mbytes_per_sec": 0 00:28:07.248 }, 00:28:07.248 "claimed": true, 00:28:07.248 "claim_type": "exclusive_write", 00:28:07.248 "zoned": false, 00:28:07.248 "supported_io_types": { 00:28:07.248 "read": true, 00:28:07.248 "write": true, 00:28:07.248 "unmap": true, 00:28:07.248 "flush": true, 00:28:07.248 "reset": true, 00:28:07.248 "nvme_admin": false, 00:28:07.248 "nvme_io": false, 00:28:07.248 "nvme_io_md": false, 00:28:07.248 "write_zeroes": true, 00:28:07.248 "zcopy": true, 00:28:07.248 "get_zone_info": false, 00:28:07.248 "zone_management": false, 00:28:07.248 "zone_append": false, 00:28:07.248 "compare": false, 00:28:07.248 "compare_and_write": false, 00:28:07.248 "abort": true, 00:28:07.248 "seek_hole": false, 00:28:07.248 "seek_data": false, 00:28:07.248 "copy": true, 00:28:07.248 "nvme_iov_md": false 00:28:07.248 }, 00:28:07.248 "memory_domains": [ 00:28:07.248 { 00:28:07.248 "dma_device_id": "system", 00:28:07.248 "dma_device_type": 1 00:28:07.248 }, 00:28:07.248 { 00:28:07.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:07.248 "dma_device_type": 2 00:28:07.248 } 00:28:07.248 ], 00:28:07.248 "driver_specific": {} 00:28:07.248 } 00:28:07.248 ] 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.248 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.507 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:07.507 "name": "Existed_Raid", 00:28:07.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.507 "strip_size_kb": 64, 00:28:07.507 "state": "configuring", 00:28:07.507 "raid_level": "raid5f", 00:28:07.507 "superblock": false, 00:28:07.507 "num_base_bdevs": 3, 00:28:07.507 "num_base_bdevs_discovered": 2, 00:28:07.507 "num_base_bdevs_operational": 3, 00:28:07.507 "base_bdevs_list": [ 00:28:07.507 { 00:28:07.507 "name": "BaseBdev1", 00:28:07.507 "uuid": "5b74ddf6-15ee-4945-8133-d16497fe951f", 00:28:07.507 "is_configured": true, 00:28:07.507 "data_offset": 0, 00:28:07.507 "data_size": 65536 00:28:07.507 }, 00:28:07.507 { 00:28:07.507 "name": "BaseBdev2", 00:28:07.507 "uuid": "7c9ac4a4-4733-42f8-b6f4-260ab1925042", 00:28:07.507 "is_configured": true, 00:28:07.507 "data_offset": 0, 00:28:07.507 "data_size": 65536 00:28:07.507 }, 00:28:07.507 { 00:28:07.507 "name": "BaseBdev3", 00:28:07.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.507 "is_configured": false, 00:28:07.507 "data_offset": 0, 00:28:07.507 "data_size": 0 00:28:07.507 } 00:28:07.507 ] 00:28:07.507 }' 00:28:07.507 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:07.507 00:12:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.073 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:08.073 [2024-07-25 00:12:03.862838] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:08.073 [2024-07-25 00:12:03.862900] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:28:08.073 [2024-07-25 00:12:03.862916] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:28:08.073 
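The blockcnt in the configure_cont debug line above is consistent with raid5f capacity arithmetic: each malloc base bdev was created with bdev_malloc_create 32 512, i.e. 32 MiB in 512-byte blocks (the 65536 num_blocks seen in the bdev_get_bdevs dumps), and raid5f spends one block per stripe on parity, so with three equal base bdevs the usable size is (3 - 1) * 65536 = 131072 blocks. A quick back-of-the-envelope check, assuming equally sized base bdevs as in this run:

  # Rough capacity check for the Existed_Raid raid5f volume above.
  num_base_bdevs=3
  base_blocks=$(( 32 * 1024 * 1024 / 512 ))    # bdev_malloc_create 32 512 -> 65536
  raid_blocks=$(( (num_base_bdevs - 1) * base_blocks ))
  echo "$raid_blocks"                          # 131072, matching the debug line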
[2024-07-25 00:12:03.863015] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:28:08.073 [2024-07-25 00:12:03.867599] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:28:08.073 [2024-07-25 00:12:03.867628] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:28:08.073 [2024-07-25 00:12:03.867985] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:08.073 BaseBdev3 00:28:08.073 00:12:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:28:08.073 00:12:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:08.073 00:12:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:08.073 00:12:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:08.073 00:12:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:08.073 00:12:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:08.073 00:12:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:08.332 00:12:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:08.590 [ 00:28:08.590 { 00:28:08.590 "name": "BaseBdev3", 00:28:08.590 "aliases": [ 00:28:08.590 "62b2b081-cbab-409e-be45-4a71e355d8d4" 00:28:08.590 ], 00:28:08.590 "product_name": "Malloc disk", 00:28:08.590 "block_size": 512, 00:28:08.590 "num_blocks": 65536, 00:28:08.590 "uuid": "62b2b081-cbab-409e-be45-4a71e355d8d4", 00:28:08.590 "assigned_rate_limits": { 00:28:08.590 "rw_ios_per_sec": 0, 00:28:08.590 "rw_mbytes_per_sec": 0, 00:28:08.590 "r_mbytes_per_sec": 0, 00:28:08.590 "w_mbytes_per_sec": 0 00:28:08.590 }, 00:28:08.590 "claimed": true, 00:28:08.590 "claim_type": "exclusive_write", 00:28:08.590 "zoned": false, 00:28:08.590 "supported_io_types": { 00:28:08.590 "read": true, 00:28:08.590 "write": true, 00:28:08.590 "unmap": true, 00:28:08.590 "flush": true, 00:28:08.590 "reset": true, 00:28:08.591 "nvme_admin": false, 00:28:08.591 "nvme_io": false, 00:28:08.591 "nvme_io_md": false, 00:28:08.591 "write_zeroes": true, 00:28:08.591 "zcopy": true, 00:28:08.591 "get_zone_info": false, 00:28:08.591 "zone_management": false, 00:28:08.591 "zone_append": false, 00:28:08.591 "compare": false, 00:28:08.591 "compare_and_write": false, 00:28:08.591 "abort": true, 00:28:08.591 "seek_hole": false, 00:28:08.591 "seek_data": false, 00:28:08.591 "copy": true, 00:28:08.591 "nvme_iov_md": false 00:28:08.591 }, 00:28:08.591 "memory_domains": [ 00:28:08.591 { 00:28:08.591 "dma_device_id": "system", 00:28:08.591 "dma_device_type": 1 00:28:08.591 }, 00:28:08.591 { 00:28:08.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:08.591 "dma_device_type": 2 00:28:08.591 } 00:28:08.591 ], 00:28:08.591 "driver_specific": {} 00:28:08.591 } 00:28:08.591 ] 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.591 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.848 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:08.848 "name": "Existed_Raid", 00:28:08.848 "uuid": "669d9b81-22d4-4e83-816f-559b5ca926f7", 00:28:08.848 "strip_size_kb": 64, 00:28:08.848 "state": "online", 00:28:08.848 "raid_level": "raid5f", 00:28:08.848 "superblock": false, 00:28:08.848 "num_base_bdevs": 3, 00:28:08.848 "num_base_bdevs_discovered": 3, 00:28:08.848 "num_base_bdevs_operational": 3, 00:28:08.848 "base_bdevs_list": [ 00:28:08.848 { 00:28:08.848 "name": "BaseBdev1", 00:28:08.848 "uuid": "5b74ddf6-15ee-4945-8133-d16497fe951f", 00:28:08.848 "is_configured": true, 00:28:08.848 "data_offset": 0, 00:28:08.848 "data_size": 65536 00:28:08.848 }, 00:28:08.848 { 00:28:08.848 "name": "BaseBdev2", 00:28:08.848 "uuid": "7c9ac4a4-4733-42f8-b6f4-260ab1925042", 00:28:08.848 "is_configured": true, 00:28:08.848 "data_offset": 0, 00:28:08.848 "data_size": 65536 00:28:08.848 }, 00:28:08.848 { 00:28:08.848 "name": "BaseBdev3", 00:28:08.848 "uuid": "62b2b081-cbab-409e-be45-4a71e355d8d4", 00:28:08.848 "is_configured": true, 00:28:08.848 "data_offset": 0, 00:28:08.848 "data_size": 65536 00:28:08.848 } 00:28:08.848 ] 00:28:08.848 }' 00:28:08.849 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:08.849 00:12:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.106 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:09.106 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:09.106 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:09.106 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:09.106 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:09.106 
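verify_raid_bdev_properties, which starts in the trace just above, fetches the raid volume's descriptor, extracts the configured base bdev names from its base_bdevs_list, and then requires each format property (block_size, md_size, md_interleave, dif_type) to match between the volume and every base bdev; the repeated jq checks that follow are that loop unrolled. A condensed sketch of the same comparison, assuming the RPC socket and bdev names from this run:

  # Condensed form of the property checks traced below.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
  names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                 | select(.is_configured == true).name' <<< "$raid_info")
  for name in $names; do
      base_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      for prop in block_size md_size md_interleave dif_type; do
          [[ $(jq ".$prop" <<< "$raid_info") == "$(jq ".$prop" <<< "$base_info")" ]] \
              || echo "property $prop differs on $name"
      done
  done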
00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:09.106 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:09.106 00:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:09.363 [2024-07-25 00:12:05.045551] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:09.363 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:09.363 "name": "Existed_Raid", 00:28:09.363 "aliases": [ 00:28:09.363 "669d9b81-22d4-4e83-816f-559b5ca926f7" 00:28:09.363 ], 00:28:09.363 "product_name": "Raid Volume", 00:28:09.363 "block_size": 512, 00:28:09.363 "num_blocks": 131072, 00:28:09.363 "uuid": "669d9b81-22d4-4e83-816f-559b5ca926f7", 00:28:09.363 "assigned_rate_limits": { 00:28:09.363 "rw_ios_per_sec": 0, 00:28:09.363 "rw_mbytes_per_sec": 0, 00:28:09.363 "r_mbytes_per_sec": 0, 00:28:09.363 "w_mbytes_per_sec": 0 00:28:09.363 }, 00:28:09.363 "claimed": false, 00:28:09.363 "zoned": false, 00:28:09.363 "supported_io_types": { 00:28:09.363 "read": true, 00:28:09.363 "write": true, 00:28:09.363 "unmap": false, 00:28:09.363 "flush": false, 00:28:09.363 "reset": true, 00:28:09.363 "nvme_admin": false, 00:28:09.363 "nvme_io": false, 00:28:09.363 "nvme_io_md": false, 00:28:09.363 "write_zeroes": true, 00:28:09.363 "zcopy": false, 00:28:09.363 "get_zone_info": false, 00:28:09.363 "zone_management": false, 00:28:09.363 "zone_append": false, 00:28:09.363 "compare": false, 00:28:09.363 "compare_and_write": false, 00:28:09.363 "abort": false, 00:28:09.363 "seek_hole": false, 00:28:09.363 "seek_data": false, 00:28:09.363 "copy": false, 00:28:09.363 "nvme_iov_md": false 00:28:09.363 }, 00:28:09.363 "driver_specific": { 00:28:09.363 "raid": { 00:28:09.363 "uuid": "669d9b81-22d4-4e83-816f-559b5ca926f7", 00:28:09.363 "strip_size_kb": 64, 00:28:09.363 "state": "online", 00:28:09.363 "raid_level": "raid5f", 00:28:09.363 "superblock": false, 00:28:09.363 "num_base_bdevs": 3, 00:28:09.363 "num_base_bdevs_discovered": 3, 00:28:09.363 "num_base_bdevs_operational": 3, 00:28:09.363 "base_bdevs_list": [ 00:28:09.363 { 00:28:09.363 "name": "BaseBdev1", 00:28:09.363 "uuid": "5b74ddf6-15ee-4945-8133-d16497fe951f", 00:28:09.363 "is_configured": true, 00:28:09.363 "data_offset": 0, 00:28:09.363 "data_size": 65536 00:28:09.363 }, 00:28:09.363 { 00:28:09.363 "name": "BaseBdev2", 00:28:09.363 "uuid": "7c9ac4a4-4733-42f8-b6f4-260ab1925042", 00:28:09.363 "is_configured": true, 00:28:09.363 "data_offset": 0, 00:28:09.363 "data_size": 65536 00:28:09.363 }, 00:28:09.363 { 00:28:09.363 "name": "BaseBdev3", 00:28:09.363 "uuid": "62b2b081-cbab-409e-be45-4a71e355d8d4", 00:28:09.363 "is_configured": true, 00:28:09.363 "data_offset": 0, 00:28:09.363 "data_size": 65536 00:28:09.363 } 00:28:09.363 ] 00:28:09.363 } 00:28:09.363 } 00:28:09.363 }' 00:28:09.363 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:09.363 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:09.363 BaseBdev2 00:28:09.363 BaseBdev3' 00:28:09.363 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:09.363 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:09.363 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:09.621 "name": "BaseBdev1", 00:28:09.621 "aliases": [ 00:28:09.621 "5b74ddf6-15ee-4945-8133-d16497fe951f" 00:28:09.621 ], 00:28:09.621 "product_name": "Malloc disk", 00:28:09.621 "block_size": 512, 00:28:09.621 "num_blocks": 65536, 00:28:09.621 "uuid": "5b74ddf6-15ee-4945-8133-d16497fe951f", 00:28:09.621 "assigned_rate_limits": { 00:28:09.621 "rw_ios_per_sec": 0, 00:28:09.621 "rw_mbytes_per_sec": 0, 00:28:09.621 "r_mbytes_per_sec": 0, 00:28:09.621 "w_mbytes_per_sec": 0 00:28:09.621 }, 00:28:09.621 "claimed": true, 00:28:09.621 "claim_type": "exclusive_write", 00:28:09.621 "zoned": false, 00:28:09.621 "supported_io_types": { 00:28:09.621 "read": true, 00:28:09.621 "write": true, 00:28:09.621 "unmap": true, 00:28:09.621 "flush": true, 00:28:09.621 "reset": true, 00:28:09.621 "nvme_admin": false, 00:28:09.621 "nvme_io": false, 00:28:09.621 "nvme_io_md": false, 00:28:09.621 "write_zeroes": true, 00:28:09.621 "zcopy": true, 00:28:09.621 "get_zone_info": false, 00:28:09.621 "zone_management": false, 00:28:09.621 "zone_append": false, 00:28:09.621 "compare": false, 00:28:09.621 "compare_and_write": false, 00:28:09.621 "abort": true, 00:28:09.621 "seek_hole": false, 00:28:09.621 "seek_data": false, 00:28:09.621 "copy": true, 00:28:09.621 "nvme_iov_md": false 00:28:09.621 }, 00:28:09.621 "memory_domains": [ 00:28:09.621 { 00:28:09.621 "dma_device_id": "system", 00:28:09.621 "dma_device_type": 1 00:28:09.621 }, 00:28:09.621 { 00:28:09.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.621 "dma_device_type": 2 00:28:09.621 } 00:28:09.621 ], 00:28:09.621 "driver_specific": {} 00:28:09.621 }' 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:09.621 00:12:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:09.891 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:09.891 "name": "BaseBdev2", 00:28:09.891 "aliases": [ 00:28:09.891 "7c9ac4a4-4733-42f8-b6f4-260ab1925042" 00:28:09.891 ], 00:28:09.891 "product_name": "Malloc disk", 00:28:09.891 "block_size": 512, 00:28:09.891 "num_blocks": 65536, 00:28:09.891 "uuid": "7c9ac4a4-4733-42f8-b6f4-260ab1925042", 00:28:09.891 "assigned_rate_limits": { 00:28:09.891 "rw_ios_per_sec": 0, 00:28:09.891 "rw_mbytes_per_sec": 0, 00:28:09.891 "r_mbytes_per_sec": 0, 00:28:09.891 "w_mbytes_per_sec": 0 00:28:09.891 }, 00:28:09.891 "claimed": true, 00:28:09.891 "claim_type": "exclusive_write", 00:28:09.892 "zoned": false, 00:28:09.892 "supported_io_types": { 00:28:09.892 "read": true, 00:28:09.892 "write": true, 00:28:09.892 "unmap": true, 00:28:09.892 "flush": true, 00:28:09.892 "reset": true, 00:28:09.892 "nvme_admin": false, 00:28:09.892 "nvme_io": false, 00:28:09.892 "nvme_io_md": false, 00:28:09.892 "write_zeroes": true, 00:28:09.892 "zcopy": true, 00:28:09.892 "get_zone_info": false, 00:28:09.892 "zone_management": false, 00:28:09.892 "zone_append": false, 00:28:09.892 "compare": false, 00:28:09.892 "compare_and_write": false, 00:28:09.892 "abort": true, 00:28:09.892 "seek_hole": false, 00:28:09.892 "seek_data": false, 00:28:09.892 "copy": true, 00:28:09.892 "nvme_iov_md": false 00:28:09.892 }, 00:28:09.892 "memory_domains": [ 00:28:09.892 { 00:28:09.892 "dma_device_id": "system", 00:28:09.892 "dma_device_type": 1 00:28:09.892 }, 00:28:09.892 { 00:28:09.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.892 "dma_device_type": 2 00:28:09.892 } 00:28:09.892 ], 00:28:09.892 "driver_specific": {} 00:28:09.892 }' 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:09.892 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:10.166 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:10.166 "name": 
"BaseBdev3", 00:28:10.166 "aliases": [ 00:28:10.166 "62b2b081-cbab-409e-be45-4a71e355d8d4" 00:28:10.166 ], 00:28:10.166 "product_name": "Malloc disk", 00:28:10.166 "block_size": 512, 00:28:10.166 "num_blocks": 65536, 00:28:10.166 "uuid": "62b2b081-cbab-409e-be45-4a71e355d8d4", 00:28:10.166 "assigned_rate_limits": { 00:28:10.166 "rw_ios_per_sec": 0, 00:28:10.166 "rw_mbytes_per_sec": 0, 00:28:10.166 "r_mbytes_per_sec": 0, 00:28:10.166 "w_mbytes_per_sec": 0 00:28:10.166 }, 00:28:10.166 "claimed": true, 00:28:10.166 "claim_type": "exclusive_write", 00:28:10.166 "zoned": false, 00:28:10.166 "supported_io_types": { 00:28:10.166 "read": true, 00:28:10.166 "write": true, 00:28:10.166 "unmap": true, 00:28:10.166 "flush": true, 00:28:10.166 "reset": true, 00:28:10.166 "nvme_admin": false, 00:28:10.166 "nvme_io": false, 00:28:10.166 "nvme_io_md": false, 00:28:10.166 "write_zeroes": true, 00:28:10.166 "zcopy": true, 00:28:10.166 "get_zone_info": false, 00:28:10.166 "zone_management": false, 00:28:10.166 "zone_append": false, 00:28:10.166 "compare": false, 00:28:10.166 "compare_and_write": false, 00:28:10.166 "abort": true, 00:28:10.166 "seek_hole": false, 00:28:10.166 "seek_data": false, 00:28:10.166 "copy": true, 00:28:10.166 "nvme_iov_md": false 00:28:10.166 }, 00:28:10.166 "memory_domains": [ 00:28:10.166 { 00:28:10.166 "dma_device_id": "system", 00:28:10.166 "dma_device_type": 1 00:28:10.166 }, 00:28:10.166 { 00:28:10.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.166 "dma_device_type": 2 00:28:10.166 } 00:28:10.166 ], 00:28:10.166 "driver_specific": {} 00:28:10.166 }' 00:28:10.166 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.166 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.166 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:10.166 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.166 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.166 00:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:10.166 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.166 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.166 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:10.166 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.166 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.166 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:10.166 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:10.424 [2024-07-25 00:12:06.277635] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:10.682 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:10.682 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:28:10.682 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 
-- # return 0 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.683 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:10.941 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.941 "name": "Existed_Raid", 00:28:10.941 "uuid": "669d9b81-22d4-4e83-816f-559b5ca926f7", 00:28:10.941 "strip_size_kb": 64, 00:28:10.941 "state": "online", 00:28:10.941 "raid_level": "raid5f", 00:28:10.941 "superblock": false, 00:28:10.941 "num_base_bdevs": 3, 00:28:10.941 "num_base_bdevs_discovered": 2, 00:28:10.941 "num_base_bdevs_operational": 2, 00:28:10.941 "base_bdevs_list": [ 00:28:10.941 { 00:28:10.941 "name": null, 00:28:10.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.941 "is_configured": false, 00:28:10.941 "data_offset": 0, 00:28:10.941 "data_size": 65536 00:28:10.941 }, 00:28:10.941 { 00:28:10.941 "name": "BaseBdev2", 00:28:10.941 "uuid": "7c9ac4a4-4733-42f8-b6f4-260ab1925042", 00:28:10.941 "is_configured": true, 00:28:10.941 "data_offset": 0, 00:28:10.941 "data_size": 65536 00:28:10.941 }, 00:28:10.941 { 00:28:10.941 "name": "BaseBdev3", 00:28:10.941 "uuid": "62b2b081-cbab-409e-be45-4a71e355d8d4", 00:28:10.941 "is_configured": true, 00:28:10.941 "data_offset": 0, 00:28:10.941 "data_size": 65536 00:28:10.941 } 00:28:10.941 ] 00:28:10.941 }' 00:28:10.941 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.941 00:12:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.199 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:11.199 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:11.199 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.199 00:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:11.458 00:12:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:11.458 00:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:11.458 00:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:11.715 [2024-07-25 00:12:07.390885] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:11.716 [2024-07-25 00:12:07.391163] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:11.716 [2024-07-25 00:12:07.465620] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:11.716 00:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:11.716 00:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:11.716 00:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.716 00:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:12.026 00:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:12.026 00:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:12.026 00:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:12.284 [2024-07-25 00:12:07.933846] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:12.284 [2024-07-25 00:12:07.933911] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:28:12.284 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:12.284 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:12.284 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.284 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:12.543 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:12.543 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:12.543 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:28:12.543 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:28:12.543 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:12.543 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:12.802 BaseBdev2 00:28:12.802 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:28:12.802 00:12:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:12.802 00:12:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:28:12.802 00:12:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:12.802 00:12:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:12.802 00:12:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:12.802 00:12:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:13.061 00:12:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:13.320 [ 00:28:13.320 { 00:28:13.320 "name": "BaseBdev2", 00:28:13.320 "aliases": [ 00:28:13.320 "4d79c833-e006-4e42-be3a-2ad544595899" 00:28:13.320 ], 00:28:13.320 "product_name": "Malloc disk", 00:28:13.320 "block_size": 512, 00:28:13.320 "num_blocks": 65536, 00:28:13.320 "uuid": "4d79c833-e006-4e42-be3a-2ad544595899", 00:28:13.320 "assigned_rate_limits": { 00:28:13.320 "rw_ios_per_sec": 0, 00:28:13.320 "rw_mbytes_per_sec": 0, 00:28:13.320 "r_mbytes_per_sec": 0, 00:28:13.320 "w_mbytes_per_sec": 0 00:28:13.320 }, 00:28:13.320 "claimed": false, 00:28:13.320 "zoned": false, 00:28:13.320 "supported_io_types": { 00:28:13.320 "read": true, 00:28:13.320 "write": true, 00:28:13.320 "unmap": true, 00:28:13.320 "flush": true, 00:28:13.320 "reset": true, 00:28:13.320 "nvme_admin": false, 00:28:13.320 "nvme_io": false, 00:28:13.320 "nvme_io_md": false, 00:28:13.320 "write_zeroes": true, 00:28:13.320 "zcopy": true, 00:28:13.320 "get_zone_info": false, 00:28:13.320 "zone_management": false, 00:28:13.320 "zone_append": false, 00:28:13.320 "compare": false, 00:28:13.320 "compare_and_write": false, 00:28:13.320 "abort": true, 00:28:13.320 "seek_hole": false, 00:28:13.320 "seek_data": false, 00:28:13.320 "copy": true, 00:28:13.320 "nvme_iov_md": false 00:28:13.320 }, 00:28:13.320 "memory_domains": [ 00:28:13.320 { 00:28:13.320 "dma_device_id": "system", 00:28:13.320 "dma_device_type": 1 00:28:13.320 }, 00:28:13.320 { 00:28:13.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.320 "dma_device_type": 2 00:28:13.320 } 00:28:13.320 ], 00:28:13.320 "driver_specific": {} 00:28:13.320 } 00:28:13.320 ] 00:28:13.320 00:12:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:13.320 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:13.320 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:13.320 00:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:13.320 BaseBdev3 00:28:13.579 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:28:13.579 00:12:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:13.579 00:12:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:13.579 00:12:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:13.579 00:12:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:13.579 00:12:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:13.579 00:12:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:13.579 00:12:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:13.837 [ 00:28:13.837 { 00:28:13.837 "name": "BaseBdev3", 00:28:13.837 "aliases": [ 00:28:13.837 "35d2f25c-be72-4765-98b9-e480e0f8d71b" 00:28:13.837 ], 00:28:13.837 "product_name": "Malloc disk", 00:28:13.837 "block_size": 512, 00:28:13.837 "num_blocks": 65536, 00:28:13.837 "uuid": "35d2f25c-be72-4765-98b9-e480e0f8d71b", 00:28:13.837 "assigned_rate_limits": { 00:28:13.837 "rw_ios_per_sec": 0, 00:28:13.837 "rw_mbytes_per_sec": 0, 00:28:13.837 "r_mbytes_per_sec": 0, 00:28:13.837 "w_mbytes_per_sec": 0 00:28:13.837 }, 00:28:13.837 "claimed": false, 00:28:13.837 "zoned": false, 00:28:13.837 "supported_io_types": { 00:28:13.837 "read": true, 00:28:13.837 "write": true, 00:28:13.837 "unmap": true, 00:28:13.837 "flush": true, 00:28:13.837 "reset": true, 00:28:13.837 "nvme_admin": false, 00:28:13.837 "nvme_io": false, 00:28:13.837 "nvme_io_md": false, 00:28:13.837 "write_zeroes": true, 00:28:13.837 "zcopy": true, 00:28:13.837 "get_zone_info": false, 00:28:13.837 "zone_management": false, 00:28:13.837 "zone_append": false, 00:28:13.837 "compare": false, 00:28:13.837 "compare_and_write": false, 00:28:13.837 "abort": true, 00:28:13.837 "seek_hole": false, 00:28:13.837 "seek_data": false, 00:28:13.837 "copy": true, 00:28:13.837 "nvme_iov_md": false 00:28:13.837 }, 00:28:13.837 "memory_domains": [ 00:28:13.837 { 00:28:13.837 "dma_device_id": "system", 00:28:13.838 "dma_device_type": 1 00:28:13.838 }, 00:28:13.838 { 00:28:13.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.838 "dma_device_type": 2 00:28:13.838 } 00:28:13.838 ], 00:28:13.838 "driver_specific": {} 00:28:13.838 } 00:28:13.838 ] 00:28:13.838 00:12:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:13.838 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:13.838 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:13.838 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:14.096 [2024-07-25 00:12:09.804415] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:14.096 [2024-07-25 00:12:09.804468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:14.096 [2024-07-25 00:12:09.804513] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:14.096 [2024-07-25 00:12:09.806453] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:14.096 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:14.096 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:14.096 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:14.096 00:12:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:14.096 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:14.096 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:14.096 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:14.096 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:14.096 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:14.096 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:14.096 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:14.096 00:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.354 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:14.354 "name": "Existed_Raid", 00:28:14.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.354 "strip_size_kb": 64, 00:28:14.354 "state": "configuring", 00:28:14.354 "raid_level": "raid5f", 00:28:14.354 "superblock": false, 00:28:14.354 "num_base_bdevs": 3, 00:28:14.354 "num_base_bdevs_discovered": 2, 00:28:14.354 "num_base_bdevs_operational": 3, 00:28:14.354 "base_bdevs_list": [ 00:28:14.354 { 00:28:14.354 "name": "BaseBdev1", 00:28:14.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.354 "is_configured": false, 00:28:14.354 "data_offset": 0, 00:28:14.354 "data_size": 0 00:28:14.354 }, 00:28:14.354 { 00:28:14.354 "name": "BaseBdev2", 00:28:14.354 "uuid": "4d79c833-e006-4e42-be3a-2ad544595899", 00:28:14.354 "is_configured": true, 00:28:14.354 "data_offset": 0, 00:28:14.354 "data_size": 65536 00:28:14.354 }, 00:28:14.354 { 00:28:14.354 "name": "BaseBdev3", 00:28:14.354 "uuid": "35d2f25c-be72-4765-98b9-e480e0f8d71b", 00:28:14.354 "is_configured": true, 00:28:14.354 "data_offset": 0, 00:28:14.354 "data_size": 65536 00:28:14.354 } 00:28:14.354 ] 00:28:14.354 }' 00:28:14.354 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:14.354 00:12:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.613 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:14.872 [2024-07-25 00:12:10.632705] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:14.872 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:14.872 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:14.872 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:14.872 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:14.872 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:14.872 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:14.872 
00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:14.872 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:14.872 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:14.872 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:14.872 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:14.872 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.138 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:15.138 "name": "Existed_Raid", 00:28:15.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.138 "strip_size_kb": 64, 00:28:15.138 "state": "configuring", 00:28:15.138 "raid_level": "raid5f", 00:28:15.138 "superblock": false, 00:28:15.138 "num_base_bdevs": 3, 00:28:15.138 "num_base_bdevs_discovered": 1, 00:28:15.138 "num_base_bdevs_operational": 3, 00:28:15.138 "base_bdevs_list": [ 00:28:15.138 { 00:28:15.138 "name": "BaseBdev1", 00:28:15.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.138 "is_configured": false, 00:28:15.138 "data_offset": 0, 00:28:15.138 "data_size": 0 00:28:15.138 }, 00:28:15.138 { 00:28:15.138 "name": null, 00:28:15.138 "uuid": "4d79c833-e006-4e42-be3a-2ad544595899", 00:28:15.138 "is_configured": false, 00:28:15.138 "data_offset": 0, 00:28:15.138 "data_size": 65536 00:28:15.138 }, 00:28:15.138 { 00:28:15.138 "name": "BaseBdev3", 00:28:15.138 "uuid": "35d2f25c-be72-4765-98b9-e480e0f8d71b", 00:28:15.138 "is_configured": true, 00:28:15.138 "data_offset": 0, 00:28:15.138 "data_size": 65536 00:28:15.138 } 00:28:15.138 ] 00:28:15.138 }' 00:28:15.138 00:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:15.138 00:12:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.398 00:12:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.398 00:12:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:15.656 00:12:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:28:15.656 00:12:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:15.914 [2024-07-25 00:12:11.724906] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:15.914 BaseBdev1 00:28:15.914 00:12:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:28:15.914 00:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:15.914 00:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:15.914 00:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:15.914 00:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:15.914 00:12:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:15.914 00:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:16.173 00:12:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:16.431 [ 00:28:16.431 { 00:28:16.431 "name": "BaseBdev1", 00:28:16.431 "aliases": [ 00:28:16.431 "196153c8-8c0d-4c62-a8a0-04f6fad02c88" 00:28:16.431 ], 00:28:16.431 "product_name": "Malloc disk", 00:28:16.431 "block_size": 512, 00:28:16.431 "num_blocks": 65536, 00:28:16.431 "uuid": "196153c8-8c0d-4c62-a8a0-04f6fad02c88", 00:28:16.431 "assigned_rate_limits": { 00:28:16.431 "rw_ios_per_sec": 0, 00:28:16.431 "rw_mbytes_per_sec": 0, 00:28:16.431 "r_mbytes_per_sec": 0, 00:28:16.431 "w_mbytes_per_sec": 0 00:28:16.431 }, 00:28:16.431 "claimed": true, 00:28:16.431 "claim_type": "exclusive_write", 00:28:16.431 "zoned": false, 00:28:16.431 "supported_io_types": { 00:28:16.431 "read": true, 00:28:16.431 "write": true, 00:28:16.431 "unmap": true, 00:28:16.431 "flush": true, 00:28:16.431 "reset": true, 00:28:16.431 "nvme_admin": false, 00:28:16.431 "nvme_io": false, 00:28:16.431 "nvme_io_md": false, 00:28:16.431 "write_zeroes": true, 00:28:16.431 "zcopy": true, 00:28:16.431 "get_zone_info": false, 00:28:16.431 "zone_management": false, 00:28:16.431 "zone_append": false, 00:28:16.431 "compare": false, 00:28:16.431 "compare_and_write": false, 00:28:16.431 "abort": true, 00:28:16.431 "seek_hole": false, 00:28:16.431 "seek_data": false, 00:28:16.431 "copy": true, 00:28:16.431 "nvme_iov_md": false 00:28:16.431 }, 00:28:16.431 "memory_domains": [ 00:28:16.431 { 00:28:16.431 "dma_device_id": "system", 00:28:16.431 "dma_device_type": 1 00:28:16.431 }, 00:28:16.431 { 00:28:16.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.431 "dma_device_type": 2 00:28:16.431 } 00:28:16.431 ], 00:28:16.431 "driver_specific": {} 00:28:16.431 } 00:28:16.431 ] 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.431 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:16.690 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:16.690 "name": "Existed_Raid", 00:28:16.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.690 "strip_size_kb": 64, 00:28:16.690 "state": "configuring", 00:28:16.690 "raid_level": "raid5f", 00:28:16.690 "superblock": false, 00:28:16.690 "num_base_bdevs": 3, 00:28:16.690 "num_base_bdevs_discovered": 2, 00:28:16.690 "num_base_bdevs_operational": 3, 00:28:16.690 "base_bdevs_list": [ 00:28:16.690 { 00:28:16.690 "name": "BaseBdev1", 00:28:16.690 "uuid": "196153c8-8c0d-4c62-a8a0-04f6fad02c88", 00:28:16.690 "is_configured": true, 00:28:16.690 "data_offset": 0, 00:28:16.690 "data_size": 65536 00:28:16.690 }, 00:28:16.690 { 00:28:16.690 "name": null, 00:28:16.690 "uuid": "4d79c833-e006-4e42-be3a-2ad544595899", 00:28:16.690 "is_configured": false, 00:28:16.690 "data_offset": 0, 00:28:16.690 "data_size": 65536 00:28:16.690 }, 00:28:16.690 { 00:28:16.690 "name": "BaseBdev3", 00:28:16.690 "uuid": "35d2f25c-be72-4765-98b9-e480e0f8d71b", 00:28:16.690 "is_configured": true, 00:28:16.690 "data_offset": 0, 00:28:16.690 "data_size": 65536 00:28:16.690 } 00:28:16.690 ] 00:28:16.690 }' 00:28:16.690 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:16.690 00:12:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.948 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.948 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:17.206 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:28:17.206 00:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:28:17.465 [2024-07-25 00:12:13.169460] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.465 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:17.724 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:17.724 "name": "Existed_Raid", 00:28:17.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.724 "strip_size_kb": 64, 00:28:17.724 "state": "configuring", 00:28:17.724 "raid_level": "raid5f", 00:28:17.724 "superblock": false, 00:28:17.724 "num_base_bdevs": 3, 00:28:17.724 "num_base_bdevs_discovered": 1, 00:28:17.724 "num_base_bdevs_operational": 3, 00:28:17.724 "base_bdevs_list": [ 00:28:17.724 { 00:28:17.724 "name": "BaseBdev1", 00:28:17.724 "uuid": "196153c8-8c0d-4c62-a8a0-04f6fad02c88", 00:28:17.724 "is_configured": true, 00:28:17.724 "data_offset": 0, 00:28:17.724 "data_size": 65536 00:28:17.724 }, 00:28:17.724 { 00:28:17.724 "name": null, 00:28:17.724 "uuid": "4d79c833-e006-4e42-be3a-2ad544595899", 00:28:17.724 "is_configured": false, 00:28:17.724 "data_offset": 0, 00:28:17.724 "data_size": 65536 00:28:17.724 }, 00:28:17.724 { 00:28:17.724 "name": null, 00:28:17.724 "uuid": "35d2f25c-be72-4765-98b9-e480e0f8d71b", 00:28:17.724 "is_configured": false, 00:28:17.724 "data_offset": 0, 00:28:17.724 "data_size": 65536 00:28:17.724 } 00:28:17.724 ] 00:28:17.724 }' 00:28:17.724 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:17.724 00:12:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:17.982 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.982 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:18.239 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:28:18.239 00:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:18.496 [2024-07-25 00:12:14.181776] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.496 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:18.754 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:18.754 "name": "Existed_Raid", 00:28:18.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.754 "strip_size_kb": 64, 00:28:18.754 "state": "configuring", 00:28:18.754 "raid_level": "raid5f", 00:28:18.754 "superblock": false, 00:28:18.754 "num_base_bdevs": 3, 00:28:18.754 "num_base_bdevs_discovered": 2, 00:28:18.754 "num_base_bdevs_operational": 3, 00:28:18.754 "base_bdevs_list": [ 00:28:18.754 { 00:28:18.754 "name": "BaseBdev1", 00:28:18.754 "uuid": "196153c8-8c0d-4c62-a8a0-04f6fad02c88", 00:28:18.754 "is_configured": true, 00:28:18.754 "data_offset": 0, 00:28:18.754 "data_size": 65536 00:28:18.754 }, 00:28:18.754 { 00:28:18.754 "name": null, 00:28:18.754 "uuid": "4d79c833-e006-4e42-be3a-2ad544595899", 00:28:18.754 "is_configured": false, 00:28:18.754 "data_offset": 0, 00:28:18.754 "data_size": 65536 00:28:18.754 }, 00:28:18.754 { 00:28:18.754 "name": "BaseBdev3", 00:28:18.754 "uuid": "35d2f25c-be72-4765-98b9-e480e0f8d71b", 00:28:18.754 "is_configured": true, 00:28:18.754 "data_offset": 0, 00:28:18.754 "data_size": 65536 00:28:18.754 } 00:28:18.754 ] 00:28:18.754 }' 00:28:18.754 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:18.754 00:12:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.012 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.012 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:19.270 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:28:19.270 00:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:19.528 [2024-07-25 00:12:15.189993] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:19.528 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:19.528 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:19.528 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:19.528 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:19.528 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:19.528 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:19.528 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:19.528 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:19.528 
00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:19.528 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:19.528 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.528 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.787 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:19.787 "name": "Existed_Raid", 00:28:19.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.787 "strip_size_kb": 64, 00:28:19.787 "state": "configuring", 00:28:19.787 "raid_level": "raid5f", 00:28:19.787 "superblock": false, 00:28:19.787 "num_base_bdevs": 3, 00:28:19.787 "num_base_bdevs_discovered": 1, 00:28:19.787 "num_base_bdevs_operational": 3, 00:28:19.787 "base_bdevs_list": [ 00:28:19.787 { 00:28:19.787 "name": null, 00:28:19.787 "uuid": "196153c8-8c0d-4c62-a8a0-04f6fad02c88", 00:28:19.787 "is_configured": false, 00:28:19.787 "data_offset": 0, 00:28:19.787 "data_size": 65536 00:28:19.787 }, 00:28:19.787 { 00:28:19.787 "name": null, 00:28:19.787 "uuid": "4d79c833-e006-4e42-be3a-2ad544595899", 00:28:19.787 "is_configured": false, 00:28:19.787 "data_offset": 0, 00:28:19.787 "data_size": 65536 00:28:19.787 }, 00:28:19.787 { 00:28:19.787 "name": "BaseBdev3", 00:28:19.787 "uuid": "35d2f25c-be72-4765-98b9-e480e0f8d71b", 00:28:19.787 "is_configured": true, 00:28:19.787 "data_offset": 0, 00:28:19.787 "data_size": 65536 00:28:19.787 } 00:28:19.787 ] 00:28:19.787 }' 00:28:19.787 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:19.787 00:12:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:20.045 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.045 00:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:20.303 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:28:20.303 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:20.561 [2024-07-25 00:12:16.203109] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:20.561 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:20.561 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:20.561 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:20.561 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:20.561 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:20.562 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:20.562 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:28:20.562 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:20.562 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:20.562 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:20.562 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:20.562 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.562 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:20.562 "name": "Existed_Raid", 00:28:20.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:20.562 "strip_size_kb": 64, 00:28:20.562 "state": "configuring", 00:28:20.562 "raid_level": "raid5f", 00:28:20.562 "superblock": false, 00:28:20.562 "num_base_bdevs": 3, 00:28:20.562 "num_base_bdevs_discovered": 2, 00:28:20.562 "num_base_bdevs_operational": 3, 00:28:20.562 "base_bdevs_list": [ 00:28:20.562 { 00:28:20.562 "name": null, 00:28:20.562 "uuid": "196153c8-8c0d-4c62-a8a0-04f6fad02c88", 00:28:20.562 "is_configured": false, 00:28:20.562 "data_offset": 0, 00:28:20.562 "data_size": 65536 00:28:20.562 }, 00:28:20.562 { 00:28:20.562 "name": "BaseBdev2", 00:28:20.562 "uuid": "4d79c833-e006-4e42-be3a-2ad544595899", 00:28:20.562 "is_configured": true, 00:28:20.562 "data_offset": 0, 00:28:20.562 "data_size": 65536 00:28:20.562 }, 00:28:20.562 { 00:28:20.562 "name": "BaseBdev3", 00:28:20.562 "uuid": "35d2f25c-be72-4765-98b9-e480e0f8d71b", 00:28:20.562 "is_configured": true, 00:28:20.562 "data_offset": 0, 00:28:20.562 "data_size": 65536 00:28:20.562 } 00:28:20.562 ] 00:28:20.562 }' 00:28:20.562 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:20.562 00:12:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.149 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.149 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:21.150 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:28:21.150 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.150 00:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:21.419 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 196153c8-8c0d-4c62-a8a0-04f6fad02c88 00:28:21.679 [2024-07-25 00:12:17.391999] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:21.679 [2024-07-25 00:12:17.392048] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:28:21.679 [2024-07-25 00:12:17.392062] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:28:21.679 [2024-07-25 00:12:17.392155] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 
00:28:21.679 NewBaseBdev 00:28:21.679 [2024-07-25 00:12:17.396381] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:28:21.679 [2024-07-25 00:12:17.396404] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008a80 00:28:21.679 [2024-07-25 00:12:17.396651] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:21.679 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:28:21.679 00:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:28:21.679 00:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:21.679 00:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:21.679 00:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:21.679 00:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:21.679 00:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:21.937 00:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:22.195 [ 00:28:22.195 { 00:28:22.195 "name": "NewBaseBdev", 00:28:22.195 "aliases": [ 00:28:22.195 "196153c8-8c0d-4c62-a8a0-04f6fad02c88" 00:28:22.195 ], 00:28:22.195 "product_name": "Malloc disk", 00:28:22.195 "block_size": 512, 00:28:22.195 "num_blocks": 65536, 00:28:22.195 "uuid": "196153c8-8c0d-4c62-a8a0-04f6fad02c88", 00:28:22.195 "assigned_rate_limits": { 00:28:22.195 "rw_ios_per_sec": 0, 00:28:22.195 "rw_mbytes_per_sec": 0, 00:28:22.195 "r_mbytes_per_sec": 0, 00:28:22.195 "w_mbytes_per_sec": 0 00:28:22.195 }, 00:28:22.195 "claimed": true, 00:28:22.195 "claim_type": "exclusive_write", 00:28:22.195 "zoned": false, 00:28:22.195 "supported_io_types": { 00:28:22.195 "read": true, 00:28:22.195 "write": true, 00:28:22.195 "unmap": true, 00:28:22.195 "flush": true, 00:28:22.195 "reset": true, 00:28:22.195 "nvme_admin": false, 00:28:22.195 "nvme_io": false, 00:28:22.195 "nvme_io_md": false, 00:28:22.195 "write_zeroes": true, 00:28:22.195 "zcopy": true, 00:28:22.196 "get_zone_info": false, 00:28:22.196 "zone_management": false, 00:28:22.196 "zone_append": false, 00:28:22.196 "compare": false, 00:28:22.196 "compare_and_write": false, 00:28:22.196 "abort": true, 00:28:22.196 "seek_hole": false, 00:28:22.196 "seek_data": false, 00:28:22.196 "copy": true, 00:28:22.196 "nvme_iov_md": false 00:28:22.196 }, 00:28:22.196 "memory_domains": [ 00:28:22.196 { 00:28:22.196 "dma_device_id": "system", 00:28:22.196 "dma_device_type": 1 00:28:22.196 }, 00:28:22.196 { 00:28:22.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:22.196 "dma_device_type": 2 00:28:22.196 } 00:28:22.196 ], 00:28:22.196 "driver_specific": {} 00:28:22.196 } 00:28:22.196 ] 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.196 00:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:22.455 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:22.455 "name": "Existed_Raid", 00:28:22.455 "uuid": "395c1c68-4c8f-4dae-a216-3ab165fc4414", 00:28:22.455 "strip_size_kb": 64, 00:28:22.455 "state": "online", 00:28:22.455 "raid_level": "raid5f", 00:28:22.455 "superblock": false, 00:28:22.455 "num_base_bdevs": 3, 00:28:22.455 "num_base_bdevs_discovered": 3, 00:28:22.455 "num_base_bdevs_operational": 3, 00:28:22.455 "base_bdevs_list": [ 00:28:22.455 { 00:28:22.455 "name": "NewBaseBdev", 00:28:22.455 "uuid": "196153c8-8c0d-4c62-a8a0-04f6fad02c88", 00:28:22.455 "is_configured": true, 00:28:22.455 "data_offset": 0, 00:28:22.455 "data_size": 65536 00:28:22.455 }, 00:28:22.455 { 00:28:22.455 "name": "BaseBdev2", 00:28:22.455 "uuid": "4d79c833-e006-4e42-be3a-2ad544595899", 00:28:22.455 "is_configured": true, 00:28:22.455 "data_offset": 0, 00:28:22.455 "data_size": 65536 00:28:22.455 }, 00:28:22.455 { 00:28:22.455 "name": "BaseBdev3", 00:28:22.455 "uuid": "35d2f25c-be72-4765-98b9-e480e0f8d71b", 00:28:22.455 "is_configured": true, 00:28:22.455 "data_offset": 0, 00:28:22.455 "data_size": 65536 00:28:22.455 } 00:28:22.455 ] 00:28:22.455 }' 00:28:22.455 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:22.455 00:12:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:22.714 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:28:22.714 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:22.714 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:22.714 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:22.714 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:22.714 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:22.714 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:22.714 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:22.714 [2024-07-25 00:12:18.565778] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:22.972 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:22.972 "name": "Existed_Raid", 00:28:22.972 "aliases": [ 00:28:22.972 "395c1c68-4c8f-4dae-a216-3ab165fc4414" 00:28:22.972 ], 00:28:22.972 "product_name": "Raid Volume", 00:28:22.972 "block_size": 512, 00:28:22.972 "num_blocks": 131072, 00:28:22.972 "uuid": "395c1c68-4c8f-4dae-a216-3ab165fc4414", 00:28:22.972 "assigned_rate_limits": { 00:28:22.972 "rw_ios_per_sec": 0, 00:28:22.972 "rw_mbytes_per_sec": 0, 00:28:22.972 "r_mbytes_per_sec": 0, 00:28:22.972 "w_mbytes_per_sec": 0 00:28:22.972 }, 00:28:22.972 "claimed": false, 00:28:22.972 "zoned": false, 00:28:22.972 "supported_io_types": { 00:28:22.972 "read": true, 00:28:22.972 "write": true, 00:28:22.972 "unmap": false, 00:28:22.972 "flush": false, 00:28:22.972 "reset": true, 00:28:22.972 "nvme_admin": false, 00:28:22.972 "nvme_io": false, 00:28:22.972 "nvme_io_md": false, 00:28:22.972 "write_zeroes": true, 00:28:22.972 "zcopy": false, 00:28:22.972 "get_zone_info": false, 00:28:22.972 "zone_management": false, 00:28:22.972 "zone_append": false, 00:28:22.972 "compare": false, 00:28:22.972 "compare_and_write": false, 00:28:22.972 "abort": false, 00:28:22.972 "seek_hole": false, 00:28:22.972 "seek_data": false, 00:28:22.973 "copy": false, 00:28:22.973 "nvme_iov_md": false 00:28:22.973 }, 00:28:22.973 "driver_specific": { 00:28:22.973 "raid": { 00:28:22.973 "uuid": "395c1c68-4c8f-4dae-a216-3ab165fc4414", 00:28:22.973 "strip_size_kb": 64, 00:28:22.973 "state": "online", 00:28:22.973 "raid_level": "raid5f", 00:28:22.973 "superblock": false, 00:28:22.973 "num_base_bdevs": 3, 00:28:22.973 "num_base_bdevs_discovered": 3, 00:28:22.973 "num_base_bdevs_operational": 3, 00:28:22.973 "base_bdevs_list": [ 00:28:22.973 { 00:28:22.973 "name": "NewBaseBdev", 00:28:22.973 "uuid": "196153c8-8c0d-4c62-a8a0-04f6fad02c88", 00:28:22.973 "is_configured": true, 00:28:22.973 "data_offset": 0, 00:28:22.973 "data_size": 65536 00:28:22.973 }, 00:28:22.973 { 00:28:22.973 "name": "BaseBdev2", 00:28:22.973 "uuid": "4d79c833-e006-4e42-be3a-2ad544595899", 00:28:22.973 "is_configured": true, 00:28:22.973 "data_offset": 0, 00:28:22.973 "data_size": 65536 00:28:22.973 }, 00:28:22.973 { 00:28:22.973 "name": "BaseBdev3", 00:28:22.973 "uuid": "35d2f25c-be72-4765-98b9-e480e0f8d71b", 00:28:22.973 "is_configured": true, 00:28:22.973 "data_offset": 0, 00:28:22.973 "data_size": 65536 00:28:22.973 } 00:28:22.973 ] 00:28:22.973 } 00:28:22.973 } 00:28:22.973 }' 00:28:22.973 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:22.973 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:28:22.973 BaseBdev2 00:28:22.973 BaseBdev3' 00:28:22.973 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:22.973 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:28:22.973 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:28:23.231 "name": "NewBaseBdev", 00:28:23.231 "aliases": [ 00:28:23.231 "196153c8-8c0d-4c62-a8a0-04f6fad02c88" 00:28:23.231 ], 00:28:23.231 "product_name": "Malloc disk", 00:28:23.231 "block_size": 512, 00:28:23.231 "num_blocks": 65536, 00:28:23.231 "uuid": "196153c8-8c0d-4c62-a8a0-04f6fad02c88", 00:28:23.231 "assigned_rate_limits": { 00:28:23.231 "rw_ios_per_sec": 0, 00:28:23.231 "rw_mbytes_per_sec": 0, 00:28:23.231 "r_mbytes_per_sec": 0, 00:28:23.231 "w_mbytes_per_sec": 0 00:28:23.231 }, 00:28:23.231 "claimed": true, 00:28:23.231 "claim_type": "exclusive_write", 00:28:23.231 "zoned": false, 00:28:23.231 "supported_io_types": { 00:28:23.231 "read": true, 00:28:23.231 "write": true, 00:28:23.231 "unmap": true, 00:28:23.231 "flush": true, 00:28:23.231 "reset": true, 00:28:23.231 "nvme_admin": false, 00:28:23.231 "nvme_io": false, 00:28:23.231 "nvme_io_md": false, 00:28:23.231 "write_zeroes": true, 00:28:23.231 "zcopy": true, 00:28:23.231 "get_zone_info": false, 00:28:23.231 "zone_management": false, 00:28:23.231 "zone_append": false, 00:28:23.231 "compare": false, 00:28:23.231 "compare_and_write": false, 00:28:23.231 "abort": true, 00:28:23.231 "seek_hole": false, 00:28:23.231 "seek_data": false, 00:28:23.231 "copy": true, 00:28:23.231 "nvme_iov_md": false 00:28:23.231 }, 00:28:23.231 "memory_domains": [ 00:28:23.231 { 00:28:23.231 "dma_device_id": "system", 00:28:23.231 "dma_device_type": 1 00:28:23.231 }, 00:28:23.231 { 00:28:23.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.231 "dma_device_type": 2 00:28:23.231 } 00:28:23.231 ], 00:28:23.231 "driver_specific": {} 00:28:23.231 }' 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:23.231 00:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:23.490 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:23.490 "name": "BaseBdev2", 00:28:23.490 "aliases": [ 00:28:23.490 "4d79c833-e006-4e42-be3a-2ad544595899" 00:28:23.490 ], 00:28:23.490 "product_name": "Malloc 
disk", 00:28:23.490 "block_size": 512, 00:28:23.490 "num_blocks": 65536, 00:28:23.490 "uuid": "4d79c833-e006-4e42-be3a-2ad544595899", 00:28:23.490 "assigned_rate_limits": { 00:28:23.490 "rw_ios_per_sec": 0, 00:28:23.490 "rw_mbytes_per_sec": 0, 00:28:23.490 "r_mbytes_per_sec": 0, 00:28:23.490 "w_mbytes_per_sec": 0 00:28:23.490 }, 00:28:23.490 "claimed": true, 00:28:23.490 "claim_type": "exclusive_write", 00:28:23.491 "zoned": false, 00:28:23.491 "supported_io_types": { 00:28:23.491 "read": true, 00:28:23.491 "write": true, 00:28:23.491 "unmap": true, 00:28:23.491 "flush": true, 00:28:23.491 "reset": true, 00:28:23.491 "nvme_admin": false, 00:28:23.491 "nvme_io": false, 00:28:23.491 "nvme_io_md": false, 00:28:23.491 "write_zeroes": true, 00:28:23.491 "zcopy": true, 00:28:23.491 "get_zone_info": false, 00:28:23.491 "zone_management": false, 00:28:23.491 "zone_append": false, 00:28:23.491 "compare": false, 00:28:23.491 "compare_and_write": false, 00:28:23.491 "abort": true, 00:28:23.491 "seek_hole": false, 00:28:23.491 "seek_data": false, 00:28:23.491 "copy": true, 00:28:23.491 "nvme_iov_md": false 00:28:23.491 }, 00:28:23.491 "memory_domains": [ 00:28:23.491 { 00:28:23.491 "dma_device_id": "system", 00:28:23.491 "dma_device_type": 1 00:28:23.491 }, 00:28:23.491 { 00:28:23.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.491 "dma_device_type": 2 00:28:23.491 } 00:28:23.491 ], 00:28:23.491 "driver_specific": {} 00:28:23.491 }' 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:23.491 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:23.750 "name": "BaseBdev3", 00:28:23.750 "aliases": [ 00:28:23.750 "35d2f25c-be72-4765-98b9-e480e0f8d71b" 00:28:23.750 ], 00:28:23.750 "product_name": "Malloc disk", 00:28:23.750 "block_size": 512, 00:28:23.750 "num_blocks": 65536, 00:28:23.750 "uuid": "35d2f25c-be72-4765-98b9-e480e0f8d71b", 00:28:23.750 
"assigned_rate_limits": { 00:28:23.750 "rw_ios_per_sec": 0, 00:28:23.750 "rw_mbytes_per_sec": 0, 00:28:23.750 "r_mbytes_per_sec": 0, 00:28:23.750 "w_mbytes_per_sec": 0 00:28:23.750 }, 00:28:23.750 "claimed": true, 00:28:23.750 "claim_type": "exclusive_write", 00:28:23.750 "zoned": false, 00:28:23.750 "supported_io_types": { 00:28:23.750 "read": true, 00:28:23.750 "write": true, 00:28:23.750 "unmap": true, 00:28:23.750 "flush": true, 00:28:23.750 "reset": true, 00:28:23.750 "nvme_admin": false, 00:28:23.750 "nvme_io": false, 00:28:23.750 "nvme_io_md": false, 00:28:23.750 "write_zeroes": true, 00:28:23.750 "zcopy": true, 00:28:23.750 "get_zone_info": false, 00:28:23.750 "zone_management": false, 00:28:23.750 "zone_append": false, 00:28:23.750 "compare": false, 00:28:23.750 "compare_and_write": false, 00:28:23.750 "abort": true, 00:28:23.750 "seek_hole": false, 00:28:23.750 "seek_data": false, 00:28:23.750 "copy": true, 00:28:23.750 "nvme_iov_md": false 00:28:23.750 }, 00:28:23.750 "memory_domains": [ 00:28:23.750 { 00:28:23.750 "dma_device_id": "system", 00:28:23.750 "dma_device_type": 1 00:28:23.750 }, 00:28:23.750 { 00:28:23.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.750 "dma_device_type": 2 00:28:23.750 } 00:28:23.750 ], 00:28:23.750 "driver_specific": {} 00:28:23.750 }' 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:23.750 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:24.009 [2024-07-25 00:12:19.829992] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:24.009 [2024-07-25 00:12:19.830211] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:24.009 [2024-07-25 00:12:19.830309] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:24.009 [2024-07-25 00:12:19.830629] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:24.009 [2024-07-25 00:12:19.830683] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name Existed_Raid, state offline 00:28:24.009 00:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 102229 00:28:24.009 
00:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 102229 ']' 00:28:24.009 00:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 102229 00:28:24.009 00:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:28:24.009 00:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:24.009 00:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102229 00:28:24.268 killing process with pid 102229 00:28:24.268 00:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:24.268 00:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:24.268 00:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102229' 00:28:24.268 00:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 102229 00:28:24.268 [2024-07-25 00:12:19.882709] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:24.268 00:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 102229 00:28:24.268 [2024-07-25 00:12:20.104271] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:25.646 ************************************ 00:28:25.646 END TEST raid5f_state_function_test 00:28:25.646 ************************************ 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:28:25.646 00:28:25.646 real 0m23.338s 00:28:25.646 user 0m40.797s 00:28:25.646 sys 0m3.665s 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.646 00:12:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:28:25.646 00:12:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:25.646 00:12:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:25.646 00:12:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:25.646 ************************************ 00:28:25.646 START TEST raid5f_state_function_test_sb 00:28:25.646 ************************************ 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:25.646 
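# Around this point bdev_raid.sh@224-@226 is building the list of base bdev
# names with a (( i )) counter loop; the traced '(( i = 1 ))' / 'echo BaseBdevN'
# / '(( i++ ))' steps and the resulting base_bdevs=('BaseBdev1' 'BaseBdev2'
# 'BaseBdev3') array suggest this idiom (a sketch; exact quoting is an assumption):
num_base_bdevs=3
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))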
00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=103097 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 103097' 00:28:25.646 Process raid pid: 103097 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 103097 /var/tmp/spdk-raid.sock 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 103097 ']' 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:25.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
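# raid_state_function_test has just started the RPC daemon: bdev_svc was
# launched in the background as raid_pid=103097 and waitforlisten now blocks
# until the UNIX socket answers. A minimal sketch of that start-and-wait
# pattern; the polling body of waitforlisten is an assumption (rpc_get_methods
# is a standard SPDK RPC), and $rootdir stands for /home/vagrant/spdk_repo/spdk:
"$rootdir/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
while ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-raid.sock rpc_get_methods &> /dev/null; do
    kill -0 "$raid_pid"   # give up if the daemon died before listening
    sleep 0.1
done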
00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:25.646 00:12:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:25.647 [2024-07-25 00:12:21.230396] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:28:25.647 [2024-07-25 00:12:21.230564] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.647 [2024-07-25 00:12:21.391469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.906 [2024-07-25 00:12:21.557949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.906 [2024-07-25 00:12:21.710721] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:26.473 00:12:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:26.473 00:12:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:28:26.473 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:26.732 [2024-07-25 00:12:22.389096] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:26.732 [2024-07-25 00:12:22.389169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:26.732 [2024-07-25 00:12:22.389183] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:26.732 [2024-07-25 00:12:22.389197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:26.732 [2024-07-25 00:12:22.389207] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:26.732 [2024-07-25 00:12:22.389219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.732 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:26.991 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:26.991 "name": "Existed_Raid", 00:28:26.991 "uuid": "1effe5da-938c-476b-ae10-f914139045ec", 00:28:26.991 "strip_size_kb": 64, 00:28:26.991 "state": "configuring", 00:28:26.991 "raid_level": "raid5f", 00:28:26.991 "superblock": true, 00:28:26.991 "num_base_bdevs": 3, 00:28:26.991 "num_base_bdevs_discovered": 0, 00:28:26.991 "num_base_bdevs_operational": 3, 00:28:26.991 "base_bdevs_list": [ 00:28:26.991 { 00:28:26.991 "name": "BaseBdev1", 00:28:26.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.991 "is_configured": false, 00:28:26.991 "data_offset": 0, 00:28:26.991 "data_size": 0 00:28:26.991 }, 00:28:26.991 { 00:28:26.991 "name": "BaseBdev2", 00:28:26.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.991 "is_configured": false, 00:28:26.991 "data_offset": 0, 00:28:26.991 "data_size": 0 00:28:26.991 }, 00:28:26.991 { 00:28:26.991 "name": "BaseBdev3", 00:28:26.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.991 "is_configured": false, 00:28:26.991 "data_offset": 0, 00:28:26.991 "data_size": 0 00:28:26.991 } 00:28:26.991 ] 00:28:26.991 }' 00:28:26.991 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:26.991 00:12:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:27.249 00:12:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:27.508 [2024-07-25 00:12:23.133161] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:27.508 [2024-07-25 00:12:23.133240] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:28:27.508 00:12:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:27.508 [2024-07-25 00:12:23.325237] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:27.508 [2024-07-25 00:12:23.325318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:27.508 [2024-07-25 00:12:23.325339] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:27.508 [2024-07-25 00:12:23.325357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:27.508 [2024-07-25 00:12:23.325365] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:27.508 [2024-07-25 00:12:23.325377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:27.508 00:12:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:27.766 [2024-07-25 00:12:23.538074] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:27.766 BaseBdev1 00:28:27.766 00:12:23 
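# BaseBdev1 has just been created; the trace that follows is waitforbdev
# (common/autotest_common.sh@899-@907). A hedged sketch matching the traced
# steps -- the 2000 ms default at @902, bdev_wait_for_examine at @904, then a
# timed lookup at @906; treating the -t lookup's exit status as the whole
# check is an assumption:
waitforbdev() {
    local bdev_name=$1           # @899
    local bdev_timeout=$2        # @900; @902 defaults it when empty
    local i                      # @901
    [[ -z $bdev_timeout ]] && bdev_timeout=2000
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-raid.sock bdev_wait_for_examine        # @904
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null                 # @906
}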
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:27.766 00:12:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:27.766 00:12:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:27.766 00:12:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:27.766 00:12:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:27.766 00:12:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:27.766 00:12:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:28.024 00:12:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:28.281 [ 00:28:28.281 { 00:28:28.281 "name": "BaseBdev1", 00:28:28.281 "aliases": [ 00:28:28.281 "9e263b4f-b870-4d15-be98-0750ac37215e" 00:28:28.281 ], 00:28:28.281 "product_name": "Malloc disk", 00:28:28.281 "block_size": 512, 00:28:28.281 "num_blocks": 65536, 00:28:28.281 "uuid": "9e263b4f-b870-4d15-be98-0750ac37215e", 00:28:28.281 "assigned_rate_limits": { 00:28:28.281 "rw_ios_per_sec": 0, 00:28:28.281 "rw_mbytes_per_sec": 0, 00:28:28.281 "r_mbytes_per_sec": 0, 00:28:28.281 "w_mbytes_per_sec": 0 00:28:28.281 }, 00:28:28.281 "claimed": true, 00:28:28.281 "claim_type": "exclusive_write", 00:28:28.281 "zoned": false, 00:28:28.281 "supported_io_types": { 00:28:28.281 "read": true, 00:28:28.281 "write": true, 00:28:28.281 "unmap": true, 00:28:28.281 "flush": true, 00:28:28.281 "reset": true, 00:28:28.281 "nvme_admin": false, 00:28:28.281 "nvme_io": false, 00:28:28.281 "nvme_io_md": false, 00:28:28.281 "write_zeroes": true, 00:28:28.281 "zcopy": true, 00:28:28.281 "get_zone_info": false, 00:28:28.281 "zone_management": false, 00:28:28.281 "zone_append": false, 00:28:28.281 "compare": false, 00:28:28.281 "compare_and_write": false, 00:28:28.281 "abort": true, 00:28:28.281 "seek_hole": false, 00:28:28.281 "seek_data": false, 00:28:28.281 "copy": true, 00:28:28.281 "nvme_iov_md": false 00:28:28.281 }, 00:28:28.281 "memory_domains": [ 00:28:28.282 { 00:28:28.282 "dma_device_id": "system", 00:28:28.282 "dma_device_type": 1 00:28:28.282 }, 00:28:28.282 { 00:28:28.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:28.282 "dma_device_type": 2 00:28:28.282 } 00:28:28.282 ], 00:28:28.282 "driver_specific": {} 00:28:28.282 } 00:28:28.282 ] 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.282 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:28.540 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:28.540 "name": "Existed_Raid", 00:28:28.540 "uuid": "58de225c-d08b-4573-af02-a5ccd925f0b5", 00:28:28.540 "strip_size_kb": 64, 00:28:28.540 "state": "configuring", 00:28:28.540 "raid_level": "raid5f", 00:28:28.540 "superblock": true, 00:28:28.540 "num_base_bdevs": 3, 00:28:28.540 "num_base_bdevs_discovered": 1, 00:28:28.540 "num_base_bdevs_operational": 3, 00:28:28.540 "base_bdevs_list": [ 00:28:28.540 { 00:28:28.540 "name": "BaseBdev1", 00:28:28.540 "uuid": "9e263b4f-b870-4d15-be98-0750ac37215e", 00:28:28.540 "is_configured": true, 00:28:28.540 "data_offset": 2048, 00:28:28.540 "data_size": 63488 00:28:28.540 }, 00:28:28.540 { 00:28:28.540 "name": "BaseBdev2", 00:28:28.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.540 "is_configured": false, 00:28:28.540 "data_offset": 0, 00:28:28.540 "data_size": 0 00:28:28.540 }, 00:28:28.540 { 00:28:28.540 "name": "BaseBdev3", 00:28:28.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.540 "is_configured": false, 00:28:28.540 "data_offset": 0, 00:28:28.540 "data_size": 0 00:28:28.540 } 00:28:28.540 ] 00:28:28.540 }' 00:28:28.540 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:28.540 00:12:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:28.799 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:29.057 [2024-07-25 00:12:24.830520] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:29.057 [2024-07-25 00:12:24.830588] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:28:29.057 00:12:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:29.316 [2024-07-25 00:12:25.034628] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:29.316 [2024-07-25 00:12:25.036724] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:29.316 [2024-07-25 00:12:25.036848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:29.316 [2024-07-25 00:12:25.036864] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:29.316 [2024-07-25 00:12:25.036880] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:29.316 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.575 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:29.575 "name": "Existed_Raid", 00:28:29.575 "uuid": "b4394024-918b-4c14-b24e-62d677b5ae6e", 00:28:29.575 "strip_size_kb": 64, 00:28:29.575 "state": "configuring", 00:28:29.575 "raid_level": "raid5f", 00:28:29.575 "superblock": true, 00:28:29.575 "num_base_bdevs": 3, 00:28:29.575 "num_base_bdevs_discovered": 1, 00:28:29.575 "num_base_bdevs_operational": 3, 00:28:29.575 "base_bdevs_list": [ 00:28:29.575 { 00:28:29.575 "name": "BaseBdev1", 00:28:29.575 "uuid": "9e263b4f-b870-4d15-be98-0750ac37215e", 00:28:29.575 "is_configured": true, 00:28:29.575 "data_offset": 2048, 00:28:29.575 "data_size": 63488 00:28:29.575 }, 00:28:29.575 { 00:28:29.575 "name": "BaseBdev2", 00:28:29.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.575 "is_configured": false, 00:28:29.575 "data_offset": 0, 00:28:29.575 "data_size": 0 00:28:29.575 }, 00:28:29.575 { 00:28:29.575 "name": "BaseBdev3", 00:28:29.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.575 "is_configured": false, 00:28:29.575 "data_offset": 0, 00:28:29.575 "data_size": 0 00:28:29.575 } 00:28:29.575 ] 00:28:29.575 }' 00:28:29.575 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:29.575 00:12:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:29.834 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:30.093 [2024-07-25 00:12:25.831520] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:30.093 BaseBdev2 00:28:30.093 00:12:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:30.093 00:12:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:30.093 00:12:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:30.093 00:12:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:30.093 00:12:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:30.093 00:12:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:30.093 00:12:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:30.352 00:12:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:30.610 [ 00:28:30.610 { 00:28:30.610 "name": "BaseBdev2", 00:28:30.610 "aliases": [ 00:28:30.610 "77aa400f-1813-4214-9be6-7c73567713b4" 00:28:30.610 ], 00:28:30.610 "product_name": "Malloc disk", 00:28:30.610 "block_size": 512, 00:28:30.610 "num_blocks": 65536, 00:28:30.610 "uuid": "77aa400f-1813-4214-9be6-7c73567713b4", 00:28:30.610 "assigned_rate_limits": { 00:28:30.610 "rw_ios_per_sec": 0, 00:28:30.610 "rw_mbytes_per_sec": 0, 00:28:30.610 "r_mbytes_per_sec": 0, 00:28:30.610 "w_mbytes_per_sec": 0 00:28:30.610 }, 00:28:30.610 "claimed": true, 00:28:30.610 "claim_type": "exclusive_write", 00:28:30.610 "zoned": false, 00:28:30.610 "supported_io_types": { 00:28:30.610 "read": true, 00:28:30.610 "write": true, 00:28:30.610 "unmap": true, 00:28:30.610 "flush": true, 00:28:30.610 "reset": true, 00:28:30.610 "nvme_admin": false, 00:28:30.610 "nvme_io": false, 00:28:30.610 "nvme_io_md": false, 00:28:30.610 "write_zeroes": true, 00:28:30.610 "zcopy": true, 00:28:30.610 "get_zone_info": false, 00:28:30.610 "zone_management": false, 00:28:30.610 "zone_append": false, 00:28:30.610 "compare": false, 00:28:30.610 "compare_and_write": false, 00:28:30.610 "abort": true, 00:28:30.610 "seek_hole": false, 00:28:30.610 "seek_data": false, 00:28:30.610 "copy": true, 00:28:30.610 "nvme_iov_md": false 00:28:30.610 }, 00:28:30.610 "memory_domains": [ 00:28:30.610 { 00:28:30.610 "dma_device_id": "system", 00:28:30.610 "dma_device_type": 1 00:28:30.610 }, 00:28:30.610 { 00:28:30.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:30.610 "dma_device_type": 2 00:28:30.610 } 00:28:30.610 ], 00:28:30.610 "driver_specific": {} 00:28:30.610 } 00:28:30.610 ] 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.610 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:30.868 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:30.868 "name": "Existed_Raid", 00:28:30.868 "uuid": "b4394024-918b-4c14-b24e-62d677b5ae6e", 00:28:30.868 "strip_size_kb": 64, 00:28:30.868 "state": "configuring", 00:28:30.868 "raid_level": "raid5f", 00:28:30.868 "superblock": true, 00:28:30.868 "num_base_bdevs": 3, 00:28:30.868 "num_base_bdevs_discovered": 2, 00:28:30.868 "num_base_bdevs_operational": 3, 00:28:30.868 "base_bdevs_list": [ 00:28:30.868 { 00:28:30.868 "name": "BaseBdev1", 00:28:30.868 "uuid": "9e263b4f-b870-4d15-be98-0750ac37215e", 00:28:30.868 "is_configured": true, 00:28:30.868 "data_offset": 2048, 00:28:30.868 "data_size": 63488 00:28:30.868 }, 00:28:30.868 { 00:28:30.868 "name": "BaseBdev2", 00:28:30.868 "uuid": "77aa400f-1813-4214-9be6-7c73567713b4", 00:28:30.868 "is_configured": true, 00:28:30.868 "data_offset": 2048, 00:28:30.868 "data_size": 63488 00:28:30.868 }, 00:28:30.868 { 00:28:30.868 "name": "BaseBdev3", 00:28:30.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.868 "is_configured": false, 00:28:30.868 "data_offset": 0, 00:28:30.868 "data_size": 0 00:28:30.868 } 00:28:30.868 ] 00:28:30.868 }' 00:28:30.868 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:30.868 00:12:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:31.126 00:12:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:31.385 [2024-07-25 00:12:27.063902] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:31.385 [2024-07-25 00:12:27.064156] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:28:31.385 [2024-07-25 00:12:27.064177] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:28:31.385 [2024-07-25 00:12:27.064355] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:28:31.385 BaseBdev3 00:28:31.385 [2024-07-25 00:12:27.070075] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:28:31.385 [2024-07-25 00:12:27.070099] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:28:31.385 [2024-07-25 00:12:27.070340] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:31.385 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:28:31.385 00:12:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:31.385 00:12:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:31.385 00:12:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:31.385 00:12:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:31.385 00:12:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:31.385 00:12:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:31.644 00:12:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:31.903 [ 00:28:31.903 { 00:28:31.903 "name": "BaseBdev3", 00:28:31.903 "aliases": [ 00:28:31.903 "db0be108-7b99-487e-8a06-2e010d947504" 00:28:31.903 ], 00:28:31.903 "product_name": "Malloc disk", 00:28:31.903 "block_size": 512, 00:28:31.903 "num_blocks": 65536, 00:28:31.903 "uuid": "db0be108-7b99-487e-8a06-2e010d947504", 00:28:31.903 "assigned_rate_limits": { 00:28:31.903 "rw_ios_per_sec": 0, 00:28:31.903 "rw_mbytes_per_sec": 0, 00:28:31.903 "r_mbytes_per_sec": 0, 00:28:31.903 "w_mbytes_per_sec": 0 00:28:31.903 }, 00:28:31.903 "claimed": true, 00:28:31.903 "claim_type": "exclusive_write", 00:28:31.903 "zoned": false, 00:28:31.903 "supported_io_types": { 00:28:31.903 "read": true, 00:28:31.903 "write": true, 00:28:31.903 "unmap": true, 00:28:31.903 "flush": true, 00:28:31.903 "reset": true, 00:28:31.903 "nvme_admin": false, 00:28:31.903 "nvme_io": false, 00:28:31.903 "nvme_io_md": false, 00:28:31.903 "write_zeroes": true, 00:28:31.903 "zcopy": true, 00:28:31.903 "get_zone_info": false, 00:28:31.903 "zone_management": false, 00:28:31.903 "zone_append": false, 00:28:31.903 "compare": false, 00:28:31.903 "compare_and_write": false, 00:28:31.903 "abort": true, 00:28:31.903 "seek_hole": false, 00:28:31.903 "seek_data": false, 00:28:31.903 "copy": true, 00:28:31.903 "nvme_iov_md": false 00:28:31.903 }, 00:28:31.903 "memory_domains": [ 00:28:31.903 { 00:28:31.903 "dma_device_id": "system", 00:28:31.903 "dma_device_type": 1 00:28:31.903 }, 00:28:31.903 { 00:28:31.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:31.903 "dma_device_type": 2 00:28:31.903 } 00:28:31.903 ], 00:28:31.903 "driver_specific": {} 00:28:31.903 } 00:28:31.903 ] 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=Existed_Raid 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:31.903 "name": "Existed_Raid", 00:28:31.903 "uuid": "b4394024-918b-4c14-b24e-62d677b5ae6e", 00:28:31.903 "strip_size_kb": 64, 00:28:31.903 "state": "online", 00:28:31.903 "raid_level": "raid5f", 00:28:31.903 "superblock": true, 00:28:31.903 "num_base_bdevs": 3, 00:28:31.903 "num_base_bdevs_discovered": 3, 00:28:31.903 "num_base_bdevs_operational": 3, 00:28:31.903 "base_bdevs_list": [ 00:28:31.903 { 00:28:31.903 "name": "BaseBdev1", 00:28:31.903 "uuid": "9e263b4f-b870-4d15-be98-0750ac37215e", 00:28:31.903 "is_configured": true, 00:28:31.903 "data_offset": 2048, 00:28:31.903 "data_size": 63488 00:28:31.903 }, 00:28:31.903 { 00:28:31.903 "name": "BaseBdev2", 00:28:31.903 "uuid": "77aa400f-1813-4214-9be6-7c73567713b4", 00:28:31.903 "is_configured": true, 00:28:31.903 "data_offset": 2048, 00:28:31.903 "data_size": 63488 00:28:31.903 }, 00:28:31.903 { 00:28:31.903 "name": "BaseBdev3", 00:28:31.903 "uuid": "db0be108-7b99-487e-8a06-2e010d947504", 00:28:31.903 "is_configured": true, 00:28:31.903 "data_offset": 2048, 00:28:31.903 "data_size": 63488 00:28:31.903 } 00:28:31.903 ] 00:28:31.903 }' 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:31.903 00:12:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:32.471 00:12:28 
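# verify_raid_bdev_properties (bdev_raid.sh@194-@208) is running here: it dumps
# Existed_Raid, pulls the configured base bdev names (@201), and asserts that
# each base bdev's block_size/md_size/md_interleave/dif_type match the raid
# volume's. Each jq query is traced twice before a '[[ 512 == 512 ]]'-style
# test, which suggests both sides of every comparison are command
# substitutions -- a hedged sketch of the loop:
raid_bdev_info=$("$rootdir/scripts/rpc.py" -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid | jq '.[]')                  # @200
base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_bdev_info")   # @201
for name in $base_bdev_names; do                                                                                                 # @203
    base_bdev_info=$("$rootdir/scripts/rpc.py" -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$name" | jq '.[]')                   # @204
    [[ $(jq .block_size <<< "$base_bdev_info") == $(jq .block_size <<< "$raid_bdev_info") ]]          # @205
    [[ $(jq .md_size <<< "$base_bdev_info") == $(jq .md_size <<< "$raid_bdev_info") ]]                # @206
    [[ $(jq .md_interleave <<< "$base_bdev_info") == $(jq .md_interleave <<< "$raid_bdev_info") ]]    # @207
    [[ $(jq .dif_type <<< "$base_bdev_info") == $(jq .dif_type <<< "$raid_bdev_info") ]]              # @208
done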
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:32.471 [2024-07-25 00:12:28.276401] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:32.471 "name": "Existed_Raid", 00:28:32.471 "aliases": [ 00:28:32.471 "b4394024-918b-4c14-b24e-62d677b5ae6e" 00:28:32.471 ], 00:28:32.471 "product_name": "Raid Volume", 00:28:32.471 "block_size": 512, 00:28:32.471 "num_blocks": 126976, 00:28:32.471 "uuid": "b4394024-918b-4c14-b24e-62d677b5ae6e", 00:28:32.471 "assigned_rate_limits": { 00:28:32.471 "rw_ios_per_sec": 0, 00:28:32.471 "rw_mbytes_per_sec": 0, 00:28:32.471 "r_mbytes_per_sec": 0, 00:28:32.471 "w_mbytes_per_sec": 0 00:28:32.471 }, 00:28:32.471 "claimed": false, 00:28:32.471 "zoned": false, 00:28:32.471 "supported_io_types": { 00:28:32.471 "read": true, 00:28:32.471 "write": true, 00:28:32.471 "unmap": false, 00:28:32.471 "flush": false, 00:28:32.471 "reset": true, 00:28:32.471 "nvme_admin": false, 00:28:32.471 "nvme_io": false, 00:28:32.471 "nvme_io_md": false, 00:28:32.471 "write_zeroes": true, 00:28:32.471 "zcopy": false, 00:28:32.471 "get_zone_info": false, 00:28:32.471 "zone_management": false, 00:28:32.471 "zone_append": false, 00:28:32.471 "compare": false, 00:28:32.471 "compare_and_write": false, 00:28:32.471 "abort": false, 00:28:32.471 "seek_hole": false, 00:28:32.471 "seek_data": false, 00:28:32.471 "copy": false, 00:28:32.471 "nvme_iov_md": false 00:28:32.471 }, 00:28:32.471 "driver_specific": { 00:28:32.471 "raid": { 00:28:32.471 "uuid": "b4394024-918b-4c14-b24e-62d677b5ae6e", 00:28:32.471 "strip_size_kb": 64, 00:28:32.471 "state": "online", 00:28:32.471 "raid_level": "raid5f", 00:28:32.471 "superblock": true, 00:28:32.471 "num_base_bdevs": 3, 00:28:32.471 "num_base_bdevs_discovered": 3, 00:28:32.471 "num_base_bdevs_operational": 3, 00:28:32.471 "base_bdevs_list": [ 00:28:32.471 { 00:28:32.471 "name": "BaseBdev1", 00:28:32.471 "uuid": "9e263b4f-b870-4d15-be98-0750ac37215e", 00:28:32.471 "is_configured": true, 00:28:32.471 "data_offset": 2048, 00:28:32.471 "data_size": 63488 00:28:32.471 }, 00:28:32.471 { 00:28:32.471 "name": "BaseBdev2", 00:28:32.471 "uuid": "77aa400f-1813-4214-9be6-7c73567713b4", 00:28:32.471 "is_configured": true, 00:28:32.471 "data_offset": 2048, 00:28:32.471 "data_size": 63488 00:28:32.471 }, 00:28:32.471 { 00:28:32.471 "name": "BaseBdev3", 00:28:32.471 "uuid": "db0be108-7b99-487e-8a06-2e010d947504", 00:28:32.471 "is_configured": true, 00:28:32.471 "data_offset": 2048, 00:28:32.471 "data_size": 63488 00:28:32.471 } 00:28:32.471 ] 00:28:32.471 } 00:28:32.471 } 00:28:32.471 }' 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:32.471 BaseBdev2 00:28:32.471 BaseBdev3' 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:32.471 00:12:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:32.730 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:32.730 "name": "BaseBdev1", 00:28:32.730 "aliases": [ 00:28:32.730 "9e263b4f-b870-4d15-be98-0750ac37215e" 00:28:32.730 ], 00:28:32.730 "product_name": "Malloc disk", 00:28:32.730 "block_size": 512, 00:28:32.730 "num_blocks": 65536, 00:28:32.730 "uuid": "9e263b4f-b870-4d15-be98-0750ac37215e", 00:28:32.730 "assigned_rate_limits": { 00:28:32.730 "rw_ios_per_sec": 0, 00:28:32.730 "rw_mbytes_per_sec": 0, 00:28:32.730 "r_mbytes_per_sec": 0, 00:28:32.730 "w_mbytes_per_sec": 0 00:28:32.730 }, 00:28:32.730 "claimed": true, 00:28:32.730 "claim_type": "exclusive_write", 00:28:32.730 "zoned": false, 00:28:32.730 "supported_io_types": { 00:28:32.730 "read": true, 00:28:32.730 "write": true, 00:28:32.730 "unmap": true, 00:28:32.730 "flush": true, 00:28:32.730 "reset": true, 00:28:32.730 "nvme_admin": false, 00:28:32.730 "nvme_io": false, 00:28:32.730 "nvme_io_md": false, 00:28:32.730 "write_zeroes": true, 00:28:32.730 "zcopy": true, 00:28:32.730 "get_zone_info": false, 00:28:32.730 "zone_management": false, 00:28:32.730 "zone_append": false, 00:28:32.730 "compare": false, 00:28:32.730 "compare_and_write": false, 00:28:32.730 "abort": true, 00:28:32.730 "seek_hole": false, 00:28:32.730 "seek_data": false, 00:28:32.730 "copy": true, 00:28:32.730 "nvme_iov_md": false 00:28:32.730 }, 00:28:32.730 "memory_domains": [ 00:28:32.730 { 00:28:32.730 "dma_device_id": "system", 00:28:32.730 "dma_device_type": 1 00:28:32.730 }, 00:28:32.730 { 00:28:32.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.730 "dma_device_type": 2 00:28:32.730 } 00:28:32.730 ], 00:28:32.730 "driver_specific": {} 00:28:32.730 }' 00:28:32.730 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:32.730 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:32.730 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:32.730 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:32.730 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:32.989 "name": "BaseBdev2", 00:28:32.989 "aliases": [ 00:28:32.989 "77aa400f-1813-4214-9be6-7c73567713b4" 00:28:32.989 ], 00:28:32.989 "product_name": "Malloc disk", 00:28:32.989 "block_size": 512, 00:28:32.989 "num_blocks": 65536, 00:28:32.989 "uuid": "77aa400f-1813-4214-9be6-7c73567713b4", 00:28:32.989 "assigned_rate_limits": { 00:28:32.989 "rw_ios_per_sec": 0, 00:28:32.989 "rw_mbytes_per_sec": 0, 00:28:32.989 "r_mbytes_per_sec": 0, 00:28:32.989 "w_mbytes_per_sec": 0 00:28:32.989 }, 00:28:32.989 "claimed": true, 00:28:32.989 "claim_type": "exclusive_write", 00:28:32.989 "zoned": false, 00:28:32.989 "supported_io_types": { 00:28:32.989 "read": true, 00:28:32.989 "write": true, 00:28:32.989 "unmap": true, 00:28:32.989 "flush": true, 00:28:32.989 "reset": true, 00:28:32.989 "nvme_admin": false, 00:28:32.989 "nvme_io": false, 00:28:32.989 "nvme_io_md": false, 00:28:32.989 "write_zeroes": true, 00:28:32.989 "zcopy": true, 00:28:32.989 "get_zone_info": false, 00:28:32.989 "zone_management": false, 00:28:32.989 "zone_append": false, 00:28:32.989 "compare": false, 00:28:32.989 "compare_and_write": false, 00:28:32.989 "abort": true, 00:28:32.989 "seek_hole": false, 00:28:32.989 "seek_data": false, 00:28:32.989 "copy": true, 00:28:32.989 "nvme_iov_md": false 00:28:32.989 }, 00:28:32.989 "memory_domains": [ 00:28:32.989 { 00:28:32.989 "dma_device_id": "system", 00:28:32.989 "dma_device_type": 1 00:28:32.989 }, 00:28:32.989 { 00:28:32.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.989 "dma_device_type": 2 00:28:32.989 } 00:28:32.989 ], 00:28:32.989 "driver_specific": {} 00:28:32.989 }' 00:28:32.989 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:33.248 00:12:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:33.507 "name": "BaseBdev3", 00:28:33.507 "aliases": [ 00:28:33.507 
"db0be108-7b99-487e-8a06-2e010d947504" 00:28:33.507 ], 00:28:33.507 "product_name": "Malloc disk", 00:28:33.507 "block_size": 512, 00:28:33.507 "num_blocks": 65536, 00:28:33.507 "uuid": "db0be108-7b99-487e-8a06-2e010d947504", 00:28:33.507 "assigned_rate_limits": { 00:28:33.507 "rw_ios_per_sec": 0, 00:28:33.507 "rw_mbytes_per_sec": 0, 00:28:33.507 "r_mbytes_per_sec": 0, 00:28:33.507 "w_mbytes_per_sec": 0 00:28:33.507 }, 00:28:33.507 "claimed": true, 00:28:33.507 "claim_type": "exclusive_write", 00:28:33.507 "zoned": false, 00:28:33.507 "supported_io_types": { 00:28:33.507 "read": true, 00:28:33.507 "write": true, 00:28:33.507 "unmap": true, 00:28:33.507 "flush": true, 00:28:33.507 "reset": true, 00:28:33.507 "nvme_admin": false, 00:28:33.507 "nvme_io": false, 00:28:33.507 "nvme_io_md": false, 00:28:33.507 "write_zeroes": true, 00:28:33.507 "zcopy": true, 00:28:33.507 "get_zone_info": false, 00:28:33.507 "zone_management": false, 00:28:33.507 "zone_append": false, 00:28:33.507 "compare": false, 00:28:33.507 "compare_and_write": false, 00:28:33.507 "abort": true, 00:28:33.507 "seek_hole": false, 00:28:33.507 "seek_data": false, 00:28:33.507 "copy": true, 00:28:33.507 "nvme_iov_md": false 00:28:33.507 }, 00:28:33.507 "memory_domains": [ 00:28:33.507 { 00:28:33.507 "dma_device_id": "system", 00:28:33.507 "dma_device_type": 1 00:28:33.507 }, 00:28:33.507 { 00:28:33.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:33.507 "dma_device_type": 2 00:28:33.507 } 00:28:33.507 ], 00:28:33.507 "driver_specific": {} 00:28:33.507 }' 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:33.507 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:33.766 [2024-07-25 00:12:29.456608] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 
-- # return 0 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:33.766 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.025 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:34.025 "name": "Existed_Raid", 00:28:34.025 "uuid": "b4394024-918b-4c14-b24e-62d677b5ae6e", 00:28:34.025 "strip_size_kb": 64, 00:28:34.025 "state": "online", 00:28:34.025 "raid_level": "raid5f", 00:28:34.025 "superblock": true, 00:28:34.025 "num_base_bdevs": 3, 00:28:34.025 "num_base_bdevs_discovered": 2, 00:28:34.025 "num_base_bdevs_operational": 2, 00:28:34.025 "base_bdevs_list": [ 00:28:34.025 { 00:28:34.025 "name": null, 00:28:34.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:34.025 "is_configured": false, 00:28:34.025 "data_offset": 2048, 00:28:34.025 "data_size": 63488 00:28:34.025 }, 00:28:34.025 { 00:28:34.025 "name": "BaseBdev2", 00:28:34.025 "uuid": "77aa400f-1813-4214-9be6-7c73567713b4", 00:28:34.025 "is_configured": true, 00:28:34.025 "data_offset": 2048, 00:28:34.025 "data_size": 63488 00:28:34.025 }, 00:28:34.025 { 00:28:34.025 "name": "BaseBdev3", 00:28:34.025 "uuid": "db0be108-7b99-487e-8a06-2e010d947504", 00:28:34.025 "is_configured": true, 00:28:34.025 "data_offset": 2048, 00:28:34.025 "data_size": 63488 00:28:34.025 } 00:28:34.025 ] 00:28:34.025 }' 00:28:34.025 00:12:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:34.025 00:12:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:34.287 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:34.287 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:34.287 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.287 00:12:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:34.556 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:34.556 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:34.556 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:34.815 [2024-07-25 00:12:30.525335] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:34.815 [2024-07-25 00:12:30.525489] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:34.815 [2024-07-25 00:12:30.595473] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:34.815 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:34.815 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:34.815 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.815 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:35.074 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:35.074 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:35.074 00:12:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:35.332 [2024-07-25 00:12:31.011644] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:35.332 [2024-07-25 00:12:31.011963] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:28:35.332 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:35.332 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:35.332 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.332 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:35.590 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:35.590 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:35.590 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:28:35.590 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:28:35.590 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:35.590 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:35.849 BaseBdev2 00:28:35.849 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:28:35.849 00:12:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:35.849 00:12:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:35.849 00:12:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:35.849 00:12:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:35.849 00:12:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:35.849 00:12:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:36.108 00:12:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:36.108 [ 00:28:36.108 { 00:28:36.108 "name": "BaseBdev2", 00:28:36.108 "aliases": [ 00:28:36.108 "fdefae84-d253-4c2a-a5d7-d38e220d9d6b" 00:28:36.108 ], 00:28:36.108 "product_name": "Malloc disk", 00:28:36.108 "block_size": 512, 00:28:36.108 "num_blocks": 65536, 00:28:36.108 "uuid": "fdefae84-d253-4c2a-a5d7-d38e220d9d6b", 00:28:36.108 "assigned_rate_limits": { 00:28:36.108 "rw_ios_per_sec": 0, 00:28:36.108 "rw_mbytes_per_sec": 0, 00:28:36.108 "r_mbytes_per_sec": 0, 00:28:36.108 "w_mbytes_per_sec": 0 00:28:36.108 }, 00:28:36.108 "claimed": false, 00:28:36.108 "zoned": false, 00:28:36.108 "supported_io_types": { 00:28:36.108 "read": true, 00:28:36.108 "write": true, 00:28:36.108 "unmap": true, 00:28:36.108 "flush": true, 00:28:36.108 "reset": true, 00:28:36.108 "nvme_admin": false, 00:28:36.108 "nvme_io": false, 00:28:36.108 "nvme_io_md": false, 00:28:36.108 "write_zeroes": true, 00:28:36.108 "zcopy": true, 00:28:36.108 "get_zone_info": false, 00:28:36.108 "zone_management": false, 00:28:36.108 "zone_append": false, 00:28:36.108 "compare": false, 00:28:36.108 "compare_and_write": false, 00:28:36.108 "abort": true, 00:28:36.108 "seek_hole": false, 00:28:36.108 "seek_data": false, 00:28:36.108 "copy": true, 00:28:36.108 "nvme_iov_md": false 00:28:36.108 }, 00:28:36.108 "memory_domains": [ 00:28:36.108 { 00:28:36.108 "dma_device_id": "system", 00:28:36.108 "dma_device_type": 1 00:28:36.108 }, 00:28:36.108 { 00:28:36.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:36.108 "dma_device_type": 2 00:28:36.108 } 00:28:36.108 ], 00:28:36.108 "driver_specific": {} 00:28:36.108 } 00:28:36.108 ] 00:28:36.366 00:12:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:36.366 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:36.366 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:36.366 00:12:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:36.625 BaseBdev3 00:28:36.625 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:28:36.625 00:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:36.625 00:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:36.625 00:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
i 00:28:36.625 00:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:36.625 00:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:36.625 00:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:36.625 00:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:36.883 [ 00:28:36.883 { 00:28:36.883 "name": "BaseBdev3", 00:28:36.883 "aliases": [ 00:28:36.883 "5f6a776e-5e88-4a82-9390-b3834cb6cc12" 00:28:36.883 ], 00:28:36.883 "product_name": "Malloc disk", 00:28:36.883 "block_size": 512, 00:28:36.883 "num_blocks": 65536, 00:28:36.883 "uuid": "5f6a776e-5e88-4a82-9390-b3834cb6cc12", 00:28:36.883 "assigned_rate_limits": { 00:28:36.883 "rw_ios_per_sec": 0, 00:28:36.883 "rw_mbytes_per_sec": 0, 00:28:36.883 "r_mbytes_per_sec": 0, 00:28:36.883 "w_mbytes_per_sec": 0 00:28:36.883 }, 00:28:36.883 "claimed": false, 00:28:36.883 "zoned": false, 00:28:36.883 "supported_io_types": { 00:28:36.883 "read": true, 00:28:36.883 "write": true, 00:28:36.883 "unmap": true, 00:28:36.883 "flush": true, 00:28:36.883 "reset": true, 00:28:36.883 "nvme_admin": false, 00:28:36.883 "nvme_io": false, 00:28:36.883 "nvme_io_md": false, 00:28:36.883 "write_zeroes": true, 00:28:36.883 "zcopy": true, 00:28:36.883 "get_zone_info": false, 00:28:36.883 "zone_management": false, 00:28:36.883 "zone_append": false, 00:28:36.883 "compare": false, 00:28:36.883 "compare_and_write": false, 00:28:36.883 "abort": true, 00:28:36.883 "seek_hole": false, 00:28:36.883 "seek_data": false, 00:28:36.883 "copy": true, 00:28:36.883 "nvme_iov_md": false 00:28:36.883 }, 00:28:36.883 "memory_domains": [ 00:28:36.883 { 00:28:36.883 "dma_device_id": "system", 00:28:36.883 "dma_device_type": 1 00:28:36.883 }, 00:28:36.883 { 00:28:36.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:36.883 "dma_device_type": 2 00:28:36.883 } 00:28:36.883 ], 00:28:36.883 "driver_specific": {} 00:28:36.883 } 00:28:36.883 ] 00:28:36.883 00:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:36.883 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:36.883 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:36.883 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:28:37.141 [2024-07-25 00:12:32.925335] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:37.141 [2024-07-25 00:12:32.925393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:37.141 [2024-07-25 00:12:32.925426] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:37.141 [2024-07-25 00:12:32.927690] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.141 00:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:37.400 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:37.400 "name": "Existed_Raid", 00:28:37.400 "uuid": "5048c9f4-2b29-4df0-847a-61fa2c6e2db5", 00:28:37.400 "strip_size_kb": 64, 00:28:37.400 "state": "configuring", 00:28:37.400 "raid_level": "raid5f", 00:28:37.400 "superblock": true, 00:28:37.400 "num_base_bdevs": 3, 00:28:37.400 "num_base_bdevs_discovered": 2, 00:28:37.400 "num_base_bdevs_operational": 3, 00:28:37.400 "base_bdevs_list": [ 00:28:37.400 { 00:28:37.400 "name": "BaseBdev1", 00:28:37.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.400 "is_configured": false, 00:28:37.400 "data_offset": 0, 00:28:37.400 "data_size": 0 00:28:37.400 }, 00:28:37.400 { 00:28:37.400 "name": "BaseBdev2", 00:28:37.400 "uuid": "fdefae84-d253-4c2a-a5d7-d38e220d9d6b", 00:28:37.400 "is_configured": true, 00:28:37.400 "data_offset": 2048, 00:28:37.400 "data_size": 63488 00:28:37.400 }, 00:28:37.400 { 00:28:37.400 "name": "BaseBdev3", 00:28:37.400 "uuid": "5f6a776e-5e88-4a82-9390-b3834cb6cc12", 00:28:37.400 "is_configured": true, 00:28:37.400 "data_offset": 2048, 00:28:37.400 "data_size": 63488 00:28:37.400 } 00:28:37.400 ] 00:28:37.400 }' 00:28:37.400 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:37.400 00:12:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.658 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:37.916 [2024-07-25 00:12:33.745539] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid5f 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:37.916 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.264 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.264 "name": "Existed_Raid", 00:28:38.264 "uuid": "5048c9f4-2b29-4df0-847a-61fa2c6e2db5", 00:28:38.264 "strip_size_kb": 64, 00:28:38.264 "state": "configuring", 00:28:38.264 "raid_level": "raid5f", 00:28:38.264 "superblock": true, 00:28:38.264 "num_base_bdevs": 3, 00:28:38.264 "num_base_bdevs_discovered": 1, 00:28:38.264 "num_base_bdevs_operational": 3, 00:28:38.264 "base_bdevs_list": [ 00:28:38.264 { 00:28:38.264 "name": "BaseBdev1", 00:28:38.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.264 "is_configured": false, 00:28:38.264 "data_offset": 0, 00:28:38.264 "data_size": 0 00:28:38.264 }, 00:28:38.264 { 00:28:38.264 "name": null, 00:28:38.264 "uuid": "fdefae84-d253-4c2a-a5d7-d38e220d9d6b", 00:28:38.264 "is_configured": false, 00:28:38.264 "data_offset": 2048, 00:28:38.264 "data_size": 63488 00:28:38.264 }, 00:28:38.264 { 00:28:38.264 "name": "BaseBdev3", 00:28:38.264 "uuid": "5f6a776e-5e88-4a82-9390-b3834cb6cc12", 00:28:38.264 "is_configured": true, 00:28:38.264 "data_offset": 2048, 00:28:38.264 "data_size": 63488 00:28:38.264 } 00:28:38.264 ] 00:28:38.264 }' 00:28:38.264 00:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.264 00:12:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:38.523 00:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.523 00:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:38.782 00:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:28:38.782 00:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:39.041 [2024-07-25 00:12:34.741565] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:39.041 BaseBdev1 00:28:39.041 00:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:28:39.041 00:12:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:39.041 00:12:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:39.041 00:12:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:39.041 00:12:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:39.041 00:12:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:39.041 00:12:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:39.300 00:12:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:39.559 [ 00:28:39.559 { 00:28:39.559 "name": "BaseBdev1", 00:28:39.559 "aliases": [ 00:28:39.559 "bdd8c283-0747-42ef-8eca-59c4b66d64af" 00:28:39.559 ], 00:28:39.559 "product_name": "Malloc disk", 00:28:39.559 "block_size": 512, 00:28:39.559 "num_blocks": 65536, 00:28:39.559 "uuid": "bdd8c283-0747-42ef-8eca-59c4b66d64af", 00:28:39.559 "assigned_rate_limits": { 00:28:39.559 "rw_ios_per_sec": 0, 00:28:39.559 "rw_mbytes_per_sec": 0, 00:28:39.559 "r_mbytes_per_sec": 0, 00:28:39.559 "w_mbytes_per_sec": 0 00:28:39.559 }, 00:28:39.559 "claimed": true, 00:28:39.559 "claim_type": "exclusive_write", 00:28:39.559 "zoned": false, 00:28:39.559 "supported_io_types": { 00:28:39.559 "read": true, 00:28:39.559 "write": true, 00:28:39.560 "unmap": true, 00:28:39.560 "flush": true, 00:28:39.560 "reset": true, 00:28:39.560 "nvme_admin": false, 00:28:39.560 "nvme_io": false, 00:28:39.560 "nvme_io_md": false, 00:28:39.560 "write_zeroes": true, 00:28:39.560 "zcopy": true, 00:28:39.560 "get_zone_info": false, 00:28:39.560 "zone_management": false, 00:28:39.560 "zone_append": false, 00:28:39.560 "compare": false, 00:28:39.560 "compare_and_write": false, 00:28:39.560 "abort": true, 00:28:39.560 "seek_hole": false, 00:28:39.560 "seek_data": false, 00:28:39.560 "copy": true, 00:28:39.560 "nvme_iov_md": false 00:28:39.560 }, 00:28:39.560 "memory_domains": [ 00:28:39.560 { 00:28:39.560 "dma_device_id": "system", 00:28:39.560 "dma_device_type": 1 00:28:39.560 }, 00:28:39.560 { 00:28:39.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:39.560 "dma_device_type": 2 00:28:39.560 } 00:28:39.560 ], 00:28:39.560 "driver_specific": {} 00:28:39.560 } 00:28:39.560 ] 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
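The entries around this point come from the harness helper verify_raid_bdev_state, invoked here as verify_raid_bdev_state Existed_Raid configuring raid5f 64 3: it caches its expectations in shell locals, reads the raid bdev back with bdev_raid_get_bdevs over rpc.py, and picks the descriptor out of the returned JSON with jq. Below is a minimal standalone sketch of that read-and-check step, built only from the rpc.py invocations and jq filters visible in this trace; the variable names are illustrative, and the socket and rpc.py paths are the ones used throughout this run:

    # Sketch of the state check traced above; expected values mirror the
    # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 call.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Fetch all raid bdev descriptors and keep the one under test.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')

    state=$(jq -r .state <<< "$info")            # expected: configuring
    level=$(jq -r .raid_level <<< "$info")       # expected: raid5f
    strip=$(jq -r .strip_size_kb <<< "$info")    # expected: 64
    discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")

    # Mirror the harness's pass/fail decision on the asserted fields.
    [[ $state == configuring && $level == raid5f && $strip == 64 ]] ||
        echo "unexpected raid state: $state/$level/$strip (discovered=$discovered)" >&2

The helper also tracks num_base_bdevs_operational and the base_bdevs_list entries seen later in this trace; those are read from the same descriptor with the same jq field-access pattern.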
00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.560 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:39.819 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:39.819 "name": "Existed_Raid", 00:28:39.819 "uuid": "5048c9f4-2b29-4df0-847a-61fa2c6e2db5", 00:28:39.819 "strip_size_kb": 64, 00:28:39.819 "state": "configuring", 00:28:39.819 "raid_level": "raid5f", 00:28:39.819 "superblock": true, 00:28:39.819 "num_base_bdevs": 3, 00:28:39.819 "num_base_bdevs_discovered": 2, 00:28:39.819 "num_base_bdevs_operational": 3, 00:28:39.819 "base_bdevs_list": [ 00:28:39.819 { 00:28:39.819 "name": "BaseBdev1", 00:28:39.819 "uuid": "bdd8c283-0747-42ef-8eca-59c4b66d64af", 00:28:39.819 "is_configured": true, 00:28:39.819 "data_offset": 2048, 00:28:39.819 "data_size": 63488 00:28:39.819 }, 00:28:39.819 { 00:28:39.819 "name": null, 00:28:39.819 "uuid": "fdefae84-d253-4c2a-a5d7-d38e220d9d6b", 00:28:39.819 "is_configured": false, 00:28:39.819 "data_offset": 2048, 00:28:39.819 "data_size": 63488 00:28:39.819 }, 00:28:39.819 { 00:28:39.819 "name": "BaseBdev3", 00:28:39.819 "uuid": "5f6a776e-5e88-4a82-9390-b3834cb6cc12", 00:28:39.819 "is_configured": true, 00:28:39.819 "data_offset": 2048, 00:28:39.819 "data_size": 63488 00:28:39.819 } 00:28:39.819 ] 00:28:39.819 }' 00:28:39.819 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:39.819 00:12:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.078 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.078 00:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:40.336 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:28:40.336 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:28:40.595 [2024-07-25 00:12:36.222211] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:40.595 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:40.595 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:40.595 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:40.595 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:40.595 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:40.595 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:40.595 00:12:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:40.595 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:40.595 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:40.595 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:40.596 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.596 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:40.854 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:40.854 "name": "Existed_Raid", 00:28:40.854 "uuid": "5048c9f4-2b29-4df0-847a-61fa2c6e2db5", 00:28:40.854 "strip_size_kb": 64, 00:28:40.854 "state": "configuring", 00:28:40.854 "raid_level": "raid5f", 00:28:40.854 "superblock": true, 00:28:40.854 "num_base_bdevs": 3, 00:28:40.854 "num_base_bdevs_discovered": 1, 00:28:40.854 "num_base_bdevs_operational": 3, 00:28:40.854 "base_bdevs_list": [ 00:28:40.854 { 00:28:40.854 "name": "BaseBdev1", 00:28:40.854 "uuid": "bdd8c283-0747-42ef-8eca-59c4b66d64af", 00:28:40.854 "is_configured": true, 00:28:40.854 "data_offset": 2048, 00:28:40.854 "data_size": 63488 00:28:40.854 }, 00:28:40.854 { 00:28:40.854 "name": null, 00:28:40.855 "uuid": "fdefae84-d253-4c2a-a5d7-d38e220d9d6b", 00:28:40.855 "is_configured": false, 00:28:40.855 "data_offset": 2048, 00:28:40.855 "data_size": 63488 00:28:40.855 }, 00:28:40.855 { 00:28:40.855 "name": null, 00:28:40.855 "uuid": "5f6a776e-5e88-4a82-9390-b3834cb6cc12", 00:28:40.855 "is_configured": false, 00:28:40.855 "data_offset": 2048, 00:28:40.855 "data_size": 63488 00:28:40.855 } 00:28:40.855 ] 00:28:40.855 }' 00:28:40.855 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:40.855 00:12:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.113 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.113 00:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:41.372 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:28:41.372 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:41.372 [2024-07-25 00:12:37.222427] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:41.372 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:41.372 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:41.372 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:41.372 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:41.372 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:41.372 
00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:41.372 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:41.372 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:41.372 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:41.372 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:41.631 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.631 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:41.631 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:41.631 "name": "Existed_Raid", 00:28:41.631 "uuid": "5048c9f4-2b29-4df0-847a-61fa2c6e2db5", 00:28:41.631 "strip_size_kb": 64, 00:28:41.631 "state": "configuring", 00:28:41.631 "raid_level": "raid5f", 00:28:41.631 "superblock": true, 00:28:41.631 "num_base_bdevs": 3, 00:28:41.631 "num_base_bdevs_discovered": 2, 00:28:41.631 "num_base_bdevs_operational": 3, 00:28:41.631 "base_bdevs_list": [ 00:28:41.631 { 00:28:41.631 "name": "BaseBdev1", 00:28:41.631 "uuid": "bdd8c283-0747-42ef-8eca-59c4b66d64af", 00:28:41.631 "is_configured": true, 00:28:41.631 "data_offset": 2048, 00:28:41.631 "data_size": 63488 00:28:41.631 }, 00:28:41.631 { 00:28:41.631 "name": null, 00:28:41.631 "uuid": "fdefae84-d253-4c2a-a5d7-d38e220d9d6b", 00:28:41.631 "is_configured": false, 00:28:41.631 "data_offset": 2048, 00:28:41.631 "data_size": 63488 00:28:41.631 }, 00:28:41.631 { 00:28:41.631 "name": "BaseBdev3", 00:28:41.631 "uuid": "5f6a776e-5e88-4a82-9390-b3834cb6cc12", 00:28:41.631 "is_configured": true, 00:28:41.631 "data_offset": 2048, 00:28:41.631 "data_size": 63488 00:28:41.631 } 00:28:41.631 ] 00:28:41.631 }' 00:28:41.631 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:41.631 00:12:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.890 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.890 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:42.149 00:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:28:42.149 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:42.717 [2024-07-25 00:12:38.279029] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid5f 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:42.717 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.976 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:42.976 "name": "Existed_Raid", 00:28:42.976 "uuid": "5048c9f4-2b29-4df0-847a-61fa2c6e2db5", 00:28:42.976 "strip_size_kb": 64, 00:28:42.976 "state": "configuring", 00:28:42.976 "raid_level": "raid5f", 00:28:42.976 "superblock": true, 00:28:42.976 "num_base_bdevs": 3, 00:28:42.976 "num_base_bdevs_discovered": 1, 00:28:42.976 "num_base_bdevs_operational": 3, 00:28:42.976 "base_bdevs_list": [ 00:28:42.976 { 00:28:42.976 "name": null, 00:28:42.976 "uuid": "bdd8c283-0747-42ef-8eca-59c4b66d64af", 00:28:42.976 "is_configured": false, 00:28:42.976 "data_offset": 2048, 00:28:42.976 "data_size": 63488 00:28:42.976 }, 00:28:42.976 { 00:28:42.976 "name": null, 00:28:42.976 "uuid": "fdefae84-d253-4c2a-a5d7-d38e220d9d6b", 00:28:42.976 "is_configured": false, 00:28:42.976 "data_offset": 2048, 00:28:42.976 "data_size": 63488 00:28:42.976 }, 00:28:42.976 { 00:28:42.976 "name": "BaseBdev3", 00:28:42.976 "uuid": "5f6a776e-5e88-4a82-9390-b3834cb6cc12", 00:28:42.976 "is_configured": true, 00:28:42.976 "data_offset": 2048, 00:28:42.976 "data_size": 63488 00:28:42.976 } 00:28:42.976 ] 00:28:42.976 }' 00:28:42.976 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:42.976 00:12:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.235 00:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:43.494 [2024-07-25 00:12:39.339716] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:43.494 00:12:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.494 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:43.753 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:43.753 "name": "Existed_Raid", 00:28:43.753 "uuid": "5048c9f4-2b29-4df0-847a-61fa2c6e2db5", 00:28:43.753 "strip_size_kb": 64, 00:28:43.753 "state": "configuring", 00:28:43.753 "raid_level": "raid5f", 00:28:43.753 "superblock": true, 00:28:43.753 "num_base_bdevs": 3, 00:28:43.753 "num_base_bdevs_discovered": 2, 00:28:43.753 "num_base_bdevs_operational": 3, 00:28:43.753 "base_bdevs_list": [ 00:28:43.753 { 00:28:43.753 "name": null, 00:28:43.753 "uuid": "bdd8c283-0747-42ef-8eca-59c4b66d64af", 00:28:43.753 "is_configured": false, 00:28:43.753 "data_offset": 2048, 00:28:43.753 "data_size": 63488 00:28:43.753 }, 00:28:43.753 { 00:28:43.753 "name": "BaseBdev2", 00:28:43.753 "uuid": "fdefae84-d253-4c2a-a5d7-d38e220d9d6b", 00:28:43.753 "is_configured": true, 00:28:43.753 "data_offset": 2048, 00:28:43.753 "data_size": 63488 00:28:43.753 }, 00:28:43.753 { 00:28:43.753 "name": "BaseBdev3", 00:28:43.753 "uuid": "5f6a776e-5e88-4a82-9390-b3834cb6cc12", 00:28:43.753 "is_configured": true, 00:28:43.753 "data_offset": 2048, 00:28:43.753 "data_size": 63488 00:28:43.753 } 00:28:43.753 ] 00:28:43.753 }' 00:28:43.753 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:43.753 00:12:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:44.321 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.321 00:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:44.321 00:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:28:44.321 00:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:44.321 00:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.580 00:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bdd8c283-0747-42ef-8eca-59c4b66d64af 00:28:44.839 [2024-07-25 00:12:40.624246] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:44.839 NewBaseBdev 00:28:44.839 [2024-07-25 00:12:40.624744] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:28:44.839 [2024-07-25 00:12:40.624775] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:28:44.839 [2024-07-25 00:12:40.624932] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 00:28:44.839 [2024-07-25 00:12:40.630073] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:28:44.839 [2024-07-25 00:12:40.630228] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008a80 00:28:44.839 [2024-07-25 00:12:40.630572] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:44.839 00:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:28:44.839 00:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:28:44.839 00:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:44.839 00:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:44.839 00:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:44.839 00:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:44.839 00:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:45.098 00:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:45.357 [ 00:28:45.357 { 00:28:45.357 "name": "NewBaseBdev", 00:28:45.357 "aliases": [ 00:28:45.357 "bdd8c283-0747-42ef-8eca-59c4b66d64af" 00:28:45.357 ], 00:28:45.357 "product_name": "Malloc disk", 00:28:45.357 "block_size": 512, 00:28:45.357 "num_blocks": 65536, 00:28:45.357 "uuid": "bdd8c283-0747-42ef-8eca-59c4b66d64af", 00:28:45.357 "assigned_rate_limits": { 00:28:45.357 "rw_ios_per_sec": 0, 00:28:45.357 "rw_mbytes_per_sec": 0, 00:28:45.357 "r_mbytes_per_sec": 0, 00:28:45.357 "w_mbytes_per_sec": 0 00:28:45.357 }, 00:28:45.357 "claimed": true, 00:28:45.357 "claim_type": "exclusive_write", 00:28:45.357 "zoned": false, 00:28:45.357 "supported_io_types": { 00:28:45.357 "read": true, 00:28:45.357 "write": true, 00:28:45.357 "unmap": true, 00:28:45.357 "flush": true, 00:28:45.357 "reset": true, 00:28:45.357 "nvme_admin": false, 00:28:45.357 "nvme_io": false, 00:28:45.357 "nvme_io_md": false, 00:28:45.357 "write_zeroes": true, 00:28:45.357 "zcopy": true, 00:28:45.357 "get_zone_info": false, 00:28:45.357 "zone_management": false, 00:28:45.357 "zone_append": false, 00:28:45.357 "compare": false, 00:28:45.357 "compare_and_write": false, 00:28:45.357 "abort": true, 00:28:45.357 "seek_hole": false, 00:28:45.357 "seek_data": false, 00:28:45.357 "copy": true, 00:28:45.357 "nvme_iov_md": false 00:28:45.357 }, 00:28:45.357 "memory_domains": [ 00:28:45.357 { 00:28:45.357 "dma_device_id": 
"system", 00:28:45.357 "dma_device_type": 1 00:28:45.357 }, 00:28:45.357 { 00:28:45.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:45.357 "dma_device_type": 2 00:28:45.357 } 00:28:45.357 ], 00:28:45.357 "driver_specific": {} 00:28:45.357 } 00:28:45.357 ] 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.357 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:45.615 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:45.615 "name": "Existed_Raid", 00:28:45.615 "uuid": "5048c9f4-2b29-4df0-847a-61fa2c6e2db5", 00:28:45.615 "strip_size_kb": 64, 00:28:45.615 "state": "online", 00:28:45.615 "raid_level": "raid5f", 00:28:45.615 "superblock": true, 00:28:45.615 "num_base_bdevs": 3, 00:28:45.615 "num_base_bdevs_discovered": 3, 00:28:45.615 "num_base_bdevs_operational": 3, 00:28:45.615 "base_bdevs_list": [ 00:28:45.615 { 00:28:45.615 "name": "NewBaseBdev", 00:28:45.615 "uuid": "bdd8c283-0747-42ef-8eca-59c4b66d64af", 00:28:45.615 "is_configured": true, 00:28:45.615 "data_offset": 2048, 00:28:45.615 "data_size": 63488 00:28:45.615 }, 00:28:45.615 { 00:28:45.615 "name": "BaseBdev2", 00:28:45.615 "uuid": "fdefae84-d253-4c2a-a5d7-d38e220d9d6b", 00:28:45.615 "is_configured": true, 00:28:45.615 "data_offset": 2048, 00:28:45.615 "data_size": 63488 00:28:45.615 }, 00:28:45.615 { 00:28:45.615 "name": "BaseBdev3", 00:28:45.615 "uuid": "5f6a776e-5e88-4a82-9390-b3834cb6cc12", 00:28:45.615 "is_configured": true, 00:28:45.615 "data_offset": 2048, 00:28:45.615 "data_size": 63488 00:28:45.615 } 00:28:45.615 ] 00:28:45.615 }' 00:28:45.615 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:45.615 00:12:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:45.874 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:28:45.874 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # 
local raid_bdev_name=Existed_Raid 00:28:45.874 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:45.874 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:45.874 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:45.874 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:28:45.874 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:45.874 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:46.133 [2024-07-25 00:12:41.896579] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:46.133 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:46.133 "name": "Existed_Raid", 00:28:46.133 "aliases": [ 00:28:46.133 "5048c9f4-2b29-4df0-847a-61fa2c6e2db5" 00:28:46.133 ], 00:28:46.133 "product_name": "Raid Volume", 00:28:46.133 "block_size": 512, 00:28:46.133 "num_blocks": 126976, 00:28:46.133 "uuid": "5048c9f4-2b29-4df0-847a-61fa2c6e2db5", 00:28:46.133 "assigned_rate_limits": { 00:28:46.133 "rw_ios_per_sec": 0, 00:28:46.133 "rw_mbytes_per_sec": 0, 00:28:46.133 "r_mbytes_per_sec": 0, 00:28:46.133 "w_mbytes_per_sec": 0 00:28:46.133 }, 00:28:46.133 "claimed": false, 00:28:46.133 "zoned": false, 00:28:46.133 "supported_io_types": { 00:28:46.133 "read": true, 00:28:46.133 "write": true, 00:28:46.133 "unmap": false, 00:28:46.133 "flush": false, 00:28:46.133 "reset": true, 00:28:46.133 "nvme_admin": false, 00:28:46.133 "nvme_io": false, 00:28:46.133 "nvme_io_md": false, 00:28:46.133 "write_zeroes": true, 00:28:46.133 "zcopy": false, 00:28:46.133 "get_zone_info": false, 00:28:46.133 "zone_management": false, 00:28:46.133 "zone_append": false, 00:28:46.133 "compare": false, 00:28:46.133 "compare_and_write": false, 00:28:46.133 "abort": false, 00:28:46.133 "seek_hole": false, 00:28:46.133 "seek_data": false, 00:28:46.133 "copy": false, 00:28:46.133 "nvme_iov_md": false 00:28:46.133 }, 00:28:46.133 "driver_specific": { 00:28:46.133 "raid": { 00:28:46.133 "uuid": "5048c9f4-2b29-4df0-847a-61fa2c6e2db5", 00:28:46.133 "strip_size_kb": 64, 00:28:46.133 "state": "online", 00:28:46.133 "raid_level": "raid5f", 00:28:46.133 "superblock": true, 00:28:46.133 "num_base_bdevs": 3, 00:28:46.133 "num_base_bdevs_discovered": 3, 00:28:46.133 "num_base_bdevs_operational": 3, 00:28:46.133 "base_bdevs_list": [ 00:28:46.133 { 00:28:46.133 "name": "NewBaseBdev", 00:28:46.133 "uuid": "bdd8c283-0747-42ef-8eca-59c4b66d64af", 00:28:46.133 "is_configured": true, 00:28:46.133 "data_offset": 2048, 00:28:46.133 "data_size": 63488 00:28:46.133 }, 00:28:46.133 { 00:28:46.133 "name": "BaseBdev2", 00:28:46.133 "uuid": "fdefae84-d253-4c2a-a5d7-d38e220d9d6b", 00:28:46.133 "is_configured": true, 00:28:46.133 "data_offset": 2048, 00:28:46.133 "data_size": 63488 00:28:46.133 }, 00:28:46.133 { 00:28:46.133 "name": "BaseBdev3", 00:28:46.133 "uuid": "5f6a776e-5e88-4a82-9390-b3834cb6cc12", 00:28:46.133 "is_configured": true, 00:28:46.133 "data_offset": 2048, 00:28:46.133 "data_size": 63488 00:28:46.133 } 00:28:46.133 ] 00:28:46.133 } 00:28:46.133 } 00:28:46.133 }' 00:28:46.133 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:46.133 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:28:46.133 BaseBdev2 00:28:46.133 BaseBdev3' 00:28:46.133 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:46.133 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:28:46.133 00:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:46.392 "name": "NewBaseBdev", 00:28:46.392 "aliases": [ 00:28:46.392 "bdd8c283-0747-42ef-8eca-59c4b66d64af" 00:28:46.392 ], 00:28:46.392 "product_name": "Malloc disk", 00:28:46.392 "block_size": 512, 00:28:46.392 "num_blocks": 65536, 00:28:46.392 "uuid": "bdd8c283-0747-42ef-8eca-59c4b66d64af", 00:28:46.392 "assigned_rate_limits": { 00:28:46.392 "rw_ios_per_sec": 0, 00:28:46.392 "rw_mbytes_per_sec": 0, 00:28:46.392 "r_mbytes_per_sec": 0, 00:28:46.392 "w_mbytes_per_sec": 0 00:28:46.392 }, 00:28:46.392 "claimed": true, 00:28:46.392 "claim_type": "exclusive_write", 00:28:46.392 "zoned": false, 00:28:46.392 "supported_io_types": { 00:28:46.392 "read": true, 00:28:46.392 "write": true, 00:28:46.392 "unmap": true, 00:28:46.392 "flush": true, 00:28:46.392 "reset": true, 00:28:46.392 "nvme_admin": false, 00:28:46.392 "nvme_io": false, 00:28:46.392 "nvme_io_md": false, 00:28:46.392 "write_zeroes": true, 00:28:46.392 "zcopy": true, 00:28:46.392 "get_zone_info": false, 00:28:46.392 "zone_management": false, 00:28:46.392 "zone_append": false, 00:28:46.392 "compare": false, 00:28:46.392 "compare_and_write": false, 00:28:46.392 "abort": true, 00:28:46.392 "seek_hole": false, 00:28:46.392 "seek_data": false, 00:28:46.392 "copy": true, 00:28:46.392 "nvme_iov_md": false 00:28:46.392 }, 00:28:46.392 "memory_domains": [ 00:28:46.392 { 00:28:46.392 "dma_device_id": "system", 00:28:46.392 "dma_device_type": 1 00:28:46.392 }, 00:28:46.392 { 00:28:46.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:46.392 "dma_device_type": 2 00:28:46.392 } 00:28:46.392 ], 00:28:46.392 "driver_specific": {} 00:28:46.392 }' 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:46.392 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:46.651 "name": "BaseBdev2", 00:28:46.651 "aliases": [ 00:28:46.651 "fdefae84-d253-4c2a-a5d7-d38e220d9d6b" 00:28:46.651 ], 00:28:46.651 "product_name": "Malloc disk", 00:28:46.651 "block_size": 512, 00:28:46.651 "num_blocks": 65536, 00:28:46.651 "uuid": "fdefae84-d253-4c2a-a5d7-d38e220d9d6b", 00:28:46.651 "assigned_rate_limits": { 00:28:46.651 "rw_ios_per_sec": 0, 00:28:46.651 "rw_mbytes_per_sec": 0, 00:28:46.651 "r_mbytes_per_sec": 0, 00:28:46.651 "w_mbytes_per_sec": 0 00:28:46.651 }, 00:28:46.651 "claimed": true, 00:28:46.651 "claim_type": "exclusive_write", 00:28:46.651 "zoned": false, 00:28:46.651 "supported_io_types": { 00:28:46.651 "read": true, 00:28:46.651 "write": true, 00:28:46.651 "unmap": true, 00:28:46.651 "flush": true, 00:28:46.651 "reset": true, 00:28:46.651 "nvme_admin": false, 00:28:46.651 "nvme_io": false, 00:28:46.651 "nvme_io_md": false, 00:28:46.651 "write_zeroes": true, 00:28:46.651 "zcopy": true, 00:28:46.651 "get_zone_info": false, 00:28:46.651 "zone_management": false, 00:28:46.651 "zone_append": false, 00:28:46.651 "compare": false, 00:28:46.651 "compare_and_write": false, 00:28:46.651 "abort": true, 00:28:46.651 "seek_hole": false, 00:28:46.651 "seek_data": false, 00:28:46.651 "copy": true, 00:28:46.651 "nvme_iov_md": false 00:28:46.651 }, 00:28:46.651 "memory_domains": [ 00:28:46.651 { 00:28:46.651 "dma_device_id": "system", 00:28:46.651 "dma_device_type": 1 00:28:46.651 }, 00:28:46.651 { 00:28:46.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:46.651 "dma_device_type": 2 00:28:46.651 } 00:28:46.651 ], 00:28:46.651 "driver_specific": {} 00:28:46.651 }' 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:46.651 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:46.910 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:46.910 "name": "BaseBdev3", 00:28:46.910 "aliases": [ 00:28:46.910 "5f6a776e-5e88-4a82-9390-b3834cb6cc12" 00:28:46.910 ], 00:28:46.910 "product_name": "Malloc disk", 00:28:46.910 "block_size": 512, 00:28:46.910 "num_blocks": 65536, 00:28:46.910 "uuid": "5f6a776e-5e88-4a82-9390-b3834cb6cc12", 00:28:46.910 "assigned_rate_limits": { 00:28:46.910 "rw_ios_per_sec": 0, 00:28:46.910 "rw_mbytes_per_sec": 0, 00:28:46.910 "r_mbytes_per_sec": 0, 00:28:46.910 "w_mbytes_per_sec": 0 00:28:46.910 }, 00:28:46.910 "claimed": true, 00:28:46.910 "claim_type": "exclusive_write", 00:28:46.910 "zoned": false, 00:28:46.910 "supported_io_types": { 00:28:46.910 "read": true, 00:28:46.910 "write": true, 00:28:46.910 "unmap": true, 00:28:46.910 "flush": true, 00:28:46.910 "reset": true, 00:28:46.910 "nvme_admin": false, 00:28:46.910 "nvme_io": false, 00:28:46.910 "nvme_io_md": false, 00:28:46.910 "write_zeroes": true, 00:28:46.910 "zcopy": true, 00:28:46.910 "get_zone_info": false, 00:28:46.910 "zone_management": false, 00:28:46.910 "zone_append": false, 00:28:46.910 "compare": false, 00:28:46.910 "compare_and_write": false, 00:28:46.910 "abort": true, 00:28:46.910 "seek_hole": false, 00:28:46.910 "seek_data": false, 00:28:46.910 "copy": true, 00:28:46.910 "nvme_iov_md": false 00:28:46.910 }, 00:28:46.910 "memory_domains": [ 00:28:46.910 { 00:28:46.910 "dma_device_id": "system", 00:28:46.910 "dma_device_type": 1 00:28:46.910 }, 00:28:46.910 { 00:28:46.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:46.911 "dma_device_type": 2 00:28:46.911 } 00:28:46.911 ], 00:28:46.911 "driver_specific": {} 00:28:46.911 }' 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:46.911 00:12:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:47.170 [2024-07-25 00:12:43.012733] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:47.170 [2024-07-25 00:12:43.013002] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:47.170 [2024-07-25 00:12:43.013120] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:47.170 [2024-07-25 00:12:43.013531] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:47.170 [2024-07-25 00:12:43.013558] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name Existed_Raid, state offline 00:28:47.170 00:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 103097 00:28:47.170 00:12:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 103097 ']' 00:28:47.170 00:12:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 103097 00:28:47.170 00:12:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:28:47.429 00:12:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:47.429 00:12:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103097 00:28:47.429 00:12:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:47.429 00:12:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:47.429 killing process with pid 103097 00:28:47.429 00:12:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103097' 00:28:47.429 00:12:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 103097 00:28:47.429 [2024-07-25 00:12:43.065824] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:47.429 00:12:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 103097 00:28:47.689 [2024-07-25 00:12:43.337430] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:48.623 ************************************ 00:28:48.623 END TEST raid5f_state_function_test_sb 00:28:48.623 ************************************ 00:28:48.623 00:12:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:28:48.623 00:28:48.623 real 0m23.225s 00:28:48.623 user 0m40.362s 00:28:48.623 sys 0m3.735s 00:28:48.623 00:12:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:48.623 00:12:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.623 00:12:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:28:48.623 00:12:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:48.623 00:12:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:48.623 00:12:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:48.623 ************************************ 00:28:48.623 START TEST raid5f_superblock_test 00:28:48.623 ************************************ 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 
-- # raid_superblock_test raid5f 3 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid5f 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid5f '!=' raid1 ']' 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=103960 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 103960 /var/tmp/spdk-raid.sock 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 103960 ']' 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:48.623 00:12:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:48.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:48.624 00:12:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:48.624 00:12:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:48.624 00:12:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:48.882 [2024-07-25 00:12:44.519989] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
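The @426-@428 entries above boil down to: launch a bare bdev service on a private RPC socket with RAID debug logging enabled, then block until it answers. As a sketch (the backgrounding and $! capture are assumed; the trace shows only the already-resolved pid 103960):

    # bdev_svc is SPDK's minimal app that just hosts bdevs for RPC-driven tests.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # waitforlisten (common/autotest_common.sh) polls until the socket accepts RPCs.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock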
00:28:48.882 [2024-07-25 00:12:44.520171] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103960 ] 00:28:48.882 [2024-07-25 00:12:44.695270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.140 [2024-07-25 00:12:44.854647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.399 [2024-07-25 00:12:45.020359] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:49.658 00:12:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:49.658 00:12:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:28:49.658 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:28:49.658 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:49.658 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:28:49.658 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:28:49.658 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:49.658 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:49.658 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:49.658 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:49.658 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:28:49.917 malloc1 00:28:49.917 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:50.176 [2024-07-25 00:12:45.861273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:50.176 [2024-07-25 00:12:45.861363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:50.176 [2024-07-25 00:12:45.861396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:28:50.176 [2024-07-25 00:12:45.861409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:50.176 [2024-07-25 00:12:45.863657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:50.176 [2024-07-25 00:12:45.863702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:50.176 pt1 00:28:50.176 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:50.176 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:50.176 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:28:50.176 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:28:50.176 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:50.176 00:12:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:50.176 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:50.176 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:50.176 00:12:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:28:50.434 malloc2 00:28:50.434 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:50.693 [2024-07-25 00:12:46.309408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:50.693 [2024-07-25 00:12:46.309644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:50.693 [2024-07-25 00:12:46.309718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:28:50.693 [2024-07-25 00:12:46.309918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:50.693 [2024-07-25 00:12:46.312301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:50.693 [2024-07-25 00:12:46.312487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:50.693 pt2 00:28:50.693 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:50.693 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:50.693 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:28:50.693 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:28:50.693 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:50.693 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:50.693 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:50.693 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:50.693 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:28:50.693 malloc3 00:28:50.693 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:50.952 [2024-07-25 00:12:46.740492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:50.952 [2024-07-25 00:12:46.740579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:50.952 [2024-07-25 00:12:46.740610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:28:50.952 [2024-07-25 00:12:46.740624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:50.952 [2024-07-25 00:12:46.742894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:50.952 [2024-07-25 00:12:46.743085] vbdev_passthru.c: 
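The malloc1/pt1 and malloc2/pt2 pairs above, and pt3 just below, all follow the same two-RPC pattern from the @431-@441 loop. A condensed sketch, with a plain counter loop standing in for the script's bookkeeping arrays:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        # 32 MiB malloc backing bdev with 512-byte blocks (65536 blocks total).
        $rpc_py bdev_malloc_create 32 512 -b "malloc$i"
        # Passthru wrapper with a fixed UUID so superblock checks are reproducible.
        $rpc_py bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done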
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:50.952 pt3 00:28:50.952 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:50.952 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:50.952 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:28:51.211 [2024-07-25 00:12:46.944559] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:51.211 [2024-07-25 00:12:46.946464] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:51.211 [2024-07-25 00:12:46.946539] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:51.211 [2024-07-25 00:12:46.946742] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:28:51.211 [2024-07-25 00:12:46.946762] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:28:51.211 [2024-07-25 00:12:46.946892] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:28:51.211 [2024-07-25 00:12:46.951371] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:28:51.211 [2024-07-25 00:12:46.951581] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:28:51.211 [2024-07-25 00:12:46.952039] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.211 00:12:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.471 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:51.471 "name": "raid_bdev1", 00:28:51.471 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:28:51.471 "strip_size_kb": 64, 00:28:51.471 "state": "online", 00:28:51.471 "raid_level": "raid5f", 00:28:51.471 "superblock": true, 00:28:51.471 "num_base_bdevs": 3, 00:28:51.471 "num_base_bdevs_discovered": 3, 00:28:51.471 "num_base_bdevs_operational": 3, 00:28:51.471 
"base_bdevs_list": [ 00:28:51.471 { 00:28:51.471 "name": "pt1", 00:28:51.471 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:51.471 "is_configured": true, 00:28:51.471 "data_offset": 2048, 00:28:51.471 "data_size": 63488 00:28:51.471 }, 00:28:51.471 { 00:28:51.471 "name": "pt2", 00:28:51.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:51.471 "is_configured": true, 00:28:51.471 "data_offset": 2048, 00:28:51.471 "data_size": 63488 00:28:51.471 }, 00:28:51.471 { 00:28:51.471 "name": "pt3", 00:28:51.471 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:51.471 "is_configured": true, 00:28:51.471 "data_offset": 2048, 00:28:51.471 "data_size": 63488 00:28:51.471 } 00:28:51.471 ] 00:28:51.471 }' 00:28:51.471 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:51.471 00:12:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:51.730 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:28:51.730 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:51.730 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:51.730 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:51.730 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:51.730 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:51.730 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:51.730 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:51.989 [2024-07-25 00:12:47.665094] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:51.989 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:51.989 "name": "raid_bdev1", 00:28:51.989 "aliases": [ 00:28:51.989 "86934dcb-26e0-403a-a079-22d134ce3618" 00:28:51.989 ], 00:28:51.989 "product_name": "Raid Volume", 00:28:51.989 "block_size": 512, 00:28:51.989 "num_blocks": 126976, 00:28:51.989 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:28:51.989 "assigned_rate_limits": { 00:28:51.989 "rw_ios_per_sec": 0, 00:28:51.989 "rw_mbytes_per_sec": 0, 00:28:51.989 "r_mbytes_per_sec": 0, 00:28:51.989 "w_mbytes_per_sec": 0 00:28:51.989 }, 00:28:51.989 "claimed": false, 00:28:51.989 "zoned": false, 00:28:51.989 "supported_io_types": { 00:28:51.989 "read": true, 00:28:51.989 "write": true, 00:28:51.989 "unmap": false, 00:28:51.989 "flush": false, 00:28:51.989 "reset": true, 00:28:51.989 "nvme_admin": false, 00:28:51.989 "nvme_io": false, 00:28:51.989 "nvme_io_md": false, 00:28:51.989 "write_zeroes": true, 00:28:51.989 "zcopy": false, 00:28:51.989 "get_zone_info": false, 00:28:51.989 "zone_management": false, 00:28:51.989 "zone_append": false, 00:28:51.989 "compare": false, 00:28:51.989 "compare_and_write": false, 00:28:51.989 "abort": false, 00:28:51.989 "seek_hole": false, 00:28:51.989 "seek_data": false, 00:28:51.989 "copy": false, 00:28:51.989 "nvme_iov_md": false 00:28:51.989 }, 00:28:51.989 "driver_specific": { 00:28:51.989 "raid": { 00:28:51.989 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:28:51.989 "strip_size_kb": 64, 00:28:51.989 "state": "online", 00:28:51.989 "raid_level": "raid5f", 
00:28:51.989 "superblock": true, 00:28:51.989 "num_base_bdevs": 3, 00:28:51.989 "num_base_bdevs_discovered": 3, 00:28:51.989 "num_base_bdevs_operational": 3, 00:28:51.989 "base_bdevs_list": [ 00:28:51.989 { 00:28:51.989 "name": "pt1", 00:28:51.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:51.989 "is_configured": true, 00:28:51.989 "data_offset": 2048, 00:28:51.989 "data_size": 63488 00:28:51.989 }, 00:28:51.989 { 00:28:51.989 "name": "pt2", 00:28:51.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:51.989 "is_configured": true, 00:28:51.989 "data_offset": 2048, 00:28:51.989 "data_size": 63488 00:28:51.989 }, 00:28:51.989 { 00:28:51.989 "name": "pt3", 00:28:51.989 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:51.989 "is_configured": true, 00:28:51.989 "data_offset": 2048, 00:28:51.989 "data_size": 63488 00:28:51.989 } 00:28:51.989 ] 00:28:51.989 } 00:28:51.989 } 00:28:51.989 }' 00:28:51.989 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:51.989 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:51.989 pt2 00:28:51.989 pt3' 00:28:51.989 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:51.989 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:51.989 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:52.258 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:52.258 "name": "pt1", 00:28:52.258 "aliases": [ 00:28:52.258 "00000000-0000-0000-0000-000000000001" 00:28:52.258 ], 00:28:52.258 "product_name": "passthru", 00:28:52.258 "block_size": 512, 00:28:52.258 "num_blocks": 65536, 00:28:52.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:52.258 "assigned_rate_limits": { 00:28:52.258 "rw_ios_per_sec": 0, 00:28:52.258 "rw_mbytes_per_sec": 0, 00:28:52.258 "r_mbytes_per_sec": 0, 00:28:52.258 "w_mbytes_per_sec": 0 00:28:52.258 }, 00:28:52.258 "claimed": true, 00:28:52.259 "claim_type": "exclusive_write", 00:28:52.259 "zoned": false, 00:28:52.259 "supported_io_types": { 00:28:52.259 "read": true, 00:28:52.259 "write": true, 00:28:52.259 "unmap": true, 00:28:52.259 "flush": true, 00:28:52.259 "reset": true, 00:28:52.259 "nvme_admin": false, 00:28:52.259 "nvme_io": false, 00:28:52.259 "nvme_io_md": false, 00:28:52.259 "write_zeroes": true, 00:28:52.259 "zcopy": true, 00:28:52.259 "get_zone_info": false, 00:28:52.259 "zone_management": false, 00:28:52.259 "zone_append": false, 00:28:52.259 "compare": false, 00:28:52.259 "compare_and_write": false, 00:28:52.259 "abort": true, 00:28:52.259 "seek_hole": false, 00:28:52.259 "seek_data": false, 00:28:52.259 "copy": true, 00:28:52.259 "nvme_iov_md": false 00:28:52.259 }, 00:28:52.259 "memory_domains": [ 00:28:52.259 { 00:28:52.259 "dma_device_id": "system", 00:28:52.259 "dma_device_type": 1 00:28:52.259 }, 00:28:52.259 { 00:28:52.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:52.259 "dma_device_type": 2 00:28:52.259 } 00:28:52.259 ], 00:28:52.259 "driver_specific": { 00:28:52.259 "passthru": { 00:28:52.259 "name": "pt1", 00:28:52.259 "base_bdev_name": "malloc1" 00:28:52.259 } 00:28:52.259 } 00:28:52.259 }' 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:52.259 00:12:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:52.531 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:52.531 "name": "pt2", 00:28:52.531 "aliases": [ 00:28:52.531 "00000000-0000-0000-0000-000000000002" 00:28:52.531 ], 00:28:52.531 "product_name": "passthru", 00:28:52.531 "block_size": 512, 00:28:52.531 "num_blocks": 65536, 00:28:52.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:52.531 "assigned_rate_limits": { 00:28:52.531 "rw_ios_per_sec": 0, 00:28:52.531 "rw_mbytes_per_sec": 0, 00:28:52.531 "r_mbytes_per_sec": 0, 00:28:52.531 "w_mbytes_per_sec": 0 00:28:52.531 }, 00:28:52.531 "claimed": true, 00:28:52.531 "claim_type": "exclusive_write", 00:28:52.531 "zoned": false, 00:28:52.531 "supported_io_types": { 00:28:52.531 "read": true, 00:28:52.531 "write": true, 00:28:52.531 "unmap": true, 00:28:52.531 "flush": true, 00:28:52.531 "reset": true, 00:28:52.531 "nvme_admin": false, 00:28:52.531 "nvme_io": false, 00:28:52.531 "nvme_io_md": false, 00:28:52.532 "write_zeroes": true, 00:28:52.532 "zcopy": true, 00:28:52.532 "get_zone_info": false, 00:28:52.532 "zone_management": false, 00:28:52.532 "zone_append": false, 00:28:52.532 "compare": false, 00:28:52.532 "compare_and_write": false, 00:28:52.532 "abort": true, 00:28:52.532 "seek_hole": false, 00:28:52.532 "seek_data": false, 00:28:52.532 "copy": true, 00:28:52.532 "nvme_iov_md": false 00:28:52.532 }, 00:28:52.532 "memory_domains": [ 00:28:52.532 { 00:28:52.532 "dma_device_id": "system", 00:28:52.532 "dma_device_type": 1 00:28:52.532 }, 00:28:52.532 { 00:28:52.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:52.532 "dma_device_type": 2 00:28:52.532 } 00:28:52.532 ], 00:28:52.532 "driver_specific": { 00:28:52.532 "passthru": { 00:28:52.532 "name": "pt2", 00:28:52.532 "base_bdev_name": "malloc2" 00:28:52.532 } 00:28:52.532 } 00:28:52.532 }' 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:52.532 00:12:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:52.532 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:28:52.790 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:52.790 "name": "pt3", 00:28:52.790 "aliases": [ 00:28:52.790 "00000000-0000-0000-0000-000000000003" 00:28:52.790 ], 00:28:52.790 "product_name": "passthru", 00:28:52.790 "block_size": 512, 00:28:52.790 "num_blocks": 65536, 00:28:52.790 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:52.790 "assigned_rate_limits": { 00:28:52.790 "rw_ios_per_sec": 0, 00:28:52.790 "rw_mbytes_per_sec": 0, 00:28:52.790 "r_mbytes_per_sec": 0, 00:28:52.790 "w_mbytes_per_sec": 0 00:28:52.790 }, 00:28:52.790 "claimed": true, 00:28:52.790 "claim_type": "exclusive_write", 00:28:52.790 "zoned": false, 00:28:52.790 "supported_io_types": { 00:28:52.790 "read": true, 00:28:52.790 "write": true, 00:28:52.790 "unmap": true, 00:28:52.790 "flush": true, 00:28:52.790 "reset": true, 00:28:52.790 "nvme_admin": false, 00:28:52.790 "nvme_io": false, 00:28:52.790 "nvme_io_md": false, 00:28:52.790 "write_zeroes": true, 00:28:52.790 "zcopy": true, 00:28:52.790 "get_zone_info": false, 00:28:52.790 "zone_management": false, 00:28:52.790 "zone_append": false, 00:28:52.790 "compare": false, 00:28:52.790 "compare_and_write": false, 00:28:52.790 "abort": true, 00:28:52.790 "seek_hole": false, 00:28:52.790 "seek_data": false, 00:28:52.790 "copy": true, 00:28:52.790 "nvme_iov_md": false 00:28:52.790 }, 00:28:52.790 "memory_domains": [ 00:28:52.790 { 00:28:52.790 "dma_device_id": "system", 00:28:52.790 "dma_device_type": 1 00:28:52.790 }, 00:28:52.790 { 00:28:52.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:52.790 "dma_device_type": 2 00:28:52.791 } 00:28:52.791 ], 00:28:52.791 "driver_specific": { 00:28:52.791 "passthru": { 00:28:52.791 "name": "pt3", 00:28:52.791 "base_bdev_name": "malloc3" 00:28:52.791 } 00:28:52.791 } 00:28:52.791 }' 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:52.791 00:12:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:52.791 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:28:53.049 [2024-07-25 00:12:48.793498] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:53.049 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=86934dcb-26e0-403a-a079-22d134ce3618 00:28:53.049 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 86934dcb-26e0-403a-a079-22d134ce3618 ']' 00:28:53.049 00:12:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:53.308 [2024-07-25 00:12:49.069417] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:53.308 [2024-07-25 00:12:49.069637] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:53.308 [2024-07-25 00:12:49.069737] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:53.308 [2024-07-25 00:12:49.069879] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:53.308 [2024-07-25 00:12:49.069898] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:28:53.308 00:12:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.308 00:12:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:28:53.567 00:12:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:28:53.567 00:12:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:28:53.567 00:12:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:53.567 00:12:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:53.825 00:12:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:53.825 00:12:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:54.084 00:12:49 bdev_raid.raid5f_superblock_test -- 
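The teardown just traced (@450, @456, @457): capture the volume's UUID for the re-assembly checks later, delete the array, and confirm no raid bdev is left; the passthru members are torn down one by one right after this. Sketch:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_bdev_uuid=$($rpc_py bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [[ -n $raid_bdev_uuid ]]        # illustrative; @451 performs the same emptiness test
    $rpc_py bdev_raid_delete raid_bdev1
    # With the array gone, the all-listing should come back empty (@457).
    [[ -z $($rpc_py bdev_raid_get_bdevs all | jq -r '.[]') ]]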
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:54.084 00:12:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:54.343 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:28:54.343 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:54.602 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:28:54.861 [2024-07-25 00:12:50.473888] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:54.861 [2024-07-25 00:12:50.476325] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:54.861 [2024-07-25 00:12:50.476558] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:28:54.861 [2024-07-25 00:12:50.476642] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:54.861 [2024-07-25 00:12:50.476723] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:54.861 [2024-07-25 00:12:50.476755] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:28:54.861 [2024-07-25 00:12:50.476779] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:28:54.861 [2024-07-25 00:12:50.476790] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state configuring 00:28:54.861 request: 00:28:54.861 { 00:28:54.861 "name": "raid_bdev1", 00:28:54.861 "raid_level": "raid5f", 00:28:54.861 "base_bdevs": [ 00:28:54.861 "malloc1", 00:28:54.861 "malloc2", 00:28:54.861 "malloc3" 00:28:54.861 ], 00:28:54.861 "strip_size_kb": 64, 00:28:54.861 "superblock": false, 00:28:54.861 "method": "bdev_raid_create", 00:28:54.861 "req_id": 1 00:28:54.861 } 00:28:54.861 Got JSON-RPC error response 00:28:54.861 response: 00:28:54.861 { 00:28:54.861 "code": -17, 00:28:54.861 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:54.861 } 00:28:54.861 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:28:54.861 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:54.861 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:54.861 00:12:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:54.862 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:28:54.862 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.862 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:28:54.862 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:28:54.862 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:55.121 [2024-07-25 00:12:50.925950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:55.121 [2024-07-25 00:12:50.926029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:55.121 [2024-07-25 00:12:50.926057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:28:55.121 [2024-07-25 00:12:50.926069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:55.121 [2024-07-25 00:12:50.928424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:55.121 [2024-07-25 00:12:50.928466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:55.121 [2024-07-25 00:12:50.928578] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:55.121 [2024-07-25 00:12:50.928642] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:55.121 pt1 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
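The -17 error above is the point of this step: the malloc bdevs still carry superblocks from the deleted array, so building a fresh raid directly on them must be refused, and the NOT helper from common/autotest_common.sh asserts that the RPC fails. Sketch of the @472 call:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Expected to fail with "File exists": each malloc bdev's superblock names
    # a different (already deleted) raid bdev.
    NOT $rpc_py bdev_raid_create -z 64 -r raid5f \
        -b 'malloc1 malloc2 malloc3' -n raid_bdev1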
num_base_bdevs_operational=3 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.121 00:12:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.380 00:12:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:55.380 "name": "raid_bdev1", 00:28:55.380 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:28:55.380 "strip_size_kb": 64, 00:28:55.380 "state": "configuring", 00:28:55.380 "raid_level": "raid5f", 00:28:55.380 "superblock": true, 00:28:55.380 "num_base_bdevs": 3, 00:28:55.380 "num_base_bdevs_discovered": 1, 00:28:55.380 "num_base_bdevs_operational": 3, 00:28:55.380 "base_bdevs_list": [ 00:28:55.380 { 00:28:55.380 "name": "pt1", 00:28:55.380 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:55.380 "is_configured": true, 00:28:55.380 "data_offset": 2048, 00:28:55.380 "data_size": 63488 00:28:55.380 }, 00:28:55.380 { 00:28:55.380 "name": null, 00:28:55.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:55.380 "is_configured": false, 00:28:55.380 "data_offset": 2048, 00:28:55.380 "data_size": 63488 00:28:55.380 }, 00:28:55.380 { 00:28:55.380 "name": null, 00:28:55.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:55.380 "is_configured": false, 00:28:55.380 "data_offset": 2048, 00:28:55.380 "data_size": 63488 00:28:55.380 } 00:28:55.380 ] 00:28:55.380 }' 00:28:55.380 00:12:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:55.380 00:12:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:55.948 00:12:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:28:55.948 00:12:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:55.948 [2024-07-25 00:12:51.722257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:55.948 [2024-07-25 00:12:51.722359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:55.948 [2024-07-25 00:12:51.722391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:28:55.948 [2024-07-25 00:12:51.722404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:55.948 [2024-07-25 00:12:51.722934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:55.948 [2024-07-25 00:12:51.722958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:55.948 [2024-07-25 00:12:51.723069] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:55.948 [2024-07-25 00:12:51.723103] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:55.948 pt2 00:28:55.948 00:12:51 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:56.207 [2024-07-25 00:12:51.998398] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.207 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.466 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:56.466 "name": "raid_bdev1", 00:28:56.466 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:28:56.466 "strip_size_kb": 64, 00:28:56.466 "state": "configuring", 00:28:56.466 "raid_level": "raid5f", 00:28:56.466 "superblock": true, 00:28:56.466 "num_base_bdevs": 3, 00:28:56.466 "num_base_bdevs_discovered": 1, 00:28:56.466 "num_base_bdevs_operational": 3, 00:28:56.466 "base_bdevs_list": [ 00:28:56.466 { 00:28:56.466 "name": "pt1", 00:28:56.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:56.466 "is_configured": true, 00:28:56.466 "data_offset": 2048, 00:28:56.466 "data_size": 63488 00:28:56.466 }, 00:28:56.466 { 00:28:56.466 "name": null, 00:28:56.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:56.466 "is_configured": false, 00:28:56.466 "data_offset": 2048, 00:28:56.466 "data_size": 63488 00:28:56.466 }, 00:28:56.466 { 00:28:56.466 "name": null, 00:28:56.466 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:56.466 "is_configured": false, 00:28:56.466 "data_offset": 2048, 00:28:56.466 "data_size": 63488 00:28:56.466 } 00:28:56.466 ] 00:28:56.466 }' 00:28:56.466 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:56.466 00:12:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:56.726 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:28:56.726 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:56.726 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:56.984 [2024-07-25 00:12:52.766559] 
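The sequence above is deliberate: pt2 is re-created, immediately claimed through its superblock, then yanked back out with bdev_passthru_delete (@488) to exercise base-bdev removal; the array survives in configuring state with only pt1 discovered. A sketch of the removal plus the check that follows (field assertions illustrative):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc_py bdev_passthru_delete pt2
    # raid_bdev1 is not destroyed; it drops back to "configuring" with one
    # fewer member discovered.
    tmp=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(echo "$tmp" | jq -r .state) == configuring ]]
    [[ $(echo "$tmp" | jq .num_base_bdevs_discovered) == 1 ]]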
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:56.984 [2024-07-25 00:12:52.766648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:56.984 [2024-07-25 00:12:52.766671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:28:56.984 [2024-07-25 00:12:52.766687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:56.984 [2024-07-25 00:12:52.767163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:56.984 [2024-07-25 00:12:52.767191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:56.984 [2024-07-25 00:12:52.767298] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:56.984 [2024-07-25 00:12:52.767338] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:56.984 pt2 00:28:56.984 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:28:56.984 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:56.984 00:12:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:57.243 [2024-07-25 00:12:53.018625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:57.243 [2024-07-25 00:12:53.018898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:57.243 [2024-07-25 00:12:53.018965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:28:57.243 [2024-07-25 00:12:53.019184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:57.243 [2024-07-25 00:12:53.019759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:57.243 [2024-07-25 00:12:53.019990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:57.243 [2024-07-25 00:12:53.020211] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:28:57.243 [2024-07-25 00:12:53.020356] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:57.243 [2024-07-25 00:12:53.020595] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80 00:28:57.243 [2024-07-25 00:12:53.020724] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:28:57.243 [2024-07-25 00:12:53.020877] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:28:57.243 [2024-07-25 00:12:53.025206] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80 00:28:57.243 [2024-07-25 00:12:53.025375] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80 00:28:57.243 [2024-07-25 00:12:53.025720] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:57.243 pt3 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- 
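Note what did not happen above: no second bdev_raid_create. Re-creating pt2 and pt3 is enough; examine finds the raid superblock on each new passthru, claims it, and flips raid_bdev1 back online once all three members are present. A sketch of the resume loop behind @493-@494, with the loop bounds simplified from the script's arrays:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 2 3; do
        $rpc_py bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # Auto-assembly: once every superblock is matched, state goes
    # configuring -> online without any explicit create call.
    tmp=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(echo "$tmp" | jq -r .state) == online ]]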
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.243 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.502 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:57.502 "name": "raid_bdev1", 00:28:57.502 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:28:57.502 "strip_size_kb": 64, 00:28:57.502 "state": "online", 00:28:57.502 "raid_level": "raid5f", 00:28:57.502 "superblock": true, 00:28:57.502 "num_base_bdevs": 3, 00:28:57.502 "num_base_bdevs_discovered": 3, 00:28:57.502 "num_base_bdevs_operational": 3, 00:28:57.502 "base_bdevs_list": [ 00:28:57.502 { 00:28:57.502 "name": "pt1", 00:28:57.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:57.502 "is_configured": true, 00:28:57.502 "data_offset": 2048, 00:28:57.502 "data_size": 63488 00:28:57.502 }, 00:28:57.502 { 00:28:57.502 "name": "pt2", 00:28:57.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:57.502 "is_configured": true, 00:28:57.502 "data_offset": 2048, 00:28:57.502 "data_size": 63488 00:28:57.502 }, 00:28:57.502 { 00:28:57.502 "name": "pt3", 00:28:57.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:57.502 "is_configured": true, 00:28:57.502 "data_offset": 2048, 00:28:57.502 "data_size": 63488 00:28:57.502 } 00:28:57.502 ] 00:28:57.502 }' 00:28:57.502 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:57.502 00:12:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # 
jq '.[]' 00:28:58.068 [2024-07-25 00:12:53.802757] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:58.068 "name": "raid_bdev1", 00:28:58.068 "aliases": [ 00:28:58.068 "86934dcb-26e0-403a-a079-22d134ce3618" 00:28:58.068 ], 00:28:58.068 "product_name": "Raid Volume", 00:28:58.068 "block_size": 512, 00:28:58.068 "num_blocks": 126976, 00:28:58.068 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:28:58.068 "assigned_rate_limits": { 00:28:58.068 "rw_ios_per_sec": 0, 00:28:58.068 "rw_mbytes_per_sec": 0, 00:28:58.068 "r_mbytes_per_sec": 0, 00:28:58.068 "w_mbytes_per_sec": 0 00:28:58.068 }, 00:28:58.068 "claimed": false, 00:28:58.068 "zoned": false, 00:28:58.068 "supported_io_types": { 00:28:58.068 "read": true, 00:28:58.068 "write": true, 00:28:58.068 "unmap": false, 00:28:58.068 "flush": false, 00:28:58.068 "reset": true, 00:28:58.068 "nvme_admin": false, 00:28:58.068 "nvme_io": false, 00:28:58.068 "nvme_io_md": false, 00:28:58.068 "write_zeroes": true, 00:28:58.068 "zcopy": false, 00:28:58.068 "get_zone_info": false, 00:28:58.068 "zone_management": false, 00:28:58.068 "zone_append": false, 00:28:58.068 "compare": false, 00:28:58.068 "compare_and_write": false, 00:28:58.068 "abort": false, 00:28:58.068 "seek_hole": false, 00:28:58.068 "seek_data": false, 00:28:58.068 "copy": false, 00:28:58.068 "nvme_iov_md": false 00:28:58.068 }, 00:28:58.068 "driver_specific": { 00:28:58.068 "raid": { 00:28:58.068 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:28:58.068 "strip_size_kb": 64, 00:28:58.068 "state": "online", 00:28:58.068 "raid_level": "raid5f", 00:28:58.068 "superblock": true, 00:28:58.068 "num_base_bdevs": 3, 00:28:58.068 "num_base_bdevs_discovered": 3, 00:28:58.068 "num_base_bdevs_operational": 3, 00:28:58.068 "base_bdevs_list": [ 00:28:58.068 { 00:28:58.068 "name": "pt1", 00:28:58.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:58.068 "is_configured": true, 00:28:58.068 "data_offset": 2048, 00:28:58.068 "data_size": 63488 00:28:58.068 }, 00:28:58.068 { 00:28:58.068 "name": "pt2", 00:28:58.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:58.068 "is_configured": true, 00:28:58.068 "data_offset": 2048, 00:28:58.068 "data_size": 63488 00:28:58.068 }, 00:28:58.068 { 00:28:58.068 "name": "pt3", 00:28:58.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:58.068 "is_configured": true, 00:28:58.068 "data_offset": 2048, 00:28:58.068 "data_size": 63488 00:28:58.068 } 00:28:58.068 ] 00:28:58.068 } 00:28:58.068 } 00:28:58.068 }' 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:58.068 pt2 00:28:58.068 pt3' 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:58.068 00:12:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:58.327 "name": "pt1", 00:28:58.327 "aliases": [ 00:28:58.327 "00000000-0000-0000-0000-000000000001" 00:28:58.327 ], 
00:28:58.327 "product_name": "passthru", 00:28:58.327 "block_size": 512, 00:28:58.327 "num_blocks": 65536, 00:28:58.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:58.327 "assigned_rate_limits": { 00:28:58.327 "rw_ios_per_sec": 0, 00:28:58.327 "rw_mbytes_per_sec": 0, 00:28:58.327 "r_mbytes_per_sec": 0, 00:28:58.327 "w_mbytes_per_sec": 0 00:28:58.327 }, 00:28:58.327 "claimed": true, 00:28:58.327 "claim_type": "exclusive_write", 00:28:58.327 "zoned": false, 00:28:58.327 "supported_io_types": { 00:28:58.327 "read": true, 00:28:58.327 "write": true, 00:28:58.327 "unmap": true, 00:28:58.327 "flush": true, 00:28:58.327 "reset": true, 00:28:58.327 "nvme_admin": false, 00:28:58.327 "nvme_io": false, 00:28:58.327 "nvme_io_md": false, 00:28:58.327 "write_zeroes": true, 00:28:58.327 "zcopy": true, 00:28:58.327 "get_zone_info": false, 00:28:58.327 "zone_management": false, 00:28:58.327 "zone_append": false, 00:28:58.327 "compare": false, 00:28:58.327 "compare_and_write": false, 00:28:58.327 "abort": true, 00:28:58.327 "seek_hole": false, 00:28:58.327 "seek_data": false, 00:28:58.327 "copy": true, 00:28:58.327 "nvme_iov_md": false 00:28:58.327 }, 00:28:58.327 "memory_domains": [ 00:28:58.327 { 00:28:58.327 "dma_device_id": "system", 00:28:58.327 "dma_device_type": 1 00:28:58.327 }, 00:28:58.327 { 00:28:58.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.327 "dma_device_type": 2 00:28:58.327 } 00:28:58.327 ], 00:28:58.327 "driver_specific": { 00:28:58.327 "passthru": { 00:28:58.327 "name": "pt1", 00:28:58.327 "base_bdev_name": "malloc1" 00:28:58.327 } 00:28:58.327 } 00:28:58.327 }' 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:58.327 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:58.586 "name": "pt2", 00:28:58.586 "aliases": [ 00:28:58.586 "00000000-0000-0000-0000-000000000002" 00:28:58.586 ], 00:28:58.586 "product_name": "passthru", 00:28:58.586 "block_size": 512, 00:28:58.586 "num_blocks": 65536, 00:28:58.586 
"uuid": "00000000-0000-0000-0000-000000000002", 00:28:58.586 "assigned_rate_limits": { 00:28:58.586 "rw_ios_per_sec": 0, 00:28:58.586 "rw_mbytes_per_sec": 0, 00:28:58.586 "r_mbytes_per_sec": 0, 00:28:58.586 "w_mbytes_per_sec": 0 00:28:58.586 }, 00:28:58.586 "claimed": true, 00:28:58.586 "claim_type": "exclusive_write", 00:28:58.586 "zoned": false, 00:28:58.586 "supported_io_types": { 00:28:58.586 "read": true, 00:28:58.586 "write": true, 00:28:58.586 "unmap": true, 00:28:58.586 "flush": true, 00:28:58.586 "reset": true, 00:28:58.586 "nvme_admin": false, 00:28:58.586 "nvme_io": false, 00:28:58.586 "nvme_io_md": false, 00:28:58.586 "write_zeroes": true, 00:28:58.586 "zcopy": true, 00:28:58.586 "get_zone_info": false, 00:28:58.586 "zone_management": false, 00:28:58.586 "zone_append": false, 00:28:58.586 "compare": false, 00:28:58.586 "compare_and_write": false, 00:28:58.586 "abort": true, 00:28:58.586 "seek_hole": false, 00:28:58.586 "seek_data": false, 00:28:58.586 "copy": true, 00:28:58.586 "nvme_iov_md": false 00:28:58.586 }, 00:28:58.586 "memory_domains": [ 00:28:58.586 { 00:28:58.586 "dma_device_id": "system", 00:28:58.586 "dma_device_type": 1 00:28:58.586 }, 00:28:58.586 { 00:28:58.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.586 "dma_device_type": 2 00:28:58.586 } 00:28:58.586 ], 00:28:58.586 "driver_specific": { 00:28:58.586 "passthru": { 00:28:58.586 "name": "pt2", 00:28:58.586 "base_bdev_name": "malloc2" 00:28:58.586 } 00:28:58.586 } 00:28:58.586 }' 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:28:58.586 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:58.845 "name": "pt3", 00:28:58.845 "aliases": [ 00:28:58.845 "00000000-0000-0000-0000-000000000003" 00:28:58.845 ], 00:28:58.845 "product_name": "passthru", 00:28:58.845 "block_size": 512, 00:28:58.845 "num_blocks": 65536, 00:28:58.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:58.845 "assigned_rate_limits": { 00:28:58.845 "rw_ios_per_sec": 0, 
00:28:58.845 "rw_mbytes_per_sec": 0, 00:28:58.845 "r_mbytes_per_sec": 0, 00:28:58.845 "w_mbytes_per_sec": 0 00:28:58.845 }, 00:28:58.845 "claimed": true, 00:28:58.845 "claim_type": "exclusive_write", 00:28:58.845 "zoned": false, 00:28:58.845 "supported_io_types": { 00:28:58.845 "read": true, 00:28:58.845 "write": true, 00:28:58.845 "unmap": true, 00:28:58.845 "flush": true, 00:28:58.845 "reset": true, 00:28:58.845 "nvme_admin": false, 00:28:58.845 "nvme_io": false, 00:28:58.845 "nvme_io_md": false, 00:28:58.845 "write_zeroes": true, 00:28:58.845 "zcopy": true, 00:28:58.845 "get_zone_info": false, 00:28:58.845 "zone_management": false, 00:28:58.845 "zone_append": false, 00:28:58.845 "compare": false, 00:28:58.845 "compare_and_write": false, 00:28:58.845 "abort": true, 00:28:58.845 "seek_hole": false, 00:28:58.845 "seek_data": false, 00:28:58.845 "copy": true, 00:28:58.845 "nvme_iov_md": false 00:28:58.845 }, 00:28:58.845 "memory_domains": [ 00:28:58.845 { 00:28:58.845 "dma_device_id": "system", 00:28:58.845 "dma_device_type": 1 00:28:58.845 }, 00:28:58.845 { 00:28:58.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.845 "dma_device_type": 2 00:28:58.845 } 00:28:58.845 ], 00:28:58.845 "driver_specific": { 00:28:58.845 "passthru": { 00:28:58.845 "name": "pt3", 00:28:58.845 "base_bdev_name": "malloc3" 00:28:58.845 } 00:28:58.845 } 00:28:58.845 }' 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:58.845 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:28:59.103 [2024-07-25 00:12:54.887027] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:59.103 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 86934dcb-26e0-403a-a079-22d134ce3618 '!=' 86934dcb-26e0-403a-a079-22d134ce3618 ']' 00:28:59.103 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid5f 00:28:59.103 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:59.103 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:28:59.103 00:12:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@508 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:59.361 [2024-07-25 00:12:55.154960] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.361 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.619 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:59.619 "name": "raid_bdev1", 00:28:59.619 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:28:59.619 "strip_size_kb": 64, 00:28:59.619 "state": "online", 00:28:59.619 "raid_level": "raid5f", 00:28:59.619 "superblock": true, 00:28:59.619 "num_base_bdevs": 3, 00:28:59.619 "num_base_bdevs_discovered": 2, 00:28:59.619 "num_base_bdevs_operational": 2, 00:28:59.619 "base_bdevs_list": [ 00:28:59.619 { 00:28:59.619 "name": null, 00:28:59.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.619 "is_configured": false, 00:28:59.619 "data_offset": 2048, 00:28:59.619 "data_size": 63488 00:28:59.619 }, 00:28:59.619 { 00:28:59.619 "name": "pt2", 00:28:59.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:59.619 "is_configured": true, 00:28:59.619 "data_offset": 2048, 00:28:59.619 "data_size": 63488 00:28:59.619 }, 00:28:59.619 { 00:28:59.619 "name": "pt3", 00:28:59.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:59.619 "is_configured": true, 00:28:59.619 "data_offset": 2048, 00:28:59.619 "data_size": 63488 00:28:59.619 } 00:28:59.619 ] 00:28:59.619 }' 00:28:59.619 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:59.619 00:12:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.878 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:00.136 [2024-07-25 00:12:55.919434] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:00.136 [2024-07-25 00:12:55.919494] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:00.136 [2024-07-25 00:12:55.919573] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:29:00.136 [2024-07-25 00:12:55.919645] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:00.136 [2024-07-25 00:12:55.919664] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline 00:29:00.136 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.136 00:12:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:29:00.394 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:29:00.395 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:29:00.395 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:29:00.395 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:29:00.395 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:00.656 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:00.656 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:29:00.656 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:00.914 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:00.914 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:29:00.914 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:29:00.914 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:29:00.914 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:01.173 [2024-07-25 00:12:56.923731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:01.173 [2024-07-25 00:12:56.924619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:01.173 [2024-07-25 00:12:56.924655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:29:01.173 [2024-07-25 00:12:56.924671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:01.173 [2024-07-25 00:12:56.927076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:01.173 [2024-07-25 00:12:56.927124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:01.173 [2024-07-25 00:12:56.927236] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:01.173 [2024-07-25 00:12:56.927292] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:01.173 pt2 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.173 00:12:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.433 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:01.433 "name": "raid_bdev1", 00:29:01.433 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:29:01.433 "strip_size_kb": 64, 00:29:01.433 "state": "configuring", 00:29:01.433 "raid_level": "raid5f", 00:29:01.433 "superblock": true, 00:29:01.433 "num_base_bdevs": 3, 00:29:01.433 "num_base_bdevs_discovered": 1, 00:29:01.433 "num_base_bdevs_operational": 2, 00:29:01.433 "base_bdevs_list": [ 00:29:01.433 { 00:29:01.433 "name": null, 00:29:01.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.433 "is_configured": false, 00:29:01.433 "data_offset": 2048, 00:29:01.433 "data_size": 63488 00:29:01.433 }, 00:29:01.433 { 00:29:01.433 "name": "pt2", 00:29:01.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:01.433 "is_configured": true, 00:29:01.433 "data_offset": 2048, 00:29:01.433 "data_size": 63488 00:29:01.433 }, 00:29:01.433 { 00:29:01.433 "name": null, 00:29:01.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:01.433 "is_configured": false, 00:29:01.433 "data_offset": 2048, 00:29:01.433 "data_size": 63488 00:29:01.433 } 00:29:01.433 ] 00:29:01.433 }' 00:29:01.433 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:01.433 00:12:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.692 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:29:01.692 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:29:01.692 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:29:01.692 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:01.951 [2024-07-25 00:12:57.700075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:01.951 [2024-07-25 00:12:57.700170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:01.951 [2024-07-25 00:12:57.700200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:29:01.951 [2024-07-25 00:12:57.700215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:01.951 [2024-07-25 
00:12:57.700628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:01.951 [2024-07-25 00:12:57.700654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:01.951 [2024-07-25 00:12:57.700738] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:01.951 [2024-07-25 00:12:57.700772] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:01.951 [2024-07-25 00:12:57.700944] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ae80 00:29:01.951 [2024-07-25 00:12:57.700966] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:01.951 [2024-07-25 00:12:57.701062] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:29:01.951 [2024-07-25 00:12:57.705540] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ae80 00:29:01.951 [2024-07-25 00:12:57.705563] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ae80 00:29:01.951 [2024-07-25 00:12:57.705872] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:01.951 pt3 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.951 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.210 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:02.210 "name": "raid_bdev1", 00:29:02.210 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:29:02.210 "strip_size_kb": 64, 00:29:02.210 "state": "online", 00:29:02.210 "raid_level": "raid5f", 00:29:02.210 "superblock": true, 00:29:02.210 "num_base_bdevs": 3, 00:29:02.210 "num_base_bdevs_discovered": 2, 00:29:02.210 "num_base_bdevs_operational": 2, 00:29:02.210 "base_bdevs_list": [ 00:29:02.210 { 00:29:02.210 "name": null, 00:29:02.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.210 "is_configured": false, 00:29:02.210 "data_offset": 2048, 00:29:02.210 "data_size": 63488 00:29:02.210 }, 00:29:02.210 { 00:29:02.210 "name": "pt2", 00:29:02.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:02.210 "is_configured": true, 
00:29:02.210 "data_offset": 2048, 00:29:02.210 "data_size": 63488 00:29:02.210 }, 00:29:02.210 { 00:29:02.210 "name": "pt3", 00:29:02.210 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:02.210 "is_configured": true, 00:29:02.210 "data_offset": 2048, 00:29:02.210 "data_size": 63488 00:29:02.210 } 00:29:02.210 ] 00:29:02.210 }' 00:29:02.210 00:12:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:02.210 00:12:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.469 00:12:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:02.728 [2024-07-25 00:12:58.442956] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:02.728 [2024-07-25 00:12:58.442994] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:02.728 [2024-07-25 00:12:58.443065] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:02.728 [2024-07-25 00:12:58.443149] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:02.728 [2024-07-25 00:12:58.443162] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ae80 name raid_bdev1, state offline 00:29:02.728 00:12:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.728 00:12:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:29:02.988 00:12:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:29:02.988 00:12:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:29:02.988 00:12:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:29:02.988 00:12:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:29:02.988 00:12:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:03.246 00:12:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:03.505 [2024-07-25 00:12:59.139230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:03.505 [2024-07-25 00:12:59.139479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:03.505 [2024-07-25 00:12:59.139822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:29:03.505 [2024-07-25 00:12:59.140037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:03.505 [2024-07-25 00:12:59.142208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:03.505 [2024-07-25 00:12:59.142449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:03.505 pt1 00:29:03.505 [2024-07-25 00:12:59.142737] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:03.505 [2024-07-25 00:12:59.142790] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:03.505 [2024-07-25 00:12:59.142986] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid 
superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:03.505 [2024-07-25 00:12:59.143004] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:03.505 [2024-07-25 00:12:59.143024] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:29:03.505 [2024-07-25 00:12:59.143087] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.505 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.764 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:03.764 "name": "raid_bdev1", 00:29:03.764 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:29:03.764 "strip_size_kb": 64, 00:29:03.764 "state": "configuring", 00:29:03.764 "raid_level": "raid5f", 00:29:03.764 "superblock": true, 00:29:03.764 "num_base_bdevs": 3, 00:29:03.764 "num_base_bdevs_discovered": 1, 00:29:03.764 "num_base_bdevs_operational": 2, 00:29:03.764 "base_bdevs_list": [ 00:29:03.764 { 00:29:03.764 "name": null, 00:29:03.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.764 "is_configured": false, 00:29:03.764 "data_offset": 2048, 00:29:03.764 "data_size": 63488 00:29:03.764 }, 00:29:03.764 { 00:29:03.764 "name": "pt2", 00:29:03.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:03.764 "is_configured": true, 00:29:03.764 "data_offset": 2048, 00:29:03.764 "data_size": 63488 00:29:03.764 }, 00:29:03.764 { 00:29:03.764 "name": null, 00:29:03.764 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:03.764 "is_configured": false, 00:29:03.764 "data_offset": 2048, 00:29:03.764 "data_size": 63488 00:29:03.764 } 00:29:03.764 ] 00:29:03.764 }' 00:29:03.764 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:03.764 00:12:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:04.023 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:29:04.023 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:04.282 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:29:04.282 00:12:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:04.541 [2024-07-25 00:13:00.207639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:04.541 [2024-07-25 00:13:00.208165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.541 [2024-07-25 00:13:00.208205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:29:04.541 [2024-07-25 00:13:00.208221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.541 [2024-07-25 00:13:00.208689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.541 [2024-07-25 00:13:00.208711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:04.541 [2024-07-25 00:13:00.208800] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:04.541 [2024-07-25 00:13:00.208826] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:04.541 [2024-07-25 00:13:00.208960] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:29:04.541 [2024-07-25 00:13:00.208974] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:04.541 [2024-07-25 00:13:00.209077] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:29:04.541 [2024-07-25 00:13:00.213425] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:29:04.541 [2024-07-25 00:13:00.213452] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:29:04.541 [2024-07-25 00:13:00.213680] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:04.541 pt3 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.541 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.800 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:04.800 "name": "raid_bdev1", 00:29:04.800 "uuid": "86934dcb-26e0-403a-a079-22d134ce3618", 00:29:04.800 "strip_size_kb": 64, 00:29:04.800 "state": "online", 00:29:04.800 "raid_level": "raid5f", 00:29:04.800 "superblock": true, 00:29:04.801 "num_base_bdevs": 3, 00:29:04.801 "num_base_bdevs_discovered": 2, 00:29:04.801 "num_base_bdevs_operational": 2, 00:29:04.801 "base_bdevs_list": [ 00:29:04.801 { 00:29:04.801 "name": null, 00:29:04.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.801 "is_configured": false, 00:29:04.801 "data_offset": 2048, 00:29:04.801 "data_size": 63488 00:29:04.801 }, 00:29:04.801 { 00:29:04.801 "name": "pt2", 00:29:04.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:04.801 "is_configured": true, 00:29:04.801 "data_offset": 2048, 00:29:04.801 "data_size": 63488 00:29:04.801 }, 00:29:04.801 { 00:29:04.801 "name": "pt3", 00:29:04.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:04.801 "is_configured": true, 00:29:04.801 "data_offset": 2048, 00:29:04.801 "data_size": 63488 00:29:04.801 } 00:29:04.801 ] 00:29:04.801 }' 00:29:04.801 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:04.801 00:13:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:05.060 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:29:05.060 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:05.060 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:29:05.060 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:05.060 00:13:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:29:05.319 [2024-07-25 00:13:01.175021] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:05.578 00:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 86934dcb-26e0-403a-a079-22d134ce3618 '!=' 86934dcb-26e0-403a-a079-22d134ce3618 ']' 00:29:05.578 00:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 103960 00:29:05.578 00:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 103960 ']' 00:29:05.578 00:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 103960 00:29:05.578 00:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:29:05.578 00:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:05.578 00:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103960 00:29:05.578 killing process with pid 103960 00:29:05.578 00:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:05.578 00:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:05.578 00:13:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103960' 00:29:05.578 00:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 103960 00:29:05.578 [2024-07-25 00:13:01.225490] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:05.578 00:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 103960 00:29:05.578 [2024-07-25 00:13:01.225575] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:05.578 [2024-07-25 00:13:01.225639] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:05.578 [2024-07-25 00:13:01.225660] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:29:05.578 [2024-07-25 00:13:01.421971] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:06.515 00:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:29:06.515 ************************************ 00:29:06.515 END TEST raid5f_superblock_test 00:29:06.515 ************************************ 00:29:06.515 00:29:06.515 real 0m17.917s 00:29:06.515 user 0m31.067s 00:29:06.515 sys 0m2.940s 00:29:06.515 00:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:06.515 00:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:06.775 00:13:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # '[' true = true ']' 00:29:06.775 00:13:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:29:06.775 00:13:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:06.775 00:13:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:06.775 00:13:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:06.775 ************************************ 00:29:06.775 START TEST raid5f_rebuild_test 00:29:06.775 ************************************ 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=3 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs 
)) 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:06.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=104612 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 104612 /var/tmp/spdk-raid.sock 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 104612 ']' 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:06.775 00:13:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:06.775 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:06.775 Zero copy mechanism will not be used. 00:29:06.775 [2024-07-25 00:13:02.501376] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
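At this point the harness has launched bdevperf with `-z` (wait for RPCs) against the UNIX socket /var/tmp/spdk-raid.sock, and everything that follows is driven through scripts/rpc.py. For readers following the trace, here is a minimal standalone sketch of the setup pattern the test executes next; the socket path, bdev names, sizes, and RPC subcommands are taken verbatim from this run, while the RPC shell variable and the loop are shorthand introduced only for this illustration — it is a sketch of the pattern, not part of the test suite.

  # Assumes a bdevperf instance is already listening on the RPC socket,
  # as launched above with: bdevperf -r /var/tmp/spdk-raid.sock ... -z -L bdev_raid
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Back each base bdev with a 32 MiB malloc device (512-byte blocks), then
  # wrap it in a passthru bdev — the BaseBdevN_malloc/BaseBdevN pairs the
  # trace creates in the steps that follow.
  for i in 1 2 3; do
      $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
      $RPC bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done

  # Assemble a raid5f volume with a 64 KiB strip size over the three passthrus.
  $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1

  # Confirm the array came online, the same check the test's
  # verify_raid_bdev_state helper performs via jq.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

The same get-bdevs-plus-jq query is what the verify_raid_bdev_state calls throughout this log reduce to: fetch all raid bdevs, select raid_bdev1, and compare state, strip size, and base-bdev counts against the expected values.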
00:29:06.775 [2024-07-25 00:13:02.501566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104612 ] 00:29:07.035 [2024-07-25 00:13:02.676650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.294 [2024-07-25 00:13:02.917110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.294 [2024-07-25 00:13:03.062304] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:07.861 00:13:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.861 00:13:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:29:07.861 00:13:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:07.861 00:13:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:07.861 BaseBdev1_malloc 00:29:07.861 00:13:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:08.119 [2024-07-25 00:13:03.961553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:08.119 [2024-07-25 00:13:03.961648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:08.119 [2024-07-25 00:13:03.961680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:29:08.119 [2024-07-25 00:13:03.961696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:08.119 [2024-07-25 00:13:03.964070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:08.119 [2024-07-25 00:13:03.964112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:08.119 BaseBdev1 00:29:08.119 00:13:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:08.119 00:13:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:08.378 BaseBdev2_malloc 00:29:08.378 00:13:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:08.636 [2024-07-25 00:13:04.388793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:08.636 [2024-07-25 00:13:04.388906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:08.636 [2024-07-25 00:13:04.388934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:29:08.636 [2024-07-25 00:13:04.388952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:08.636 [2024-07-25 00:13:04.391120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:08.636 [2024-07-25 00:13:04.391161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:08.636 BaseBdev2 00:29:08.636 00:13:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev 
in "${base_bdevs[@]}" 00:29:08.636 00:13:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:08.894 BaseBdev3_malloc 00:29:08.894 00:13:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:09.152 [2024-07-25 00:13:04.801959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:09.152 [2024-07-25 00:13:04.802043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:09.152 [2024-07-25 00:13:04.802071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:29:09.152 [2024-07-25 00:13:04.802087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:09.152 [2024-07-25 00:13:04.804382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:09.152 [2024-07-25 00:13:04.804438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:09.152 BaseBdev3 00:29:09.152 00:13:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:09.411 spare_malloc 00:29:09.411 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:09.411 spare_delay 00:29:09.411 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:09.669 [2024-07-25 00:13:05.396288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:09.669 [2024-07-25 00:13:05.396343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:09.669 [2024-07-25 00:13:05.396367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:29:09.669 [2024-07-25 00:13:05.396381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:09.669 [2024-07-25 00:13:05.398473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:09.669 [2024-07-25 00:13:05.398512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:09.669 spare 00:29:09.669 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:29:09.928 [2024-07-25 00:13:05.640442] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:09.928 [2024-07-25 00:13:05.642420] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:09.928 [2024-07-25 00:13:05.642517] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:09.928 [2024-07-25 00:13:05.642640] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80 00:29:09.928 [2024-07-25 00:13:05.642656] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:09.928 [2024-07-25 00:13:05.642888] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00
00:29:09.928 [2024-07-25 00:13:05.648265] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80
00:29:09.928 [2024-07-25 00:13:05.648332] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80
00:29:09.928 [2024-07-25 00:13:05.648614] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:09.928 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:10.186 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:29:10.186 "name": "raid_bdev1",
00:29:10.186 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:10.186 "strip_size_kb": 64,
00:29:10.186 "state": "online",
00:29:10.186 "raid_level": "raid5f",
00:29:10.186 "superblock": false,
00:29:10.186 "num_base_bdevs": 3,
00:29:10.186 "num_base_bdevs_discovered": 3,
00:29:10.186 "num_base_bdevs_operational": 3,
00:29:10.186 "base_bdevs_list": [
00:29:10.186 {
00:29:10.186 "name": "BaseBdev1",
00:29:10.186 "uuid": "56f1ca0b-8e12-576e-b85b-cc1b3fc0cc67",
00:29:10.186 "is_configured": true,
00:29:10.186 "data_offset": 0,
00:29:10.186 "data_size": 65536
00:29:10.186 },
00:29:10.186 {
00:29:10.186 "name": "BaseBdev2",
00:29:10.186 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:10.186 "is_configured": true,
00:29:10.186 "data_offset": 0,
00:29:10.186 "data_size": 65536
00:29:10.186 },
00:29:10.186 {
00:29:10.186 "name": "BaseBdev3",
00:29:10.186 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:10.186 "is_configured": true,
00:29:10.186 "data_offset": 0,
00:29:10.186 "data_size": 65536
00:29:10.186 }
00:29:10.186 ]
00:29:10.186 }'
00:29:10.186 00:13:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:29:10.186 00:13:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:29:10.452 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:29:10.452 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks'
00:29:10.729 [2024-07-25 00:13:06.426004] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:29:10.729 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=131072
00:29:10.729 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:10.729 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']'
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']'
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:29:10.987 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:29:10.987 [2024-07-25 00:13:06.834033] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0
00:29:10.987 /dev/nbd0
00:29:11.246 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:29:11.246 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:11.247 1+0 records in
00:29:11.247 1+0 records out
00:29:11.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263648 s, 15.5 MB/s
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']'
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # write_unit_size=256
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # echo 128
00:29:11.247 00:13:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
00:29:11.505 512+0 records in
00:29:11.505 512+0 records out
00:29:11.505 67108864 bytes (67 MB, 64 MiB) copied, 0.390099 s, 172 MB/s
00:29:11.505 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:29:11.505 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:29:11.505 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:29:11.505 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:11.505 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:29:11.505 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:11.505 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:29:11.763 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:11.764 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:11.764 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:11.764 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:11.764 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:11.764 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:11.764 [2024-07-25 00:13:07.481019] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:29:11.764 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:29:11.764 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:29:11.764 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:29:12.022 [2024-07-25 00:13:07.670601] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:12.022 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:12.280 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:29:12.280 "name": "raid_bdev1",
00:29:12.280 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:12.280 "strip_size_kb": 64,
00:29:12.280 "state": "online",
00:29:12.280 "raid_level": "raid5f",
00:29:12.280 "superblock": false,
00:29:12.280 "num_base_bdevs": 3,
00:29:12.280 "num_base_bdevs_discovered": 2,
00:29:12.280 "num_base_bdevs_operational": 2,
00:29:12.280 "base_bdevs_list": [
00:29:12.280 {
00:29:12.280 "name": null,
00:29:12.280 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:12.280 "is_configured": false,
00:29:12.280 "data_offset": 0,
00:29:12.280 "data_size": 65536
00:29:12.280 },
00:29:12.280 {
00:29:12.280 "name": "BaseBdev2",
00:29:12.280 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:12.280 "is_configured": true,
00:29:12.280 "data_offset": 0,
00:29:12.280 "data_size": 65536
00:29:12.280 },
00:29:12.280 {
00:29:12.280 "name": "BaseBdev3",
00:29:12.280 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:12.280 "is_configured": true,
00:29:12.280 "data_offset": 0,
00:29:12.280 "data_size": 65536
00:29:12.280 }
00:29:12.280 ]
00:29:12.280 }'
00:29:12.280 00:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:29:12.280 00:13:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:29:12.539 00:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:29:12.797 [2024-07-25 00:13:08.498823] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:29:12.797 [2024-07-25 00:13:08.509522] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b1a0
00:29:12.797 [2024-07-25 00:13:08.515389] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:29:12.797 00:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1
00:29:13.733 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:29:13.733 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:29:13.733 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:29:13.733 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:29:13.733 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:29:13.733 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:13.733 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:13.991 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:29:13.991 "name": "raid_bdev1",
00:29:13.991 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:13.991 "strip_size_kb": 64,
00:29:13.991 "state": "online",
00:29:13.991 "raid_level": "raid5f",
00:29:13.991 "superblock": false,
00:29:13.991 "num_base_bdevs": 3,
00:29:13.991 "num_base_bdevs_discovered": 3,
00:29:13.991 "num_base_bdevs_operational": 3,
00:29:13.991 "process": {
00:29:13.991 "type": "rebuild",
00:29:13.991 "target": "spare",
00:29:13.991 "progress": {
00:29:13.991 "blocks": 24576,
00:29:13.991 "percent": 18
00:29:13.991 }
00:29:13.991 },
00:29:13.991 "base_bdevs_list": [
00:29:13.991 {
00:29:13.991 "name": "spare",
00:29:13.991 "uuid": "9e6e1374-6d28-52cf-90ca-12676ac7c185",
00:29:13.991 "is_configured": true,
00:29:13.991 "data_offset": 0,
00:29:13.991 "data_size": 65536
00:29:13.991 },
00:29:13.991 {
00:29:13.991 "name": "BaseBdev2",
00:29:13.991 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:13.991 "is_configured": true,
00:29:13.991 "data_offset": 0,
00:29:13.991 "data_size": 65536
00:29:13.991 },
00:29:13.991 {
00:29:13.991 "name": "BaseBdev3",
00:29:13.991 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:13.991 "is_configured": true,
00:29:13.991 "data_offset": 0,
00:29:13.991 "data_size": 65536
00:29:13.991 }
00:29:13.991 ]
00:29:13.991 }'
00:29:13.991 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:29:13.991 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:29:13.991 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:29:13.991 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:29:13.991 00:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:29:14.250 [2024-07-25 00:13:10.024788] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:29:14.250 [2024-07-25 00:13:10.028360] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:29:14.250 [2024-07-25 00:13:10.028493] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:29:14.250 [2024-07-25 00:13:10.028546] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:29:14.250 [2024-07-25 00:13:10.028571] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:14.250 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:14.509 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:29:14.509 "name": "raid_bdev1",
00:29:14.509 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:14.509 "strip_size_kb": 64,
00:29:14.509 "state": "online",
00:29:14.509 "raid_level": "raid5f",
00:29:14.509 "superblock": false,
00:29:14.509 "num_base_bdevs": 3,
00:29:14.509 "num_base_bdevs_discovered": 2,
00:29:14.509 "num_base_bdevs_operational": 2,
00:29:14.509 "base_bdevs_list": [
00:29:14.509 {
00:29:14.509 "name": null,
00:29:14.509 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:14.509 "is_configured": false,
00:29:14.509 "data_offset": 0,
00:29:14.509 "data_size": 65536
00:29:14.509 },
00:29:14.509 {
00:29:14.509 "name": "BaseBdev2",
00:29:14.509 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:14.509 "is_configured": true,
00:29:14.509 "data_offset": 0,
00:29:14.509 "data_size": 65536
00:29:14.509 },
00:29:14.509 {
00:29:14.509 "name": "BaseBdev3",
00:29:14.509 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:14.509 "is_configured": true,
00:29:14.509 "data_offset": 0,
00:29:14.509 "data_size": 65536
00:29:14.509 }
00:29:14.509 ]
00:29:14.509 }'
00:29:14.509 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:29:14.509 00:13:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:29:14.768 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none
00:29:14.768 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:29:14.768 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:29:14.768 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none
00:29:14.768 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:29:14.768 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:14.768 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:15.027 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:29:15.027 "name": "raid_bdev1",
00:29:15.027 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:15.027 "strip_size_kb": 64,
00:29:15.027 "state": "online",
00:29:15.027 "raid_level": "raid5f",
00:29:15.027 "superblock": false,
00:29:15.027 "num_base_bdevs": 3,
00:29:15.027 "num_base_bdevs_discovered": 2,
00:29:15.027 "num_base_bdevs_operational": 2,
00:29:15.027 "base_bdevs_list": [
00:29:15.027 {
00:29:15.027 "name": null,
00:29:15.027 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:15.027 "is_configured": false,
00:29:15.027 "data_offset": 0,
00:29:15.027 "data_size": 65536
00:29:15.027 },
00:29:15.027 {
00:29:15.027 "name": "BaseBdev2",
00:29:15.027 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:15.027 "is_configured": true,
00:29:15.027 "data_offset": 0,
00:29:15.027 "data_size": 65536
00:29:15.027 },
00:29:15.027 {
00:29:15.027 "name": "BaseBdev3",
00:29:15.027 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:15.027 "is_configured": true,
00:29:15.027 "data_offset": 0,
00:29:15.027 "data_size": 65536
00:29:15.027 }
00:29:15.027 ]
00:29:15.027 }'
00:29:15.027 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:29:15.027 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:29:15.027 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:29:15.027 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:29:15.027 00:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:29:15.286 [2024-07-25 00:13:10.993197] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:29:15.286 [2024-07-25 00:13:11.003119] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b270
00:29:15.286 [2024-07-25 00:13:11.009005] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:29:15.286 00:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1
00:29:16.224 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:29:16.224 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:29:16.224 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:29:16.224 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:29:16.224 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:29:16.224 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:16.224 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:16.483 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:29:16.483 "name": "raid_bdev1",
00:29:16.483 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:16.483 "strip_size_kb": 64,
00:29:16.483 "state": "online",
00:29:16.483 "raid_level": "raid5f",
00:29:16.483 "superblock": false,
00:29:16.483 "num_base_bdevs": 3,
00:29:16.483 "num_base_bdevs_discovered": 3,
00:29:16.483 "num_base_bdevs_operational": 3,
00:29:16.483 "process": {
00:29:16.483 "type": "rebuild",
00:29:16.483 "target": "spare",
00:29:16.483 "progress": {
00:29:16.483 "blocks": 24576,
00:29:16.483 "percent": 18
00:29:16.483 }
00:29:16.483 },
00:29:16.483 "base_bdevs_list": [
00:29:16.483 {
00:29:16.483 "name": "spare",
00:29:16.483 "uuid": "9e6e1374-6d28-52cf-90ca-12676ac7c185",
00:29:16.483 "is_configured": true,
00:29:16.483 "data_offset": 0,
00:29:16.483 "data_size": 65536
00:29:16.483 },
00:29:16.483 {
00:29:16.483 "name": "BaseBdev2",
00:29:16.483 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:16.483 "is_configured": true,
00:29:16.483 "data_offset": 0,
00:29:16.483 "data_size": 65536
00:29:16.483 },
00:29:16.483 {
00:29:16.483 "name": "BaseBdev3",
00:29:16.483 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:16.483 "is_configured": true,
00:29:16.483 "data_offset": 0,
00:29:16.483 "data_size": 65536
00:29:16.483 }
00:29:16.483 ]
00:29:16.483 }'
00:29:16.483 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:29:16.483 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']'
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=3
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']'
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=982
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:16.484 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:16.743 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:29:16.743 "name": "raid_bdev1",
00:29:16.743 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:16.743 "strip_size_kb": 64,
00:29:16.743 "state": "online",
00:29:16.743 "raid_level": "raid5f",
00:29:16.743 "superblock": false,
00:29:16.743 "num_base_bdevs": 3,
00:29:16.743 "num_base_bdevs_discovered": 3,
00:29:16.743 "num_base_bdevs_operational": 3,
00:29:16.743 "process": {
00:29:16.743 "type": "rebuild",
00:29:16.743 "target": "spare",
00:29:16.743 "progress": {
00:29:16.743 "blocks": 28672,
00:29:16.743 "percent": 21
00:29:16.743 }
00:29:16.743 },
00:29:16.743 "base_bdevs_list": [
00:29:16.743 {
00:29:16.743 "name": "spare",
00:29:16.743 "uuid": "9e6e1374-6d28-52cf-90ca-12676ac7c185",
"is_configured": true,
00:29:16.743 "data_offset": 0,
00:29:16.743 "data_size": 65536
00:29:16.743 },
00:29:16.743 {
00:29:16.743 "name": "BaseBdev2",
00:29:16.743 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:16.743 "is_configured": true,
00:29:16.743 "data_offset": 0,
00:29:16.743 "data_size": 65536
00:29:16.743 },
00:29:16.743 {
00:29:16.743 "name": "BaseBdev3",
00:29:16.743 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:16.743 "is_configured": true,
00:29:16.743 "data_offset": 0,
00:29:16.743 "data_size": 65536
00:29:16.743 }
00:29:16.743 ]
00:29:16.743 }'
00:29:16.743 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:29:16.743 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:29:16.743 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:29:16.743 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:29:16.743 00:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1
00:29:17.678 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:29:17.678 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:29:17.678 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:29:17.678 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:29:17.678 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:29:17.678 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:29:17.678 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:17.678 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:17.936 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:29:17.936 "name": "raid_bdev1",
00:29:17.936 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:17.936 "strip_size_kb": 64,
00:29:17.936 "state": "online",
00:29:17.936 "raid_level": "raid5f",
00:29:17.936 "superblock": false,
00:29:17.936 "num_base_bdevs": 3,
00:29:17.936 "num_base_bdevs_discovered": 3,
00:29:17.936 "num_base_bdevs_operational": 3,
00:29:17.936 "process": {
00:29:17.936 "type": "rebuild",
00:29:17.936 "target": "spare",
00:29:17.936 "progress": {
00:29:17.936 "blocks": 55296,
00:29:17.936 "percent": 42
00:29:17.936 }
00:29:17.936 },
00:29:17.936 "base_bdevs_list": [
00:29:17.936 {
00:29:17.936 "name": "spare",
00:29:17.936 "uuid": "9e6e1374-6d28-52cf-90ca-12676ac7c185",
00:29:17.936 "is_configured": true,
00:29:17.936 "data_offset": 0,
00:29:17.936 "data_size": 65536
00:29:17.936 },
00:29:17.936 {
00:29:17.936 "name": "BaseBdev2",
00:29:17.936 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:17.936 "is_configured": true,
00:29:17.936 "data_offset": 0,
00:29:17.936 "data_size": 65536
00:29:17.936 },
00:29:17.936 {
00:29:17.936 "name": "BaseBdev3",
00:29:17.936 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:17.936 "is_configured": true,
00:29:17.936 "data_offset": 0,
00:29:17.936 "data_size": 65536
00:29:17.936 }
00:29:17.936 ]
00:29:17.936 }'
00:29:17.936 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:29:17.936 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:29:17.936 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:29:17.936 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:29:17.936 00:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1
00:29:19.310 00:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:29:19.310 00:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:29:19.310 00:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:29:19.310 00:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:29:19.310 00:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:29:19.310 00:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:29:19.310 00:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:19.310 00:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:19.310 00:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:29:19.310 "name": "raid_bdev1",
00:29:19.310 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:19.310 "strip_size_kb": 64,
00:29:19.310 "state": "online",
00:29:19.310 "raid_level": "raid5f",
00:29:19.310 "superblock": false,
00:29:19.310 "num_base_bdevs": 3,
00:29:19.310 "num_base_bdevs_discovered": 3,
00:29:19.310 "num_base_bdevs_operational": 3,
00:29:19.310 "process": {
00:29:19.310 "type": "rebuild",
00:29:19.310 "target": "spare",
00:29:19.310 "progress": {
00:29:19.310 "blocks": 79872,
00:29:19.310 "percent": 60
00:29:19.310 }
00:29:19.310 },
00:29:19.310 "base_bdevs_list": [
00:29:19.310 {
00:29:19.310 "name": "spare",
00:29:19.310 "uuid": "9e6e1374-6d28-52cf-90ca-12676ac7c185",
00:29:19.310 "is_configured": true,
00:29:19.310 "data_offset": 0,
00:29:19.310 "data_size": 65536
00:29:19.310 },
00:29:19.310 {
00:29:19.310 "name": "BaseBdev2",
00:29:19.310 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:19.310 "is_configured": true,
00:29:19.310 "data_offset": 0,
00:29:19.310 "data_size": 65536
00:29:19.310 },
00:29:19.310 {
00:29:19.310 "name": "BaseBdev3",
00:29:19.310 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:19.310 "is_configured": true,
00:29:19.310 "data_offset": 0,
00:29:19.310 "data_size": 65536
00:29:19.310 }
00:29:19.310 ]
00:29:19.310 }'
00:29:19.311 00:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:29:19.311 00:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:29:19.311 00:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:29:19.311 00:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:29:19.311 00:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1
00:29:20.245 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:29:20.245 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:29:20.245 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:29:20.245 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:29:20.245 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:29:20.245 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:29:20.245 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:20.245 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:20.503 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:29:20.503 "name": "raid_bdev1",
00:29:20.503 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:20.503 "strip_size_kb": 64,
00:29:20.503 "state": "online",
00:29:20.503 "raid_level": "raid5f",
00:29:20.503 "superblock": false,
00:29:20.503 "num_base_bdevs": 3,
00:29:20.503 "num_base_bdevs_discovered": 3,
00:29:20.503 "num_base_bdevs_operational": 3,
00:29:20.503 "process": {
00:29:20.503 "type": "rebuild",
00:29:20.503 "target": "spare",
00:29:20.503 "progress": {
00:29:20.503 "blocks": 106496,
00:29:20.503 "percent": 81
00:29:20.503 }
00:29:20.503 },
00:29:20.503 "base_bdevs_list": [
00:29:20.503 {
00:29:20.503 "name": "spare",
00:29:20.503 "uuid": "9e6e1374-6d28-52cf-90ca-12676ac7c185",
00:29:20.503 "is_configured": true,
00:29:20.503 "data_offset": 0,
00:29:20.503 "data_size": 65536
00:29:20.503 },
00:29:20.503 {
00:29:20.503 "name": "BaseBdev2",
00:29:20.503 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:20.503 "is_configured": true,
00:29:20.503 "data_offset": 0,
00:29:20.503 "data_size": 65536
00:29:20.503 },
00:29:20.503 {
00:29:20.503 "name": "BaseBdev3",
00:29:20.503 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:20.503 "is_configured": true,
00:29:20.503 "data_offset": 0,
00:29:20.503 "data_size": 65536
00:29:20.503 }
00:29:20.503 ]
00:29:20.503 }'
00:29:20.503 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:29:20.503 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:29:20.504 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:29:20.504 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:29:20.504 00:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:21.879 [2024-07-25 00:13:17.461794] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:29:21.879 [2024-07-25 00:13:17.462080] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:29:21.879 [2024-07-25 00:13:17.462161] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:29:21.879 "name": "raid_bdev1",
00:29:21.879 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:21.879 "strip_size_kb": 64,
00:29:21.879 "state": "online",
00:29:21.879 "raid_level": "raid5f",
00:29:21.879 "superblock": false,
00:29:21.879 "num_base_bdevs": 3,
00:29:21.879 "num_base_bdevs_discovered": 3,
00:29:21.879 "num_base_bdevs_operational": 3,
00:29:21.879 "base_bdevs_list": [
00:29:21.879 {
00:29:21.879 "name": "spare",
00:29:21.879 "uuid": "9e6e1374-6d28-52cf-90ca-12676ac7c185",
00:29:21.879 "is_configured": true,
00:29:21.879 "data_offset": 0,
00:29:21.879 "data_size": 65536
00:29:21.879 },
00:29:21.879 {
00:29:21.879 "name": "BaseBdev2",
00:29:21.879 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:21.879 "is_configured": true,
00:29:21.879 "data_offset": 0,
00:29:21.879 "data_size": 65536
00:29:21.879 },
00:29:21.879 {
00:29:21.879 "name": "BaseBdev3",
00:29:21.879 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:21.879 "is_configured": true,
00:29:21.879 "data_offset": 0,
00:29:21.879 "data_size": 65536
00:29:21.879 }
00:29:21.879 ]
00:29:21.879 }'
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]]
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]]
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@724 -- # break
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:21.879 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:22.137 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:29:22.137 "name": "raid_bdev1",
00:29:22.137 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:22.137 "strip_size_kb": 64,
00:29:22.137 "state": "online",
00:29:22.137 "raid_level": "raid5f",
00:29:22.137 "superblock": false,
00:29:22.137 "num_base_bdevs": 3,
00:29:22.137 "num_base_bdevs_discovered": 3,
00:29:22.137 "num_base_bdevs_operational": 3,
00:29:22.137 "base_bdevs_list": [
00:29:22.137 {
00:29:22.137 "name": "spare",
00:29:22.137 "uuid": "9e6e1374-6d28-52cf-90ca-12676ac7c185",
00:29:22.137 "is_configured": true,
00:29:22.137 "data_offset": 0,
00:29:22.137 "data_size": 65536
00:29:22.137 },
00:29:22.137 {
00:29:22.137 "name": "BaseBdev2",
00:29:22.137 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:22.137 "is_configured": true,
00:29:22.137 "data_offset": 0,
00:29:22.137 "data_size": 65536
00:29:22.137 },
00:29:22.137 {
00:29:22.137 "name": "BaseBdev3",
00:29:22.137 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:22.137 "is_configured": true,
00:29:22.137 "data_offset": 0,
00:29:22.137 "data_size": 65536
00:29:22.137 }
00:29:22.137 ]
00:29:22.137 }'
00:29:22.137 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:29:22.137 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:29:22.137 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:29:22.137 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:29:22.137 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:29:22.137 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:29:22.137 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:29:22.137 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:29:22.137 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:29:22.138 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:29:22.138 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:29:22.138 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:29:22.138 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:29:22.138 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:29:22.138 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:22.138 00:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:22.396 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:29:22.396 "name": "raid_bdev1",
00:29:22.396 "uuid": "1d9569bf-67c4-480e-b9d1-b298c84233d2",
00:29:22.396 "strip_size_kb": 64,
00:29:22.396 "state": "online",
00:29:22.396 "raid_level": "raid5f",
00:29:22.396 "superblock": false,
00:29:22.396 "num_base_bdevs": 3,
00:29:22.396 "num_base_bdevs_discovered": 3,
00:29:22.396 "num_base_bdevs_operational": 3,
00:29:22.396 "base_bdevs_list": [
00:29:22.396 {
00:29:22.396 "name": "spare",
00:29:22.396 "uuid": "9e6e1374-6d28-52cf-90ca-12676ac7c185",
00:29:22.396 "is_configured": true,
00:29:22.396 "data_offset": 0,
00:29:22.396 "data_size": 65536
00:29:22.396 },
00:29:22.396 {
00:29:22.396 "name": "BaseBdev2",
00:29:22.396 "uuid": "a07aa462-d188-598c-910a-6677677aa90c",
00:29:22.396 "is_configured": true,
00:29:22.396 "data_offset": 0,
00:29:22.396 "data_size": 65536
},
00:29:22.396 {
00:29:22.396 "name": "BaseBdev3",
00:29:22.396 "uuid": "f5472b94-3e5b-5c61-aa22-9cb92c6b1a95",
00:29:22.396 "is_configured": true,
00:29:22.396 "data_offset": 0,
00:29:22.396 "data_size": 65536
00:29:22.396 }
00:29:22.396 ]
00:29:22.396 }'
00:29:22.396 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:29:22.396 00:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:29:22.654 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:29:22.654 [2024-07-25 00:13:18.517732] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:29:22.654 [2024-07-25 00:13:18.517765] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:29:22.654 [2024-07-25 00:13:18.517858] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:29:22.654 [2024-07-25 00:13:18.517965] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:29:22.654 [2024-07-25 00:13:18.517987] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline
00:29:22.912 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length
00:29:22.912 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]]
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']'
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']'
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:29:23.170 /dev/nbd0
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:29:23.170 00:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:23.170 1+0 records in
00:29:23.170 1+0 records out
00:29:23.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277538 s, 14.8 MB/s
00:29:23.170 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:23.170 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:29:23.170 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:23.170 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:29:23.170 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:29:23.170 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:23.170 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:23.170 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:29:23.429 /dev/nbd1
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:23.429 1+0 records in
00:29:23.429 1+0 records out
00:29:23.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411469 s, 10.0 MB/s
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:23.429 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:29:23.688 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:29:23.688 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:29:23.688 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:23.688 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:23.688 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:29:23.688 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:23.688 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:29:23.946 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:23.946 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:23.946 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:23.946 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:23.946 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:23.946 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:23.946 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:29:23.946 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:29:23.946 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:23.946 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']'
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 104612
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 104612 ']'
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 104612
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104612
killing process with pid 104612
Received shutdown signal, test time was about 60.000000 seconds
00:29:24.205
00:29:24.205 Latency(us)
00:29:24.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.205 ===================================================================================================================
00:29:24.205 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104612'
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 104612
00:29:24.205 [2024-07-25 00:13:19.913676] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:29:24.205 00:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 104612
00:29:24.463 [2024-07-25 00:13:20.176785] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0
00:29:25.400
00:29:25.400 real 0m18.668s
00:29:25.400 user 0m26.573s
00:29:25.400 sys 0m2.453s
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:29:25.400 ************************************
00:29:25.400 END TEST raid5f_rebuild_test
00:29:25.400 ************************************
00:29:25.400 00:13:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true
00:29:25.400 00:13:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:29:25.400 00:13:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:25.400 00:13:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:29:25.400 ************************************
00:29:25.400 START TEST raid5f_rebuild_test_sb
00:29:25.400 ************************************
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=3
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 ))
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']'
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # '[' false = true ']'
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # strip_size=64
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64'
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']'
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s'
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=105104
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 105104 /var/tmp/spdk-raid.sock
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 105104 ']'
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:25.400 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:29:25.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:29:25.401 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:25.401 00:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:29:25.401 [2024-07-25 00:13:21.225868] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
00:29:25.401 [2024-07-25 00:13:21.226237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105104 ]
00:29:25.401 I/O size of 3145728 is greater than zero copy threshold (65536).
00:29:25.401 Zero copy mechanism will not be used.
00:29:25.401 [2024-07-25 00:13:21.397032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:25.660 [2024-07-25 00:13:21.549170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:25.919 [2024-07-25 00:13:21.692725] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:29:26.486 00:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:26.486 00:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
00:29:26.486 00:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:29:26.486 00:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:29:26.745 BaseBdev1_malloc
00:29:26.745 00:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:29:26.745 [2024-07-25 00:13:22.599737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:29:26.745 [2024-07-25 00:13:22.599875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:29:26.745 [2024-07-25 00:13:22.599909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80
00:29:26.745 [2024-07-25 00:13:22.599926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:29:26.745 [2024-07-25 00:13:22.602195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:29:26.745 [2024-07-25 00:13:22.602386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:29:26.745 BaseBdev1
00:29:27.003 00:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:29:27.003 00:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:29:27.003 BaseBdev2_malloc
00:29:27.262 00:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:29:27.262 [2024-07-25 00:13:23.111208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:29:27.262 [2024-07-25 00:13:23.111423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:29:27.262 [2024-07-25 00:13:23.111495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880
00:29:27.262 [2024-07-25 00:13:23.111653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:29:27.262 [2024-07-25 00:13:23.113756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:29:27.262 [2024-07-25 00:13:23.113993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:29:27.262 BaseBdev2
00:29:27.262 00:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:29:27.262 00:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:29:27.521 BaseBdev3_malloc
00:29:27.521 00:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:29:27.778 [2024-07-25 00:13:23.515619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:29:27.778 [2024-07-25 00:13:23.515707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:29:27.778 [2024-07-25 00:13:23.515738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480
00:29:27.778 [2024-07-25 00:13:23.515754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:29:27.778 [2024-07-25 00:13:23.518034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:29:27.778 [2024-07-25 00:13:23.518079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:29:27.778 BaseBdev3
00:29:28.036 00:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:29:28.036 spare_malloc
00:29:28.294 00:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:29:28.294 spare_delay
00:29:28.554 00:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:29:28.554 [2024-07-25 00:13:24.208328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:29:28.554 [2024-07-25 00:13:24.208542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:29:28.554 [2024-07-25 00:13:24.208581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680
00:29:28.554 [2024-07-25 00:13:24.208598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:29:28.554 [2024-07-25 00:13:24.210822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:29:28.554 [2024-07-25 00:13:24.210865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:29:28.554 spare
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
00:29:28.554 [2024-07-25 00:13:24.392421] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:29:28.554 [2024-07-25 00:13:24.394306] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:29:28.554 [2024-07-25 00:13:24.394380] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:29:28.554 [2024-07-25 00:13:24.394576] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80
00:29:28.554 [2024-07-25 00:13:24.394591] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:29:28.554 [2024-07-25 00:13:24.394688] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00
00:29:28.554 [2024-07-25 00:13:24.399074] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80
00:29:28.554 [2024-07-25 00:13:24.399235] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80
00:29:28.554 [2024-07-25 00:13:24.399576] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:29:28.554 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:28.812 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:29:28.812 "name": "raid_bdev1",
00:29:28.812 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c",
00:29:28.812 "strip_size_kb": 64,
00:29:28.812 "state": "online",
00:29:28.812 "raid_level": "raid5f",
00:29:28.812 "superblock": true,
00:29:28.812 "num_base_bdevs": 3,
00:29:28.812 "num_base_bdevs_discovered": 3,
00:29:28.812 "num_base_bdevs_operational": 3,
00:29:28.812 "base_bdevs_list": [
00:29:28.812 {
00:29:28.812 "name": "BaseBdev1",
00:29:28.812 "uuid": "3ad94f8d-3b6c-5a9d-aead-dbdca3be9d92",
00:29:28.812 "is_configured": true,
00:29:28.812 "data_offset": 2048,
00:29:28.812 "data_size": 63488
00:29:28.812 },
00:29:28.812 {
00:29:28.812 "name": "BaseBdev2",
00:29:28.812 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0",
00:29:28.812 "is_configured": true,
00:29:28.812 "data_offset": 2048,
00:29:28.812 "data_size": 63488
00:29:28.812 },
00:29:28.812 {
00:29:28.812 "name": "BaseBdev3",
00:29:28.812 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:28.812 "is_configured": true, 00:29:28.812 "data_offset": 2048, 00:29:28.812 "data_size": 63488 00:29:28.812 } 00:29:28.812 ] 00:29:28.813 }' 00:29:28.813 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:28.813 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.072 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:29.072 00:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:29:29.330 [2024-07-25 00:13:25.184398] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=126976 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:29.590 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:29.848 [2024-07-25 00:13:25.664381] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:29:29.848 /dev/nbd0 00:29:29.848 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:29.848 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:29.848 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:29.848 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:29.848 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:29.848 00:13:25 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:29.848 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:29.848 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:29.848 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:29.848 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:29.848 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:29.848 1+0 records in 00:29:29.848 1+0 records out 00:29:29.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256391 s, 16.0 MB/s 00:29:29.849 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.849 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:29.849 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.849 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:29.849 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:29.849 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:29.849 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:29.849 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:29:29.849 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # write_unit_size=256 00:29:29.849 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # echo 128 00:29:29.849 00:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:29:30.416 496+0 records in 00:29:30.416 496+0 records out 00:29:30.416 65011712 bytes (65 MB, 62 MiB) copied, 0.419225 s, 155 MB/s 00:29:30.416 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:30.416 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.416 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:30.416 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:30.416 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:30.416 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:30.416 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:30.675 [2024-07-25 00:13:26.387966] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:30.675 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:30.675 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:30.675 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:30.675 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:30.675 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:30.675 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:30.675 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:30.675 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:30.675 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:30.933 [2024-07-25 00:13:26.581238] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:30.933 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:30.934 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:30.934 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:30.934 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:30.934 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:30.934 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:30.934 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:30.934 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:30.934 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:30.934 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:30.934 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:30.934 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.192 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:31.192 "name": "raid_bdev1", 00:29:31.192 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:31.192 "strip_size_kb": 64, 00:29:31.192 "state": "online", 00:29:31.192 "raid_level": "raid5f", 00:29:31.192 "superblock": true, 00:29:31.192 "num_base_bdevs": 3, 00:29:31.192 "num_base_bdevs_discovered": 2, 00:29:31.192 "num_base_bdevs_operational": 2, 00:29:31.192 "base_bdevs_list": [ 00:29:31.192 { 00:29:31.192 "name": null, 00:29:31.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:31.192 "is_configured": false, 00:29:31.192 "data_offset": 2048, 00:29:31.192 "data_size": 63488 00:29:31.192 }, 00:29:31.192 { 00:29:31.192 "name": "BaseBdev2", 00:29:31.192 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:31.192 "is_configured": true, 00:29:31.192 "data_offset": 2048, 00:29:31.192 "data_size": 63488 00:29:31.192 }, 00:29:31.192 { 00:29:31.192 "name": "BaseBdev3", 00:29:31.192 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:31.192 "is_configured": true, 00:29:31.192 "data_offset": 2048, 00:29:31.192 "data_size": 63488 00:29:31.192 } 00:29:31.192 ] 00:29:31.192 }' 00:29:31.192 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:31.192 00:13:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:29:31.451 00:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:31.710 [2024-07-25 00:13:27.353509] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:31.710 [2024-07-25 00:13:27.365053] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028aa0 00:29:31.710 [2024-07-25 00:13:27.371280] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:31.710 00:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:32.647 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:32.647 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:32.647 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:32.647 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:32.647 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:32.647 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.647 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.906 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:32.906 "name": "raid_bdev1", 00:29:32.906 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:32.906 "strip_size_kb": 64, 00:29:32.906 "state": "online", 00:29:32.906 "raid_level": "raid5f", 00:29:32.906 "superblock": true, 00:29:32.906 "num_base_bdevs": 3, 00:29:32.906 "num_base_bdevs_discovered": 3, 00:29:32.906 "num_base_bdevs_operational": 3, 00:29:32.906 "process": { 00:29:32.906 "type": "rebuild", 00:29:32.906 "target": "spare", 00:29:32.906 "progress": { 00:29:32.906 "blocks": 22528, 00:29:32.906 "percent": 17 00:29:32.906 } 00:29:32.906 }, 00:29:32.906 "base_bdevs_list": [ 00:29:32.906 { 00:29:32.906 "name": "spare", 00:29:32.906 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:32.906 "is_configured": true, 00:29:32.906 "data_offset": 2048, 00:29:32.906 "data_size": 63488 00:29:32.906 }, 00:29:32.906 { 00:29:32.906 "name": "BaseBdev2", 00:29:32.906 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:32.906 "is_configured": true, 00:29:32.906 "data_offset": 2048, 00:29:32.906 "data_size": 63488 00:29:32.906 }, 00:29:32.906 { 00:29:32.906 "name": "BaseBdev3", 00:29:32.906 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:32.906 "is_configured": true, 00:29:32.906 "data_offset": 2048, 00:29:32.906 "data_size": 63488 00:29:32.906 } 00:29:32.906 ] 00:29:32.906 }' 00:29:32.906 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:32.906 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:32.906 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:32.906 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:32.906 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@668 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:33.165 [2024-07-25 00:13:28.829114] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:33.165 [2024-07-25 00:13:28.883710] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:33.165 [2024-07-25 00:13:28.883794] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:33.165 [2024-07-25 00:13:28.883856] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:33.165 [2024-07-25 00:13:28.883884] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.165 00:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.423 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:33.424 "name": "raid_bdev1", 00:29:33.424 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:33.424 "strip_size_kb": 64, 00:29:33.424 "state": "online", 00:29:33.424 "raid_level": "raid5f", 00:29:33.424 "superblock": true, 00:29:33.424 "num_base_bdevs": 3, 00:29:33.424 "num_base_bdevs_discovered": 2, 00:29:33.424 "num_base_bdevs_operational": 2, 00:29:33.424 "base_bdevs_list": [ 00:29:33.424 { 00:29:33.424 "name": null, 00:29:33.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:33.424 "is_configured": false, 00:29:33.424 "data_offset": 2048, 00:29:33.424 "data_size": 63488 00:29:33.424 }, 00:29:33.424 { 00:29:33.424 "name": "BaseBdev2", 00:29:33.424 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:33.424 "is_configured": true, 00:29:33.424 "data_offset": 2048, 00:29:33.424 "data_size": 63488 00:29:33.424 }, 00:29:33.424 { 00:29:33.424 "name": "BaseBdev3", 00:29:33.424 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:33.424 "is_configured": true, 00:29:33.424 "data_offset": 2048, 00:29:33.424 "data_size": 63488 00:29:33.424 } 00:29:33.424 ] 00:29:33.424 }' 00:29:33.424 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:33.424 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:29:33.682 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:33.682 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:33.682 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:33.682 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:33.682 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:33.682 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.682 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.941 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:33.941 "name": "raid_bdev1", 00:29:33.941 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:33.941 "strip_size_kb": 64, 00:29:33.941 "state": "online", 00:29:33.941 "raid_level": "raid5f", 00:29:33.941 "superblock": true, 00:29:33.941 "num_base_bdevs": 3, 00:29:33.941 "num_base_bdevs_discovered": 2, 00:29:33.941 "num_base_bdevs_operational": 2, 00:29:33.941 "base_bdevs_list": [ 00:29:33.941 { 00:29:33.941 "name": null, 00:29:33.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:33.941 "is_configured": false, 00:29:33.941 "data_offset": 2048, 00:29:33.941 "data_size": 63488 00:29:33.941 }, 00:29:33.941 { 00:29:33.941 "name": "BaseBdev2", 00:29:33.941 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:33.941 "is_configured": true, 00:29:33.941 "data_offset": 2048, 00:29:33.941 "data_size": 63488 00:29:33.941 }, 00:29:33.941 { 00:29:33.941 "name": "BaseBdev3", 00:29:33.941 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:33.941 "is_configured": true, 00:29:33.941 "data_offset": 2048, 00:29:33.941 "data_size": 63488 00:29:33.941 } 00:29:33.941 ] 00:29:33.941 }' 00:29:33.941 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:33.941 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:33.941 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:33.941 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:33.941 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:34.200 [2024-07-25 00:13:29.903200] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:34.200 [2024-07-25 00:13:29.913178] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028b70 00:29:34.200 [2024-07-25 00:13:29.918799] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:34.200 00:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:29:35.136 00:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:35.136 00:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:35.136 00:13:30 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:35.136 00:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:35.136 00:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:35.136 00:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.136 00:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:35.395 "name": "raid_bdev1", 00:29:35.395 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:35.395 "strip_size_kb": 64, 00:29:35.395 "state": "online", 00:29:35.395 "raid_level": "raid5f", 00:29:35.395 "superblock": true, 00:29:35.395 "num_base_bdevs": 3, 00:29:35.395 "num_base_bdevs_discovered": 3, 00:29:35.395 "num_base_bdevs_operational": 3, 00:29:35.395 "process": { 00:29:35.395 "type": "rebuild", 00:29:35.395 "target": "spare", 00:29:35.395 "progress": { 00:29:35.395 "blocks": 24576, 00:29:35.395 "percent": 19 00:29:35.395 } 00:29:35.395 }, 00:29:35.395 "base_bdevs_list": [ 00:29:35.395 { 00:29:35.395 "name": "spare", 00:29:35.395 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:35.395 "is_configured": true, 00:29:35.395 "data_offset": 2048, 00:29:35.395 "data_size": 63488 00:29:35.395 }, 00:29:35.395 { 00:29:35.395 "name": "BaseBdev2", 00:29:35.395 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:35.395 "is_configured": true, 00:29:35.395 "data_offset": 2048, 00:29:35.395 "data_size": 63488 00:29:35.395 }, 00:29:35.395 { 00:29:35.395 "name": "BaseBdev3", 00:29:35.395 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:35.395 "is_configured": true, 00:29:35.395 "data_offset": 2048, 00:29:35.395 "data_size": 63488 00:29:35.395 } 00:29:35.395 ] 00:29:35.395 }' 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:29:35.395 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=3 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=1001 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.395 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.654 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:35.654 "name": "raid_bdev1", 00:29:35.654 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:35.654 "strip_size_kb": 64, 00:29:35.654 "state": "online", 00:29:35.654 "raid_level": "raid5f", 00:29:35.654 "superblock": true, 00:29:35.654 "num_base_bdevs": 3, 00:29:35.654 "num_base_bdevs_discovered": 3, 00:29:35.654 "num_base_bdevs_operational": 3, 00:29:35.654 "process": { 00:29:35.654 "type": "rebuild", 00:29:35.654 "target": "spare", 00:29:35.654 "progress": { 00:29:35.654 "blocks": 30720, 00:29:35.654 "percent": 24 00:29:35.654 } 00:29:35.654 }, 00:29:35.654 "base_bdevs_list": [ 00:29:35.654 { 00:29:35.654 "name": "spare", 00:29:35.654 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:35.654 "is_configured": true, 00:29:35.654 "data_offset": 2048, 00:29:35.654 "data_size": 63488 00:29:35.654 }, 00:29:35.654 { 00:29:35.654 "name": "BaseBdev2", 00:29:35.654 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:35.654 "is_configured": true, 00:29:35.654 "data_offset": 2048, 00:29:35.654 "data_size": 63488 00:29:35.654 }, 00:29:35.654 { 00:29:35.654 "name": "BaseBdev3", 00:29:35.654 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:35.654 "is_configured": true, 00:29:35.654 "data_offset": 2048, 00:29:35.654 "data_size": 63488 00:29:35.654 } 00:29:35.654 ] 00:29:35.654 }' 00:29:35.654 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:35.654 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:35.654 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:35.654 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:35.654 00:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:37.031 "name": "raid_bdev1", 00:29:37.031 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:37.031 "strip_size_kb": 64, 00:29:37.031 "state": "online", 00:29:37.031 "raid_level": "raid5f", 00:29:37.031 "superblock": true, 00:29:37.031 "num_base_bdevs": 3, 00:29:37.031 "num_base_bdevs_discovered": 3, 00:29:37.031 "num_base_bdevs_operational": 3, 00:29:37.031 "process": { 00:29:37.031 "type": "rebuild", 00:29:37.031 "target": "spare", 00:29:37.031 "progress": { 00:29:37.031 "blocks": 55296, 00:29:37.031 "percent": 43 00:29:37.031 } 00:29:37.031 }, 00:29:37.031 "base_bdevs_list": [ 00:29:37.031 { 00:29:37.031 "name": "spare", 00:29:37.031 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:37.031 "is_configured": true, 00:29:37.031 "data_offset": 2048, 00:29:37.031 "data_size": 63488 00:29:37.031 }, 00:29:37.031 { 00:29:37.031 "name": "BaseBdev2", 00:29:37.031 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:37.031 "is_configured": true, 00:29:37.031 "data_offset": 2048, 00:29:37.031 "data_size": 63488 00:29:37.031 }, 00:29:37.031 { 00:29:37.031 "name": "BaseBdev3", 00:29:37.031 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:37.031 "is_configured": true, 00:29:37.031 "data_offset": 2048, 00:29:37.031 "data_size": 63488 00:29:37.031 } 00:29:37.031 ] 00:29:37.031 }' 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:37.031 00:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:37.967 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:37.967 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:37.967 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:37.967 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:37.967 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:37.967 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:37.967 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.967 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.225 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:38.225 "name": "raid_bdev1", 00:29:38.225 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:38.225 "strip_size_kb": 64, 00:29:38.225 "state": "online", 00:29:38.225 "raid_level": "raid5f", 00:29:38.225 "superblock": true, 00:29:38.225 "num_base_bdevs": 3, 00:29:38.225 "num_base_bdevs_discovered": 3, 00:29:38.225 "num_base_bdevs_operational": 3, 00:29:38.225 "process": { 00:29:38.225 "type": "rebuild", 00:29:38.225 "target": "spare", 00:29:38.225 "progress": { 00:29:38.225 "blocks": 79872, 00:29:38.225 "percent": 62 00:29:38.225 } 
00:29:38.225 }, 00:29:38.225 "base_bdevs_list": [ 00:29:38.225 { 00:29:38.225 "name": "spare", 00:29:38.225 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:38.225 "is_configured": true, 00:29:38.225 "data_offset": 2048, 00:29:38.225 "data_size": 63488 00:29:38.225 }, 00:29:38.225 { 00:29:38.225 "name": "BaseBdev2", 00:29:38.225 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:38.225 "is_configured": true, 00:29:38.225 "data_offset": 2048, 00:29:38.225 "data_size": 63488 00:29:38.225 }, 00:29:38.225 { 00:29:38.225 "name": "BaseBdev3", 00:29:38.225 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:38.225 "is_configured": true, 00:29:38.225 "data_offset": 2048, 00:29:38.225 "data_size": 63488 00:29:38.225 } 00:29:38.225 ] 00:29:38.225 }' 00:29:38.225 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:38.225 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:38.225 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:38.225 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:38.225 00:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:39.214 00:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:39.214 00:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:39.214 00:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:39.214 00:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:39.214 00:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:39.214 00:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:39.214 00:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:39.214 00:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.473 00:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:39.473 "name": "raid_bdev1", 00:29:39.473 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:39.473 "strip_size_kb": 64, 00:29:39.473 "state": "online", 00:29:39.473 "raid_level": "raid5f", 00:29:39.473 "superblock": true, 00:29:39.473 "num_base_bdevs": 3, 00:29:39.473 "num_base_bdevs_discovered": 3, 00:29:39.473 "num_base_bdevs_operational": 3, 00:29:39.473 "process": { 00:29:39.473 "type": "rebuild", 00:29:39.473 "target": "spare", 00:29:39.473 "progress": { 00:29:39.473 "blocks": 106496, 00:29:39.473 "percent": 83 00:29:39.473 } 00:29:39.473 }, 00:29:39.473 "base_bdevs_list": [ 00:29:39.473 { 00:29:39.473 "name": "spare", 00:29:39.473 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:39.473 "is_configured": true, 00:29:39.473 "data_offset": 2048, 00:29:39.473 "data_size": 63488 00:29:39.473 }, 00:29:39.473 { 00:29:39.473 "name": "BaseBdev2", 00:29:39.473 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:39.473 "is_configured": true, 00:29:39.473 "data_offset": 2048, 00:29:39.473 "data_size": 63488 00:29:39.473 }, 00:29:39.473 { 00:29:39.473 "name": "BaseBdev3", 00:29:39.473 "uuid": 
"76134165-2984-5150-9b81-148f4f5aa190", 00:29:39.473 "is_configured": true, 00:29:39.473 "data_offset": 2048, 00:29:39.473 "data_size": 63488 00:29:39.473 } 00:29:39.473 ] 00:29:39.473 }' 00:29:39.473 00:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:39.473 00:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:39.473 00:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:39.473 00:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:39.473 00:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:40.408 [2024-07-25 00:13:36.169990] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:40.408 [2024-07-25 00:13:36.170088] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:40.408 [2024-07-25 00:13:36.170229] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:40.408 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:40.408 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:40.408 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:40.408 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:40.408 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:40.408 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:40.408 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.408 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.666 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:40.666 "name": "raid_bdev1", 00:29:40.666 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:40.666 "strip_size_kb": 64, 00:29:40.666 "state": "online", 00:29:40.666 "raid_level": "raid5f", 00:29:40.666 "superblock": true, 00:29:40.666 "num_base_bdevs": 3, 00:29:40.666 "num_base_bdevs_discovered": 3, 00:29:40.666 "num_base_bdevs_operational": 3, 00:29:40.666 "base_bdevs_list": [ 00:29:40.666 { 00:29:40.666 "name": "spare", 00:29:40.666 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:40.666 "is_configured": true, 00:29:40.666 "data_offset": 2048, 00:29:40.666 "data_size": 63488 00:29:40.666 }, 00:29:40.666 { 00:29:40.666 "name": "BaseBdev2", 00:29:40.666 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:40.666 "is_configured": true, 00:29:40.666 "data_offset": 2048, 00:29:40.666 "data_size": 63488 00:29:40.666 }, 00:29:40.666 { 00:29:40.666 "name": "BaseBdev3", 00:29:40.666 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:40.666 "is_configured": true, 00:29:40.666 "data_offset": 2048, 00:29:40.666 "data_size": 63488 00:29:40.666 } 00:29:40.666 ] 00:29:40.666 }' 00:29:40.666 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:40.666 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 
00:29:40.666 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:40.666 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:40.666 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:29:40.667 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:40.667 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:40.667 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:40.667 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:40.667 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:40.667 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.667 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:40.925 "name": "raid_bdev1", 00:29:40.925 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:40.925 "strip_size_kb": 64, 00:29:40.925 "state": "online", 00:29:40.925 "raid_level": "raid5f", 00:29:40.925 "superblock": true, 00:29:40.925 "num_base_bdevs": 3, 00:29:40.925 "num_base_bdevs_discovered": 3, 00:29:40.925 "num_base_bdevs_operational": 3, 00:29:40.925 "base_bdevs_list": [ 00:29:40.925 { 00:29:40.925 "name": "spare", 00:29:40.925 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:40.925 "is_configured": true, 00:29:40.925 "data_offset": 2048, 00:29:40.925 "data_size": 63488 00:29:40.925 }, 00:29:40.925 { 00:29:40.925 "name": "BaseBdev2", 00:29:40.925 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:40.925 "is_configured": true, 00:29:40.925 "data_offset": 2048, 00:29:40.925 "data_size": 63488 00:29:40.925 }, 00:29:40.925 { 00:29:40.925 "name": "BaseBdev3", 00:29:40.925 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:40.925 "is_configured": true, 00:29:40.925 "data_offset": 2048, 00:29:40.925 "data_size": 63488 00:29:40.925 } 00:29:40.925 ] 00:29:40.925 }' 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
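The trace now enters verify_raid_bdev_state raid_bdev1 online raid5f 64 3, which re-fetches the same jq-selected JSON and asserts the expected fields. Only the local declarations appear in the xtrace, so the comparisons below are an assumed reconstruction, not code quoted from bdev_raid.sh:

    # Assumed sketch of the assertions behind verify_raid_bdev_state: compare
    # the fetched raid_bdev1 object against the expected state, RAID level,
    # strip size, and operational base-bdev count.
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$info") == raid5f ]]
    [[ $(jq -r '.strip_size_kb' <<< "$info") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 3 ]]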
00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.925 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.183 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:41.183 "name": "raid_bdev1", 00:29:41.183 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:41.183 "strip_size_kb": 64, 00:29:41.183 "state": "online", 00:29:41.183 "raid_level": "raid5f", 00:29:41.183 "superblock": true, 00:29:41.183 "num_base_bdevs": 3, 00:29:41.183 "num_base_bdevs_discovered": 3, 00:29:41.183 "num_base_bdevs_operational": 3, 00:29:41.183 "base_bdevs_list": [ 00:29:41.183 { 00:29:41.183 "name": "spare", 00:29:41.183 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:41.184 "is_configured": true, 00:29:41.184 "data_offset": 2048, 00:29:41.184 "data_size": 63488 00:29:41.184 }, 00:29:41.184 { 00:29:41.184 "name": "BaseBdev2", 00:29:41.184 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:41.184 "is_configured": true, 00:29:41.184 "data_offset": 2048, 00:29:41.184 "data_size": 63488 00:29:41.184 }, 00:29:41.184 { 00:29:41.184 "name": "BaseBdev3", 00:29:41.184 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:41.184 "is_configured": true, 00:29:41.184 "data_offset": 2048, 00:29:41.184 "data_size": 63488 00:29:41.184 } 00:29:41.184 ] 00:29:41.184 }' 00:29:41.184 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:41.184 00:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:41.442 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:41.700 [2024-07-25 00:13:37.421067] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:41.700 [2024-07-25 00:13:37.421096] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:41.700 [2024-07-25 00:13:37.421173] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:41.700 [2024-07-25 00:13:37.421257] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:41.700 [2024-07-25 00:13:37.421276] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline 00:29:41.700 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.700 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- 
# '[' false = true ']' 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:41.958 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:41.959 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:42.217 /dev/nbd0 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:42.217 1+0 records in 00:29:42.217 1+0 records out 00:29:42.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183763 s, 22.3 MB/s 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:42.217 00:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:42.475 /dev/nbd1 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:42.475 1+0 records in 00:29:42.475 1+0 records out 00:29:42.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338952 s, 12.1 MB/s 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:42.475 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:42.734 00:13:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:42.734 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:42.992 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:42.992 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:42.992 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:42.992 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:42.992 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:42.992 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:42.992 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:42.992 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:42.992 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:29:42.992 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:43.250 00:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:43.510 [2024-07-25 00:13:39.227292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:43.510 [2024-07-25 00:13:39.227394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:43.510 [2024-07-25 00:13:39.227430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:29:43.510 [2024-07-25 00:13:39.227446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:43.510 [2024-07-25 00:13:39.229926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:43.510 [2024-07-25 00:13:39.229988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:43.510 [2024-07-25 00:13:39.230095] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:43.510 [2024-07-25 00:13:39.230163] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:43.510 [2024-07-25 00:13:39.230370] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:43.510 [2024-07-25 00:13:39.230490] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:43.510 spare 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.510 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.510 [2024-07-25 00:13:39.330585] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000b180 00:29:43.510 [2024-07-25 00:13:39.330636] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:43.510 [2024-07-25 00:13:39.330753] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000047220 00:29:43.510 [2024-07-25 00:13:39.334944] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000b180 00:29:43.510 [2024-07-25 00:13:39.334971] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000b180 00:29:43.510 [2024-07-25 00:13:39.335181] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:43.769 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:43.769 "name": "raid_bdev1", 00:29:43.769 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:43.769 "strip_size_kb": 64, 00:29:43.769 "state": "online", 00:29:43.769 "raid_level": "raid5f", 00:29:43.769 "superblock": true, 00:29:43.769 "num_base_bdevs": 3, 00:29:43.769 "num_base_bdevs_discovered": 3, 00:29:43.769 "num_base_bdevs_operational": 3, 00:29:43.769 "base_bdevs_list": [ 00:29:43.769 { 00:29:43.769 "name": "spare", 00:29:43.769 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:43.769 "is_configured": true, 00:29:43.769 "data_offset": 2048, 00:29:43.769 "data_size": 63488 00:29:43.769 }, 00:29:43.769 { 00:29:43.769 "name": "BaseBdev2", 00:29:43.769 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:43.769 "is_configured": true, 00:29:43.769 "data_offset": 2048, 00:29:43.769 "data_size": 63488 00:29:43.769 }, 00:29:43.769 { 00:29:43.769 "name": "BaseBdev3", 00:29:43.769 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:43.769 "is_configured": true, 00:29:43.769 "data_offset": 2048, 00:29:43.769 "data_size": 63488 00:29:43.769 } 00:29:43.769 ] 00:29:43.769 }' 00:29:43.769 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:43.769 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:44.027 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:44.028 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.028 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:44.028 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:44.028 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.028 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.028 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.286 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.286 "name": "raid_bdev1", 00:29:44.286 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:44.286 "strip_size_kb": 64, 00:29:44.286 "state": "online", 00:29:44.286 "raid_level": "raid5f", 00:29:44.286 "superblock": true, 00:29:44.286 "num_base_bdevs": 3, 00:29:44.286 "num_base_bdevs_discovered": 3, 00:29:44.286 "num_base_bdevs_operational": 3, 00:29:44.286 "base_bdevs_list": [ 00:29:44.286 { 00:29:44.286 "name": "spare", 00:29:44.286 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:44.286 "is_configured": true, 00:29:44.286 "data_offset": 2048, 00:29:44.286 "data_size": 63488 00:29:44.286 }, 00:29:44.286 { 00:29:44.286 "name": "BaseBdev2", 00:29:44.286 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:44.286 "is_configured": true, 00:29:44.286 "data_offset": 2048, 00:29:44.286 "data_size": 63488 00:29:44.286 }, 00:29:44.286 { 00:29:44.286 "name": "BaseBdev3", 00:29:44.286 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:44.286 "is_configured": true, 00:29:44.286 "data_offset": 2048, 00:29:44.286 "data_size": 63488 00:29:44.286 } 00:29:44.286 ] 00:29:44.286 }' 00:29:44.286 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.286 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:44.286 00:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:44.286 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:44.286 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.286 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:44.545 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:29:44.545 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:44.804 [2024-07-25 00:13:40.432269] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
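The sequence traced above verifies the rebuilt data by exporting both bdevs over NBD and comparing them byte-for-byte, skipping the superblock region: the raid metadata reports data_offset 2048 blocks at blocklen 512, and 2048 * 512 = 1048576, which is exactly the -i offset passed to cmp. A minimal sketch of the same check, assuming a running SPDK app on /var/tmp/spdk-raid.sock and the nbd kernel module loaded:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
    # Skip the 2048-block (1 MiB) data offset on both devices, then compare payloads.
    cmp -i 1048576 /dev/nbd0 /dev/nbd1
    $rpc -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
    $rpc -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
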
00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:44.804 "name": "raid_bdev1", 00:29:44.804 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:44.804 "strip_size_kb": 64, 00:29:44.804 "state": "online", 00:29:44.804 "raid_level": "raid5f", 00:29:44.804 "superblock": true, 00:29:44.804 "num_base_bdevs": 3, 00:29:44.804 "num_base_bdevs_discovered": 2, 00:29:44.804 "num_base_bdevs_operational": 2, 00:29:44.804 "base_bdevs_list": [ 00:29:44.804 { 00:29:44.804 "name": null, 00:29:44.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:44.804 "is_configured": false, 00:29:44.804 "data_offset": 2048, 00:29:44.804 "data_size": 63488 00:29:44.804 }, 00:29:44.804 { 00:29:44.804 "name": "BaseBdev2", 00:29:44.804 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:44.804 "is_configured": true, 00:29:44.804 "data_offset": 2048, 00:29:44.804 "data_size": 63488 00:29:44.804 }, 00:29:44.804 { 00:29:44.804 "name": "BaseBdev3", 00:29:44.804 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:44.804 "is_configured": true, 00:29:44.804 "data_offset": 2048, 00:29:44.804 "data_size": 63488 00:29:44.804 } 00:29:44.804 ] 00:29:44.804 }' 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:44.804 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:45.371 00:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:45.371 [2024-07-25 00:13:41.148443] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:45.371 [2024-07-25 00:13:41.148630] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:45.371 [2024-07-25 00:13:41.148652] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
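Each verify_raid_bdev_state/verify_raid_bdev_process pass in this trace reduces to one RPC plus a jq filter: dump all raid bdevs, select the bdev under test, and compare fields against the expected values. A condensed sketch of that query, assuming jq is available; the per-field checks the helper performs are shown as comments:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    jq -r '.state'                     <<< "$info"  # expected: online
    jq -r '.raid_level'                <<< "$info"  # expected: raid5f
    jq -r '.num_base_bdevs_discovered' <<< "$info"  # drops to 2 once the spare is removed
    jq -r '.process.type // "none"'    <<< "$info"  # "rebuild" while the spare resyncs
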
00:29:45.371 [2024-07-25 00:13:41.148707] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:45.371 [2024-07-25 00:13:41.158604] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000472f0 00:29:45.371 [2024-07-25 00:13:41.164330] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:45.371 00:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:29:46.307 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:46.307 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:46.307 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:46.307 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:46.307 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:46.565 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.565 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.565 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.565 "name": "raid_bdev1", 00:29:46.565 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:46.565 "strip_size_kb": 64, 00:29:46.565 "state": "online", 00:29:46.565 "raid_level": "raid5f", 00:29:46.565 "superblock": true, 00:29:46.565 "num_base_bdevs": 3, 00:29:46.565 "num_base_bdevs_discovered": 3, 00:29:46.565 "num_base_bdevs_operational": 3, 00:29:46.565 "process": { 00:29:46.565 "type": "rebuild", 00:29:46.565 "target": "spare", 00:29:46.565 "progress": { 00:29:46.565 "blocks": 24576, 00:29:46.565 "percent": 19 00:29:46.565 } 00:29:46.565 }, 00:29:46.565 "base_bdevs_list": [ 00:29:46.565 { 00:29:46.565 "name": "spare", 00:29:46.565 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:46.565 "is_configured": true, 00:29:46.565 "data_offset": 2048, 00:29:46.565 "data_size": 63488 00:29:46.565 }, 00:29:46.565 { 00:29:46.565 "name": "BaseBdev2", 00:29:46.565 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:46.565 "is_configured": true, 00:29:46.565 "data_offset": 2048, 00:29:46.565 "data_size": 63488 00:29:46.565 }, 00:29:46.565 { 00:29:46.565 "name": "BaseBdev3", 00:29:46.565 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:46.565 "is_configured": true, 00:29:46.565 "data_offset": 2048, 00:29:46.565 "data_size": 63488 00:29:46.565 } 00:29:46.565 ] 00:29:46.565 }' 00:29:46.565 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:46.565 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:46.565 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:46.824 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:46.824 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:46.824 [2024-07-25 00:13:42.609729] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:46.824 [2024-07-25 
00:13:42.676985] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:46.824 [2024-07-25 00:13:42.677065] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:46.824 [2024-07-25 00:13:42.677086] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:46.824 [2024-07-25 00:13:42.677097] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.082 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.340 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:47.340 "name": "raid_bdev1", 00:29:47.340 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:47.340 "strip_size_kb": 64, 00:29:47.340 "state": "online", 00:29:47.340 "raid_level": "raid5f", 00:29:47.340 "superblock": true, 00:29:47.340 "num_base_bdevs": 3, 00:29:47.340 "num_base_bdevs_discovered": 2, 00:29:47.340 "num_base_bdevs_operational": 2, 00:29:47.340 "base_bdevs_list": [ 00:29:47.340 { 00:29:47.340 "name": null, 00:29:47.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.340 "is_configured": false, 00:29:47.340 "data_offset": 2048, 00:29:47.340 "data_size": 63488 00:29:47.340 }, 00:29:47.340 { 00:29:47.340 "name": "BaseBdev2", 00:29:47.340 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:47.340 "is_configured": true, 00:29:47.340 "data_offset": 2048, 00:29:47.340 "data_size": 63488 00:29:47.340 }, 00:29:47.340 { 00:29:47.340 "name": "BaseBdev3", 00:29:47.340 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:47.340 "is_configured": true, 00:29:47.340 "data_offset": 2048, 00:29:47.340 "data_size": 63488 00:29:47.340 } 00:29:47.340 ] 00:29:47.340 }' 00:29:47.340 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:47.340 00:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:47.598 00:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:47.598 
[2024-07-25 00:13:43.451959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:47.598 [2024-07-25 00:13:43.452075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:47.598 [2024-07-25 00:13:43.452127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:29:47.598 [2024-07-25 00:13:43.452157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:47.598 [2024-07-25 00:13:43.452668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:47.598 [2024-07-25 00:13:43.452709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:47.598 [2024-07-25 00:13:43.452858] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:47.598 [2024-07-25 00:13:43.452881] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:47.598 [2024-07-25 00:13:43.452893] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:47.598 [2024-07-25 00:13:43.452950] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:47.598 [2024-07-25 00:13:43.464323] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000473c0 00:29:47.598 spare 00:29:47.856 [2024-07-25 00:13:43.471419] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:47.856 00:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:29:48.790 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:48.790 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:48.790 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:48.790 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:48.790 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:48.790 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.790 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.048 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:49.048 "name": "raid_bdev1", 00:29:49.048 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:49.048 "strip_size_kb": 64, 00:29:49.048 "state": "online", 00:29:49.048 "raid_level": "raid5f", 00:29:49.048 "superblock": true, 00:29:49.048 "num_base_bdevs": 3, 00:29:49.048 "num_base_bdevs_discovered": 3, 00:29:49.048 "num_base_bdevs_operational": 3, 00:29:49.048 "process": { 00:29:49.048 "type": "rebuild", 00:29:49.048 "target": "spare", 00:29:49.048 "progress": { 00:29:49.048 "blocks": 24576, 00:29:49.048 "percent": 19 00:29:49.048 } 00:29:49.048 }, 00:29:49.048 "base_bdevs_list": [ 00:29:49.048 { 00:29:49.048 "name": "spare", 00:29:49.048 "uuid": "b5c2040f-1621-5190-8ee8-0e4447f464ea", 00:29:49.048 "is_configured": true, 00:29:49.048 "data_offset": 2048, 00:29:49.048 "data_size": 63488 00:29:49.048 }, 00:29:49.048 { 00:29:49.048 "name": "BaseBdev2", 00:29:49.048 "uuid": 
"766d786f-5260-5432-9b11-e876955a2ca0", 00:29:49.048 "is_configured": true, 00:29:49.048 "data_offset": 2048, 00:29:49.048 "data_size": 63488 00:29:49.048 }, 00:29:49.048 { 00:29:49.048 "name": "BaseBdev3", 00:29:49.048 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:49.048 "is_configured": true, 00:29:49.048 "data_offset": 2048, 00:29:49.048 "data_size": 63488 00:29:49.048 } 00:29:49.048 ] 00:29:49.048 }' 00:29:49.048 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:49.048 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:49.048 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:49.048 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:49.048 00:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:49.048 [2024-07-25 00:13:44.917457] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:49.306 [2024-07-25 00:13:44.983830] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:49.306 [2024-07-25 00:13:44.983923] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:49.306 [2024-07-25 00:13:44.983948] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:49.306 [2024-07-25 00:13:44.983957] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.306 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.564 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:49.564 "name": "raid_bdev1", 00:29:49.564 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:49.564 "strip_size_kb": 64, 00:29:49.564 "state": "online", 00:29:49.564 "raid_level": "raid5f", 00:29:49.564 "superblock": true, 00:29:49.564 "num_base_bdevs": 3, 00:29:49.564 "num_base_bdevs_discovered": 2, 00:29:49.564 
"num_base_bdevs_operational": 2, 00:29:49.564 "base_bdevs_list": [ 00:29:49.564 { 00:29:49.564 "name": null, 00:29:49.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.564 "is_configured": false, 00:29:49.564 "data_offset": 2048, 00:29:49.564 "data_size": 63488 00:29:49.564 }, 00:29:49.564 { 00:29:49.564 "name": "BaseBdev2", 00:29:49.564 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:49.564 "is_configured": true, 00:29:49.564 "data_offset": 2048, 00:29:49.564 "data_size": 63488 00:29:49.564 }, 00:29:49.564 { 00:29:49.564 "name": "BaseBdev3", 00:29:49.564 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:49.564 "is_configured": true, 00:29:49.564 "data_offset": 2048, 00:29:49.564 "data_size": 63488 00:29:49.564 } 00:29:49.564 ] 00:29:49.564 }' 00:29:49.564 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:49.564 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:49.822 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:49.822 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:49.822 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:49.822 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:49.822 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:49.822 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.822 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.081 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:50.081 "name": "raid_bdev1", 00:29:50.081 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:50.081 "strip_size_kb": 64, 00:29:50.081 "state": "online", 00:29:50.081 "raid_level": "raid5f", 00:29:50.081 "superblock": true, 00:29:50.081 "num_base_bdevs": 3, 00:29:50.081 "num_base_bdevs_discovered": 2, 00:29:50.081 "num_base_bdevs_operational": 2, 00:29:50.081 "base_bdevs_list": [ 00:29:50.081 { 00:29:50.081 "name": null, 00:29:50.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.081 "is_configured": false, 00:29:50.081 "data_offset": 2048, 00:29:50.081 "data_size": 63488 00:29:50.081 }, 00:29:50.081 { 00:29:50.081 "name": "BaseBdev2", 00:29:50.081 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:50.081 "is_configured": true, 00:29:50.081 "data_offset": 2048, 00:29:50.081 "data_size": 63488 00:29:50.081 }, 00:29:50.081 { 00:29:50.081 "name": "BaseBdev3", 00:29:50.081 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:50.081 "is_configured": true, 00:29:50.081 "data_offset": 2048, 00:29:50.081 "data_size": 63488 00:29:50.081 } 00:29:50.081 ] 00:29:50.081 }' 00:29:50.081 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:50.081 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:50.081 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:50.081 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:50.081 00:13:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:50.339 00:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:50.339 [2024-07-25 00:13:46.146116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:50.339 [2024-07-25 00:13:46.146196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:50.339 [2024-07-25 00:13:46.146226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:29:50.339 [2024-07-25 00:13:46.146239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:50.339 [2024-07-25 00:13:46.146689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:50.339 [2024-07-25 00:13:46.146723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:50.339 [2024-07-25 00:13:46.146832] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:50.339 [2024-07-25 00:13:46.146850] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:50.339 [2024-07-25 00:13:46.146865] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:50.339 BaseBdev1 00:29:50.339 00:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.715 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:51.715 "name": "raid_bdev1", 00:29:51.715 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:51.715 "strip_size_kb": 64, 00:29:51.715 "state": "online", 00:29:51.715 "raid_level": "raid5f", 00:29:51.715 "superblock": true, 00:29:51.715 "num_base_bdevs": 3, 00:29:51.715 "num_base_bdevs_discovered": 2, 00:29:51.715 
"num_base_bdevs_operational": 2, 00:29:51.715 "base_bdevs_list": [ 00:29:51.715 { 00:29:51.715 "name": null, 00:29:51.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.715 "is_configured": false, 00:29:51.715 "data_offset": 2048, 00:29:51.715 "data_size": 63488 00:29:51.715 }, 00:29:51.715 { 00:29:51.715 "name": "BaseBdev2", 00:29:51.715 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:51.715 "is_configured": true, 00:29:51.715 "data_offset": 2048, 00:29:51.715 "data_size": 63488 00:29:51.715 }, 00:29:51.715 { 00:29:51.715 "name": "BaseBdev3", 00:29:51.716 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:51.716 "is_configured": true, 00:29:51.716 "data_offset": 2048, 00:29:51.716 "data_size": 63488 00:29:51.716 } 00:29:51.716 ] 00:29:51.716 }' 00:29:51.716 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:51.716 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.975 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:51.975 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:51.975 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:51.975 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:51.975 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:51.975 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.975 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:52.234 "name": "raid_bdev1", 00:29:52.234 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:52.234 "strip_size_kb": 64, 00:29:52.234 "state": "online", 00:29:52.234 "raid_level": "raid5f", 00:29:52.234 "superblock": true, 00:29:52.234 "num_base_bdevs": 3, 00:29:52.234 "num_base_bdevs_discovered": 2, 00:29:52.234 "num_base_bdevs_operational": 2, 00:29:52.234 "base_bdevs_list": [ 00:29:52.234 { 00:29:52.234 "name": null, 00:29:52.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.234 "is_configured": false, 00:29:52.234 "data_offset": 2048, 00:29:52.234 "data_size": 63488 00:29:52.234 }, 00:29:52.234 { 00:29:52.234 "name": "BaseBdev2", 00:29:52.234 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:52.234 "is_configured": true, 00:29:52.234 "data_offset": 2048, 00:29:52.234 "data_size": 63488 00:29:52.234 }, 00:29:52.234 { 00:29:52.234 "name": "BaseBdev3", 00:29:52.234 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:52.234 "is_configured": true, 00:29:52.234 "data_offset": 2048, 00:29:52.234 "data_size": 63488 00:29:52.234 } 00:29:52.234 ] 00:29:52.234 }' 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:52.234 00:13:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:52.234 00:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:52.493 [2024-07-25 00:13:48.182629] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:52.493 [2024-07-25 00:13:48.182786] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:52.493 [2024-07-25 00:13:48.182819] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:52.493 request: 00:29:52.493 { 00:29:52.493 "base_bdev": "BaseBdev1", 00:29:52.493 "raid_bdev": "raid_bdev1", 00:29:52.493 "method": "bdev_raid_add_base_bdev", 00:29:52.493 "req_id": 1 00:29:52.493 } 00:29:52.493 Got JSON-RPC error response 00:29:52.493 response: 00:29:52.493 { 00:29:52.493 "code": -22, 00:29:52.493 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:52.493 } 00:29:52.493 00:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:29:52.493 00:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:52.493 00:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:52.493 00:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:52.493 00:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
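The NOT block above is a negative test: BaseBdev1's superblock carries sequence number 1 while the raid bdev is at 5, and the raid superblock no longer contains BaseBdev1's uuid, so bdev_raid_add_base_bdev is expected to fail with -22 (Invalid argument). A sketch of the same assertion using plain shell negation in place of the harness's NOT helper:

    # Expect failure: a base bdev with a stale superblock must be rejected.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
        echo 'stale BaseBdev1 was unexpectedly re-added' >&2
        exit 1
    fi
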
00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.430 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.689 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:53.689 "name": "raid_bdev1", 00:29:53.689 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:53.689 "strip_size_kb": 64, 00:29:53.689 "state": "online", 00:29:53.689 "raid_level": "raid5f", 00:29:53.689 "superblock": true, 00:29:53.689 "num_base_bdevs": 3, 00:29:53.689 "num_base_bdevs_discovered": 2, 00:29:53.689 "num_base_bdevs_operational": 2, 00:29:53.689 "base_bdevs_list": [ 00:29:53.689 { 00:29:53.689 "name": null, 00:29:53.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.689 "is_configured": false, 00:29:53.689 "data_offset": 2048, 00:29:53.689 "data_size": 63488 00:29:53.689 }, 00:29:53.689 { 00:29:53.689 "name": "BaseBdev2", 00:29:53.689 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:53.689 "is_configured": true, 00:29:53.689 "data_offset": 2048, 00:29:53.689 "data_size": 63488 00:29:53.689 }, 00:29:53.689 { 00:29:53.689 "name": "BaseBdev3", 00:29:53.689 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:53.689 "is_configured": true, 00:29:53.689 "data_offset": 2048, 00:29:53.689 "data_size": 63488 00:29:53.689 } 00:29:53.689 ] 00:29:53.689 }' 00:29:53.689 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:53.689 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:53.956 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:53.956 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:53.956 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:53.956 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:53.956 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:53.956 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.956 00:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.231 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:54.231 "name": "raid_bdev1", 00:29:54.231 "uuid": "162164e2-0aec-42f3-bce0-95f7562a7d2c", 00:29:54.231 
"strip_size_kb": 64, 00:29:54.232 "state": "online", 00:29:54.232 "raid_level": "raid5f", 00:29:54.232 "superblock": true, 00:29:54.232 "num_base_bdevs": 3, 00:29:54.232 "num_base_bdevs_discovered": 2, 00:29:54.232 "num_base_bdevs_operational": 2, 00:29:54.232 "base_bdevs_list": [ 00:29:54.232 { 00:29:54.232 "name": null, 00:29:54.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.232 "is_configured": false, 00:29:54.232 "data_offset": 2048, 00:29:54.232 "data_size": 63488 00:29:54.232 }, 00:29:54.232 { 00:29:54.232 "name": "BaseBdev2", 00:29:54.232 "uuid": "766d786f-5260-5432-9b11-e876955a2ca0", 00:29:54.232 "is_configured": true, 00:29:54.232 "data_offset": 2048, 00:29:54.232 "data_size": 63488 00:29:54.232 }, 00:29:54.232 { 00:29:54.232 "name": "BaseBdev3", 00:29:54.232 "uuid": "76134165-2984-5150-9b81-148f4f5aa190", 00:29:54.232 "is_configured": true, 00:29:54.232 "data_offset": 2048, 00:29:54.232 "data_size": 63488 00:29:54.232 } 00:29:54.232 ] 00:29:54.232 }' 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 105104 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 105104 ']' 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 105104 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105104 00:29:54.232 killing process with pid 105104 00:29:54.232 Received shutdown signal, test time was about 60.000000 seconds 00:29:54.232 00:29:54.232 Latency(us) 00:29:54.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.232 =================================================================================================================== 00:29:54.232 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105104' 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 105104 00:29:54.232 00:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 105104 00:29:54.232 [2024-07-25 00:13:50.093344] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:54.232 [2024-07-25 00:13:50.093487] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:54.232 [2024-07-25 00:13:50.093581] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:54.232 [2024-07-25 00:13:50.093606] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b180 name raid_bdev1, state offline 00:29:54.490 [2024-07-25 00:13:50.344371] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:55.427 ************************************ 00:29:55.427 END TEST raid5f_rebuild_test_sb 00:29:55.427 ************************************ 00:29:55.427 00:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:29:55.427 00:29:55.427 real 0m30.111s 00:29:55.427 user 0m44.003s 00:29:55.427 sys 0m3.718s 00:29:55.427 00:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:55.427 00:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.685 00:13:51 bdev_raid -- bdev/bdev_raid.sh@965 -- # for n in {3..4} 00:29:55.685 00:13:51 bdev_raid -- bdev/bdev_raid.sh@966 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:29:55.685 00:13:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:55.685 00:13:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:55.685 00:13:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:55.685 ************************************ 00:29:55.685 START TEST raid5f_state_function_test 00:29:55.685 ************************************ 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=105941 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 105941' 00:29:55.685 Process raid pid: 105941 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 105941 /var/tmp/spdk-raid.sock 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 105941 ']' 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:29:55.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:55.685 00:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.685 [2024-07-25 00:13:51.392165] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
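The raid5f_state_function_test starting here runs against a bare bdev_svc app rather than the full target: it launches the app on its own RPC socket, waits for it to listen, then issues bdev_raid_create for four base bdevs that do not exist yet, which leaves the raid in the "configuring" state seen below. A sketch of that startup, assuming the repo layout used throughout this run; the harness's waitforlisten is approximated here with a crude socket poll:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Rough stand-in for waitforlisten: wait until the RPC socket appears.
    until [ -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
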
00:29:55.685 [2024-07-25 00:13:51.392336] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.944 [2024-07-25 00:13:51.565066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.944 [2024-07-25 00:13:51.722507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.202 [2024-07-25 00:13:51.865928] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:56.769 [2024-07-25 00:13:52.506488] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:56.769 [2024-07-25 00:13:52.506585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:56.769 [2024-07-25 00:13:52.506600] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:56.769 [2024-07-25 00:13:52.506615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:56.769 [2024-07-25 00:13:52.506624] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:56.769 [2024-07-25 00:13:52.506636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:56.769 [2024-07-25 00:13:52.506644] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:56.769 [2024-07-25 00:13:52.506657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:56.769 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.769 00:13:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:57.026 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:57.026 "name": "Existed_Raid", 00:29:57.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.026 "strip_size_kb": 64, 00:29:57.026 "state": "configuring", 00:29:57.026 "raid_level": "raid5f", 00:29:57.026 "superblock": false, 00:29:57.026 "num_base_bdevs": 4, 00:29:57.026 "num_base_bdevs_discovered": 0, 00:29:57.026 "num_base_bdevs_operational": 4, 00:29:57.026 "base_bdevs_list": [ 00:29:57.026 { 00:29:57.026 "name": "BaseBdev1", 00:29:57.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.026 "is_configured": false, 00:29:57.026 "data_offset": 0, 00:29:57.026 "data_size": 0 00:29:57.026 }, 00:29:57.026 { 00:29:57.026 "name": "BaseBdev2", 00:29:57.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.026 "is_configured": false, 00:29:57.026 "data_offset": 0, 00:29:57.026 "data_size": 0 00:29:57.026 }, 00:29:57.026 { 00:29:57.026 "name": "BaseBdev3", 00:29:57.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.026 "is_configured": false, 00:29:57.026 "data_offset": 0, 00:29:57.026 "data_size": 0 00:29:57.026 }, 00:29:57.026 { 00:29:57.026 "name": "BaseBdev4", 00:29:57.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.026 "is_configured": false, 00:29:57.026 "data_offset": 0, 00:29:57.026 "data_size": 0 00:29:57.026 } 00:29:57.026 ] 00:29:57.026 }' 00:29:57.026 00:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:57.026 00:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.284 00:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:57.542 [2024-07-25 00:13:53.282549] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:57.542 [2024-07-25 00:13:53.282594] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:29:57.542 00:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:57.813 [2024-07-25 00:13:53.466598] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:57.813 [2024-07-25 00:13:53.466664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:57.813 [2024-07-25 00:13:53.466677] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:57.813 [2024-07-25 00:13:53.466691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:57.813 [2024-07-25 00:13:53.466700] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:57.813 [2024-07-25 00:13:53.466712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:57.813 [2024-07-25 00:13:53.466720] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:57.813 [2024-07-25 00:13:53.466733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:57.813 00:13:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:29:58.072 [2024-07-25 00:13:53.732937] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:58.072 BaseBdev1 00:29:58.072 00:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:29:58.072 00:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:29:58.072 00:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:58.072 00:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:29:58.073 00:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:58.073 00:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:58.073 00:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:58.073 00:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:58.329 [ 00:29:58.329 { 00:29:58.329 "name": "BaseBdev1", 00:29:58.329 "aliases": [ 00:29:58.329 "ac0ba910-0467-4eb9-841c-e1b786926dd1" 00:29:58.329 ], 00:29:58.329 "product_name": "Malloc disk", 00:29:58.329 "block_size": 512, 00:29:58.329 "num_blocks": 65536, 00:29:58.329 "uuid": "ac0ba910-0467-4eb9-841c-e1b786926dd1", 00:29:58.329 "assigned_rate_limits": { 00:29:58.329 "rw_ios_per_sec": 0, 00:29:58.329 "rw_mbytes_per_sec": 0, 00:29:58.329 "r_mbytes_per_sec": 0, 00:29:58.329 "w_mbytes_per_sec": 0 00:29:58.329 }, 00:29:58.329 "claimed": true, 00:29:58.329 "claim_type": "exclusive_write", 00:29:58.329 "zoned": false, 00:29:58.329 "supported_io_types": { 00:29:58.329 "read": true, 00:29:58.329 "write": true, 00:29:58.329 "unmap": true, 00:29:58.329 "flush": true, 00:29:58.329 "reset": true, 00:29:58.329 "nvme_admin": false, 00:29:58.329 "nvme_io": false, 00:29:58.329 "nvme_io_md": false, 00:29:58.329 "write_zeroes": true, 00:29:58.329 "zcopy": true, 00:29:58.329 "get_zone_info": false, 00:29:58.329 "zone_management": false, 00:29:58.329 "zone_append": false, 00:29:58.329 "compare": false, 00:29:58.329 "compare_and_write": false, 00:29:58.329 "abort": true, 00:29:58.329 "seek_hole": false, 00:29:58.329 "seek_data": false, 00:29:58.329 "copy": true, 00:29:58.329 "nvme_iov_md": false 00:29:58.329 }, 00:29:58.329 "memory_domains": [ 00:29:58.329 { 00:29:58.330 "dma_device_id": "system", 00:29:58.330 "dma_device_type": 1 00:29:58.330 }, 00:29:58.330 { 00:29:58.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:58.330 "dma_device_type": 2 00:29:58.330 } 00:29:58.330 ], 00:29:58.330 "driver_specific": {} 00:29:58.330 } 00:29:58.330 ] 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:58.330 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.588 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:58.588 "name": "Existed_Raid", 00:29:58.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.588 "strip_size_kb": 64, 00:29:58.588 "state": "configuring", 00:29:58.588 "raid_level": "raid5f", 00:29:58.588 "superblock": false, 00:29:58.588 "num_base_bdevs": 4, 00:29:58.588 "num_base_bdevs_discovered": 1, 00:29:58.588 "num_base_bdevs_operational": 4, 00:29:58.588 "base_bdevs_list": [ 00:29:58.588 { 00:29:58.588 "name": "BaseBdev1", 00:29:58.588 "uuid": "ac0ba910-0467-4eb9-841c-e1b786926dd1", 00:29:58.588 "is_configured": true, 00:29:58.588 "data_offset": 0, 00:29:58.588 "data_size": 65536 00:29:58.588 }, 00:29:58.588 { 00:29:58.588 "name": "BaseBdev2", 00:29:58.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.588 "is_configured": false, 00:29:58.588 "data_offset": 0, 00:29:58.588 "data_size": 0 00:29:58.588 }, 00:29:58.588 { 00:29:58.588 "name": "BaseBdev3", 00:29:58.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.588 "is_configured": false, 00:29:58.588 "data_offset": 0, 00:29:58.588 "data_size": 0 00:29:58.588 }, 00:29:58.588 { 00:29:58.588 "name": "BaseBdev4", 00:29:58.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.588 "is_configured": false, 00:29:58.588 "data_offset": 0, 00:29:58.588 "data_size": 0 00:29:58.588 } 00:29:58.588 ] 00:29:58.588 }' 00:29:58.588 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:58.588 00:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:58.846 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:59.105 [2024-07-25 00:13:54.889279] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:59.105 [2024-07-25 00:13:54.889353] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:29:59.105 00:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:29:59.364 [2024-07-25 00:13:55.157377] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:59.364 [2024-07-25 00:13:55.159173] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:59.364 [2024-07-25 00:13:55.159251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:59.364 [2024-07-25 00:13:55.159265] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:59.364 [2024-07-25 00:13:55.159279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:59.364 [2024-07-25 00:13:55.159288] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:59.364 [2024-07-25 00:13:55.159303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.364 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:59.623 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:59.623 "name": "Existed_Raid", 00:29:59.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.623 "strip_size_kb": 64, 00:29:59.623 "state": "configuring", 00:29:59.623 "raid_level": "raid5f", 00:29:59.623 "superblock": false, 00:29:59.623 "num_base_bdevs": 4, 00:29:59.623 "num_base_bdevs_discovered": 1, 00:29:59.623 "num_base_bdevs_operational": 4, 00:29:59.623 "base_bdevs_list": [ 00:29:59.623 { 00:29:59.623 "name": "BaseBdev1", 00:29:59.623 "uuid": "ac0ba910-0467-4eb9-841c-e1b786926dd1", 00:29:59.623 "is_configured": true, 00:29:59.623 "data_offset": 0, 00:29:59.623 "data_size": 65536 00:29:59.623 }, 00:29:59.623 { 00:29:59.623 "name": "BaseBdev2", 00:29:59.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.623 "is_configured": false, 00:29:59.623 "data_offset": 0, 00:29:59.623 "data_size": 0 00:29:59.623 }, 00:29:59.623 { 
00:29:59.623 "name": "BaseBdev3", 00:29:59.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.623 "is_configured": false, 00:29:59.623 "data_offset": 0, 00:29:59.623 "data_size": 0 00:29:59.623 }, 00:29:59.623 { 00:29:59.623 "name": "BaseBdev4", 00:29:59.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.623 "is_configured": false, 00:29:59.623 "data_offset": 0, 00:29:59.623 "data_size": 0 00:29:59.623 } 00:29:59.623 ] 00:29:59.623 }' 00:29:59.623 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:59.623 00:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.882 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:00.141 [2024-07-25 00:13:55.915236] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:00.141 BaseBdev2 00:30:00.141 00:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:00.141 00:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:00.141 00:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:00.141 00:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:00.141 00:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:00.141 00:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:00.141 00:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:00.399 00:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:00.658 [ 00:30:00.658 { 00:30:00.658 "name": "BaseBdev2", 00:30:00.658 "aliases": [ 00:30:00.658 "b6a83268-45ad-4fdf-aade-68299ed975df" 00:30:00.658 ], 00:30:00.658 "product_name": "Malloc disk", 00:30:00.658 "block_size": 512, 00:30:00.658 "num_blocks": 65536, 00:30:00.658 "uuid": "b6a83268-45ad-4fdf-aade-68299ed975df", 00:30:00.658 "assigned_rate_limits": { 00:30:00.658 "rw_ios_per_sec": 0, 00:30:00.658 "rw_mbytes_per_sec": 0, 00:30:00.658 "r_mbytes_per_sec": 0, 00:30:00.658 "w_mbytes_per_sec": 0 00:30:00.658 }, 00:30:00.658 "claimed": true, 00:30:00.658 "claim_type": "exclusive_write", 00:30:00.658 "zoned": false, 00:30:00.658 "supported_io_types": { 00:30:00.658 "read": true, 00:30:00.658 "write": true, 00:30:00.658 "unmap": true, 00:30:00.658 "flush": true, 00:30:00.658 "reset": true, 00:30:00.658 "nvme_admin": false, 00:30:00.658 "nvme_io": false, 00:30:00.658 "nvme_io_md": false, 00:30:00.658 "write_zeroes": true, 00:30:00.658 "zcopy": true, 00:30:00.658 "get_zone_info": false, 00:30:00.658 "zone_management": false, 00:30:00.658 "zone_append": false, 00:30:00.658 "compare": false, 00:30:00.658 "compare_and_write": false, 00:30:00.658 "abort": true, 00:30:00.658 "seek_hole": false, 00:30:00.658 "seek_data": false, 00:30:00.658 "copy": true, 00:30:00.658 "nvme_iov_md": false 00:30:00.658 }, 00:30:00.658 "memory_domains": [ 00:30:00.658 { 00:30:00.658 "dma_device_id": "system", 00:30:00.658 "dma_device_type": 1 00:30:00.658 }, 
00:30:00.658 { 00:30:00.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:00.658 "dma_device_type": 2 00:30:00.658 } 00:30:00.658 ], 00:30:00.658 "driver_specific": {} 00:30:00.658 } 00:30:00.658 ] 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.658 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:00.917 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:00.917 "name": "Existed_Raid", 00:30:00.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.917 "strip_size_kb": 64, 00:30:00.917 "state": "configuring", 00:30:00.917 "raid_level": "raid5f", 00:30:00.917 "superblock": false, 00:30:00.917 "num_base_bdevs": 4, 00:30:00.917 "num_base_bdevs_discovered": 2, 00:30:00.917 "num_base_bdevs_operational": 4, 00:30:00.917 "base_bdevs_list": [ 00:30:00.917 { 00:30:00.917 "name": "BaseBdev1", 00:30:00.917 "uuid": "ac0ba910-0467-4eb9-841c-e1b786926dd1", 00:30:00.917 "is_configured": true, 00:30:00.917 "data_offset": 0, 00:30:00.917 "data_size": 65536 00:30:00.917 }, 00:30:00.917 { 00:30:00.917 "name": "BaseBdev2", 00:30:00.917 "uuid": "b6a83268-45ad-4fdf-aade-68299ed975df", 00:30:00.917 "is_configured": true, 00:30:00.917 "data_offset": 0, 00:30:00.917 "data_size": 65536 00:30:00.917 }, 00:30:00.917 { 00:30:00.917 "name": "BaseBdev3", 00:30:00.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.917 "is_configured": false, 00:30:00.917 "data_offset": 0, 00:30:00.917 "data_size": 0 00:30:00.917 }, 00:30:00.917 { 00:30:00.917 "name": "BaseBdev4", 00:30:00.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.917 "is_configured": false, 00:30:00.917 "data_offset": 0, 00:30:00.917 "data_size": 0 00:30:00.917 } 00:30:00.917 ] 00:30:00.917 }' 00:30:00.917 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
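
Each verify_raid_bdev_state call traced above reduces to one bdev_raid_get_bdevs RPC plus jq assertions on the returned object. A condensed sketch of the check as it stands at this point in the run (BaseBdev1 and BaseBdev2 claimed); the jq filter is copied from the trace, while the bare test commands are illustrative stand-ins for the script's comparisons:

    # fetch the array of raid bdevs and keep only the one under test
    info=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid")')

    # assert the fields the test keys on, using values from the dump above
    [ "$(jq -r '.state' <<<"$info")" = configuring ]
    [ "$(jq -r '.raid_level' <<<"$info")" = raid5f ]
    [ "$(jq -r '.strip_size_kb' <<<"$info")" -eq 64 ]
    [ "$(jq -r '.num_base_bdevs_discovered' <<<"$info")" -eq 2 ]

The same probe is re-run after every claim with only the expected counters changed; once all four base bdevs are discovered, the state field flips from configuring to online, as the later dumps show.
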
00:30:00.917 00:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.175 00:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:01.434 [2024-07-25 00:13:57.222786] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:01.434 BaseBdev3 00:30:01.434 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:30:01.434 00:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:30:01.434 00:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:01.434 00:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:01.434 00:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:01.434 00:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:01.434 00:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:01.693 00:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:01.952 [ 00:30:01.952 { 00:30:01.952 "name": "BaseBdev3", 00:30:01.952 "aliases": [ 00:30:01.952 "780fd367-94a6-44af-9736-372bcbb46930" 00:30:01.952 ], 00:30:01.952 "product_name": "Malloc disk", 00:30:01.952 "block_size": 512, 00:30:01.952 "num_blocks": 65536, 00:30:01.952 "uuid": "780fd367-94a6-44af-9736-372bcbb46930", 00:30:01.952 "assigned_rate_limits": { 00:30:01.952 "rw_ios_per_sec": 0, 00:30:01.952 "rw_mbytes_per_sec": 0, 00:30:01.952 "r_mbytes_per_sec": 0, 00:30:01.952 "w_mbytes_per_sec": 0 00:30:01.952 }, 00:30:01.952 "claimed": true, 00:30:01.952 "claim_type": "exclusive_write", 00:30:01.952 "zoned": false, 00:30:01.952 "supported_io_types": { 00:30:01.952 "read": true, 00:30:01.952 "write": true, 00:30:01.952 "unmap": true, 00:30:01.952 "flush": true, 00:30:01.952 "reset": true, 00:30:01.952 "nvme_admin": false, 00:30:01.952 "nvme_io": false, 00:30:01.952 "nvme_io_md": false, 00:30:01.952 "write_zeroes": true, 00:30:01.952 "zcopy": true, 00:30:01.952 "get_zone_info": false, 00:30:01.952 "zone_management": false, 00:30:01.952 "zone_append": false, 00:30:01.952 "compare": false, 00:30:01.952 "compare_and_write": false, 00:30:01.952 "abort": true, 00:30:01.952 "seek_hole": false, 00:30:01.952 "seek_data": false, 00:30:01.952 "copy": true, 00:30:01.952 "nvme_iov_md": false 00:30:01.952 }, 00:30:01.952 "memory_domains": [ 00:30:01.952 { 00:30:01.952 "dma_device_id": "system", 00:30:01.952 "dma_device_type": 1 00:30:01.952 }, 00:30:01.952 { 00:30:01.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:01.952 "dma_device_type": 2 00:30:01.952 } 00:30:01.952 ], 00:30:01.952 "driver_specific": {} 00:30:01.952 } 00:30:01.952 ] 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:01.952 00:13:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.952 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:02.211 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:02.211 "name": "Existed_Raid", 00:30:02.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.211 "strip_size_kb": 64, 00:30:02.211 "state": "configuring", 00:30:02.211 "raid_level": "raid5f", 00:30:02.211 "superblock": false, 00:30:02.211 "num_base_bdevs": 4, 00:30:02.211 "num_base_bdevs_discovered": 3, 00:30:02.211 "num_base_bdevs_operational": 4, 00:30:02.211 "base_bdevs_list": [ 00:30:02.211 { 00:30:02.211 "name": "BaseBdev1", 00:30:02.211 "uuid": "ac0ba910-0467-4eb9-841c-e1b786926dd1", 00:30:02.211 "is_configured": true, 00:30:02.211 "data_offset": 0, 00:30:02.211 "data_size": 65536 00:30:02.211 }, 00:30:02.211 { 00:30:02.211 "name": "BaseBdev2", 00:30:02.211 "uuid": "b6a83268-45ad-4fdf-aade-68299ed975df", 00:30:02.211 "is_configured": true, 00:30:02.211 "data_offset": 0, 00:30:02.211 "data_size": 65536 00:30:02.211 }, 00:30:02.211 { 00:30:02.211 "name": "BaseBdev3", 00:30:02.211 "uuid": "780fd367-94a6-44af-9736-372bcbb46930", 00:30:02.211 "is_configured": true, 00:30:02.211 "data_offset": 0, 00:30:02.211 "data_size": 65536 00:30:02.211 }, 00:30:02.211 { 00:30:02.211 "name": "BaseBdev4", 00:30:02.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.211 "is_configured": false, 00:30:02.211 "data_offset": 0, 00:30:02.211 "data_size": 0 00:30:02.211 } 00:30:02.211 ] 00:30:02.211 }' 00:30:02.211 00:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:02.211 00:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.470 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:30:02.729 [2024-07-25 00:13:58.398092] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:02.729 [2024-07-25 00:13:58.398167] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x516000007280 00:30:02.729 [2024-07-25 00:13:58.398181] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:30:02.729 [2024-07-25 00:13:58.398280] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:30:02.729 [2024-07-25 00:13:58.403710] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:30:02.729 [2024-07-25 00:13:58.403741] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:30:02.729 [2024-07-25 00:13:58.404062] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:02.729 BaseBdev4 00:30:02.729 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:30:02.729 00:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:30:02.729 00:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:02.729 00:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:02.729 00:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:02.729 00:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:02.729 00:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:02.988 [ 00:30:02.988 { 00:30:02.988 "name": "BaseBdev4", 00:30:02.988 "aliases": [ 00:30:02.988 "3348d4b2-d590-4021-abb7-fac541538cd7" 00:30:02.988 ], 00:30:02.988 "product_name": "Malloc disk", 00:30:02.988 "block_size": 512, 00:30:02.988 "num_blocks": 65536, 00:30:02.988 "uuid": "3348d4b2-d590-4021-abb7-fac541538cd7", 00:30:02.988 "assigned_rate_limits": { 00:30:02.988 "rw_ios_per_sec": 0, 00:30:02.988 "rw_mbytes_per_sec": 0, 00:30:02.988 "r_mbytes_per_sec": 0, 00:30:02.988 "w_mbytes_per_sec": 0 00:30:02.988 }, 00:30:02.988 "claimed": true, 00:30:02.988 "claim_type": "exclusive_write", 00:30:02.988 "zoned": false, 00:30:02.988 "supported_io_types": { 00:30:02.988 "read": true, 00:30:02.988 "write": true, 00:30:02.988 "unmap": true, 00:30:02.988 "flush": true, 00:30:02.988 "reset": true, 00:30:02.988 "nvme_admin": false, 00:30:02.988 "nvme_io": false, 00:30:02.988 "nvme_io_md": false, 00:30:02.988 "write_zeroes": true, 00:30:02.988 "zcopy": true, 00:30:02.988 "get_zone_info": false, 00:30:02.988 "zone_management": false, 00:30:02.988 "zone_append": false, 00:30:02.988 "compare": false, 00:30:02.988 "compare_and_write": false, 00:30:02.988 "abort": true, 00:30:02.988 "seek_hole": false, 00:30:02.988 "seek_data": false, 00:30:02.988 "copy": true, 00:30:02.988 "nvme_iov_md": false 00:30:02.988 }, 00:30:02.988 "memory_domains": [ 00:30:02.988 { 00:30:02.988 "dma_device_id": "system", 00:30:02.988 "dma_device_type": 1 00:30:02.988 }, 00:30:02.988 { 00:30:02.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:02.988 "dma_device_type": 2 00:30:02.988 } 00:30:02.988 ], 00:30:02.988 "driver_specific": {} 00:30:02.988 } 00:30:02.988 ] 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:02.988 00:13:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:02.988 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.246 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:03.246 "name": "Existed_Raid", 00:30:03.246 "uuid": "c859ef6c-48ef-40f5-ac30-c40d4a255b63", 00:30:03.246 "strip_size_kb": 64, 00:30:03.246 "state": "online", 00:30:03.246 "raid_level": "raid5f", 00:30:03.246 "superblock": false, 00:30:03.246 "num_base_bdevs": 4, 00:30:03.246 "num_base_bdevs_discovered": 4, 00:30:03.246 "num_base_bdevs_operational": 4, 00:30:03.246 "base_bdevs_list": [ 00:30:03.246 { 00:30:03.246 "name": "BaseBdev1", 00:30:03.246 "uuid": "ac0ba910-0467-4eb9-841c-e1b786926dd1", 00:30:03.246 "is_configured": true, 00:30:03.246 "data_offset": 0, 00:30:03.246 "data_size": 65536 00:30:03.246 }, 00:30:03.246 { 00:30:03.246 "name": "BaseBdev2", 00:30:03.246 "uuid": "b6a83268-45ad-4fdf-aade-68299ed975df", 00:30:03.246 "is_configured": true, 00:30:03.246 "data_offset": 0, 00:30:03.246 "data_size": 65536 00:30:03.246 }, 00:30:03.246 { 00:30:03.246 "name": "BaseBdev3", 00:30:03.246 "uuid": "780fd367-94a6-44af-9736-372bcbb46930", 00:30:03.246 "is_configured": true, 00:30:03.246 "data_offset": 0, 00:30:03.246 "data_size": 65536 00:30:03.246 }, 00:30:03.246 { 00:30:03.246 "name": "BaseBdev4", 00:30:03.246 "uuid": "3348d4b2-d590-4021-abb7-fac541538cd7", 00:30:03.246 "is_configured": true, 00:30:03.246 "data_offset": 0, 00:30:03.246 "data_size": 65536 00:30:03.246 } 00:30:03.247 ] 00:30:03.247 }' 00:30:03.247 00:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:03.247 00:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.504 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:30:03.504 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:30:03.504 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:03.504 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:03.504 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:03.504 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:30:03.504 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:03.504 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:03.763 [2024-07-25 00:13:59.534258] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:03.763 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:03.763 "name": "Existed_Raid", 00:30:03.763 "aliases": [ 00:30:03.763 "c859ef6c-48ef-40f5-ac30-c40d4a255b63" 00:30:03.763 ], 00:30:03.763 "product_name": "Raid Volume", 00:30:03.763 "block_size": 512, 00:30:03.763 "num_blocks": 196608, 00:30:03.763 "uuid": "c859ef6c-48ef-40f5-ac30-c40d4a255b63", 00:30:03.763 "assigned_rate_limits": { 00:30:03.763 "rw_ios_per_sec": 0, 00:30:03.763 "rw_mbytes_per_sec": 0, 00:30:03.763 "r_mbytes_per_sec": 0, 00:30:03.763 "w_mbytes_per_sec": 0 00:30:03.763 }, 00:30:03.763 "claimed": false, 00:30:03.763 "zoned": false, 00:30:03.763 "supported_io_types": { 00:30:03.763 "read": true, 00:30:03.763 "write": true, 00:30:03.763 "unmap": false, 00:30:03.763 "flush": false, 00:30:03.763 "reset": true, 00:30:03.763 "nvme_admin": false, 00:30:03.763 "nvme_io": false, 00:30:03.763 "nvme_io_md": false, 00:30:03.763 "write_zeroes": true, 00:30:03.763 "zcopy": false, 00:30:03.763 "get_zone_info": false, 00:30:03.763 "zone_management": false, 00:30:03.763 "zone_append": false, 00:30:03.763 "compare": false, 00:30:03.763 "compare_and_write": false, 00:30:03.763 "abort": false, 00:30:03.763 "seek_hole": false, 00:30:03.763 "seek_data": false, 00:30:03.763 "copy": false, 00:30:03.763 "nvme_iov_md": false 00:30:03.763 }, 00:30:03.763 "driver_specific": { 00:30:03.763 "raid": { 00:30:03.763 "uuid": "c859ef6c-48ef-40f5-ac30-c40d4a255b63", 00:30:03.763 "strip_size_kb": 64, 00:30:03.763 "state": "online", 00:30:03.763 "raid_level": "raid5f", 00:30:03.763 "superblock": false, 00:30:03.763 "num_base_bdevs": 4, 00:30:03.763 "num_base_bdevs_discovered": 4, 00:30:03.763 "num_base_bdevs_operational": 4, 00:30:03.763 "base_bdevs_list": [ 00:30:03.763 { 00:30:03.763 "name": "BaseBdev1", 00:30:03.763 "uuid": "ac0ba910-0467-4eb9-841c-e1b786926dd1", 00:30:03.763 "is_configured": true, 00:30:03.763 "data_offset": 0, 00:30:03.763 "data_size": 65536 00:30:03.763 }, 00:30:03.763 { 00:30:03.763 "name": "BaseBdev2", 00:30:03.763 "uuid": "b6a83268-45ad-4fdf-aade-68299ed975df", 00:30:03.763 "is_configured": true, 00:30:03.763 "data_offset": 0, 00:30:03.763 "data_size": 65536 00:30:03.763 }, 00:30:03.763 { 00:30:03.763 "name": "BaseBdev3", 00:30:03.763 "uuid": "780fd367-94a6-44af-9736-372bcbb46930", 00:30:03.763 "is_configured": true, 00:30:03.763 "data_offset": 0, 00:30:03.763 "data_size": 65536 00:30:03.763 }, 00:30:03.763 { 00:30:03.763 "name": "BaseBdev4", 00:30:03.763 "uuid": "3348d4b2-d590-4021-abb7-fac541538cd7", 00:30:03.763 "is_configured": true, 00:30:03.763 "data_offset": 0, 00:30:03.763 "data_size": 65536 00:30:03.763 } 
00:30:03.763 ] 00:30:03.763 } 00:30:03.763 } 00:30:03.763 }' 00:30:03.763 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:03.763 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:30:03.763 BaseBdev2 00:30:03.763 BaseBdev3 00:30:03.763 BaseBdev4' 00:30:03.763 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:03.763 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:03.763 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:04.021 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:04.021 "name": "BaseBdev1", 00:30:04.021 "aliases": [ 00:30:04.021 "ac0ba910-0467-4eb9-841c-e1b786926dd1" 00:30:04.021 ], 00:30:04.021 "product_name": "Malloc disk", 00:30:04.021 "block_size": 512, 00:30:04.022 "num_blocks": 65536, 00:30:04.022 "uuid": "ac0ba910-0467-4eb9-841c-e1b786926dd1", 00:30:04.022 "assigned_rate_limits": { 00:30:04.022 "rw_ios_per_sec": 0, 00:30:04.022 "rw_mbytes_per_sec": 0, 00:30:04.022 "r_mbytes_per_sec": 0, 00:30:04.022 "w_mbytes_per_sec": 0 00:30:04.022 }, 00:30:04.022 "claimed": true, 00:30:04.022 "claim_type": "exclusive_write", 00:30:04.022 "zoned": false, 00:30:04.022 "supported_io_types": { 00:30:04.022 "read": true, 00:30:04.022 "write": true, 00:30:04.022 "unmap": true, 00:30:04.022 "flush": true, 00:30:04.022 "reset": true, 00:30:04.022 "nvme_admin": false, 00:30:04.022 "nvme_io": false, 00:30:04.022 "nvme_io_md": false, 00:30:04.022 "write_zeroes": true, 00:30:04.022 "zcopy": true, 00:30:04.022 "get_zone_info": false, 00:30:04.022 "zone_management": false, 00:30:04.022 "zone_append": false, 00:30:04.022 "compare": false, 00:30:04.022 "compare_and_write": false, 00:30:04.022 "abort": true, 00:30:04.022 "seek_hole": false, 00:30:04.022 "seek_data": false, 00:30:04.022 "copy": true, 00:30:04.022 "nvme_iov_md": false 00:30:04.022 }, 00:30:04.022 "memory_domains": [ 00:30:04.022 { 00:30:04.022 "dma_device_id": "system", 00:30:04.022 "dma_device_type": 1 00:30:04.022 }, 00:30:04.022 { 00:30:04.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:04.022 "dma_device_type": 2 00:30:04.022 } 00:30:04.022 ], 00:30:04.022 "driver_specific": {} 00:30:04.022 }' 00:30:04.022 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:04.022 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:04.022 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:04.022 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:04.022 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:04.022 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:04.022 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:04.022 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:04.022 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:04.022 00:13:59 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:04.280 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:04.280 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:04.280 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:04.280 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:04.280 00:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:04.538 "name": "BaseBdev2", 00:30:04.538 "aliases": [ 00:30:04.538 "b6a83268-45ad-4fdf-aade-68299ed975df" 00:30:04.538 ], 00:30:04.538 "product_name": "Malloc disk", 00:30:04.538 "block_size": 512, 00:30:04.538 "num_blocks": 65536, 00:30:04.538 "uuid": "b6a83268-45ad-4fdf-aade-68299ed975df", 00:30:04.538 "assigned_rate_limits": { 00:30:04.538 "rw_ios_per_sec": 0, 00:30:04.538 "rw_mbytes_per_sec": 0, 00:30:04.538 "r_mbytes_per_sec": 0, 00:30:04.538 "w_mbytes_per_sec": 0 00:30:04.538 }, 00:30:04.538 "claimed": true, 00:30:04.538 "claim_type": "exclusive_write", 00:30:04.538 "zoned": false, 00:30:04.538 "supported_io_types": { 00:30:04.538 "read": true, 00:30:04.538 "write": true, 00:30:04.538 "unmap": true, 00:30:04.538 "flush": true, 00:30:04.538 "reset": true, 00:30:04.538 "nvme_admin": false, 00:30:04.538 "nvme_io": false, 00:30:04.538 "nvme_io_md": false, 00:30:04.538 "write_zeroes": true, 00:30:04.538 "zcopy": true, 00:30:04.538 "get_zone_info": false, 00:30:04.538 "zone_management": false, 00:30:04.538 "zone_append": false, 00:30:04.538 "compare": false, 00:30:04.538 "compare_and_write": false, 00:30:04.538 "abort": true, 00:30:04.538 "seek_hole": false, 00:30:04.538 "seek_data": false, 00:30:04.538 "copy": true, 00:30:04.538 "nvme_iov_md": false 00:30:04.538 }, 00:30:04.538 "memory_domains": [ 00:30:04.538 { 00:30:04.538 "dma_device_id": "system", 00:30:04.538 "dma_device_type": 1 00:30:04.538 }, 00:30:04.538 { 00:30:04.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:04.538 "dma_device_type": 2 00:30:04.538 } 00:30:04.538 ], 00:30:04.538 "driver_specific": {} 00:30:04.538 }' 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:04.538 00:14:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:04.538 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:04.796 "name": "BaseBdev3", 00:30:04.796 "aliases": [ 00:30:04.796 "780fd367-94a6-44af-9736-372bcbb46930" 00:30:04.796 ], 00:30:04.796 "product_name": "Malloc disk", 00:30:04.796 "block_size": 512, 00:30:04.796 "num_blocks": 65536, 00:30:04.796 "uuid": "780fd367-94a6-44af-9736-372bcbb46930", 00:30:04.796 "assigned_rate_limits": { 00:30:04.796 "rw_ios_per_sec": 0, 00:30:04.796 "rw_mbytes_per_sec": 0, 00:30:04.796 "r_mbytes_per_sec": 0, 00:30:04.796 "w_mbytes_per_sec": 0 00:30:04.796 }, 00:30:04.796 "claimed": true, 00:30:04.796 "claim_type": "exclusive_write", 00:30:04.796 "zoned": false, 00:30:04.796 "supported_io_types": { 00:30:04.796 "read": true, 00:30:04.796 "write": true, 00:30:04.796 "unmap": true, 00:30:04.796 "flush": true, 00:30:04.796 "reset": true, 00:30:04.796 "nvme_admin": false, 00:30:04.796 "nvme_io": false, 00:30:04.796 "nvme_io_md": false, 00:30:04.796 "write_zeroes": true, 00:30:04.796 "zcopy": true, 00:30:04.796 "get_zone_info": false, 00:30:04.796 "zone_management": false, 00:30:04.796 "zone_append": false, 00:30:04.796 "compare": false, 00:30:04.796 "compare_and_write": false, 00:30:04.796 "abort": true, 00:30:04.796 "seek_hole": false, 00:30:04.796 "seek_data": false, 00:30:04.796 "copy": true, 00:30:04.796 "nvme_iov_md": false 00:30:04.796 }, 00:30:04.796 "memory_domains": [ 00:30:04.796 { 00:30:04.796 "dma_device_id": "system", 00:30:04.796 "dma_device_type": 1 00:30:04.796 }, 00:30:04.796 { 00:30:04.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:04.796 "dma_device_type": 2 00:30:04.796 } 00:30:04.796 ], 00:30:04.796 "driver_specific": {} 00:30:04.796 }' 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:30:04.796 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:05.055 "name": "BaseBdev4", 00:30:05.055 "aliases": [ 00:30:05.055 "3348d4b2-d590-4021-abb7-fac541538cd7" 00:30:05.055 ], 00:30:05.055 "product_name": "Malloc disk", 00:30:05.055 "block_size": 512, 00:30:05.055 "num_blocks": 65536, 00:30:05.055 "uuid": "3348d4b2-d590-4021-abb7-fac541538cd7", 00:30:05.055 "assigned_rate_limits": { 00:30:05.055 "rw_ios_per_sec": 0, 00:30:05.055 "rw_mbytes_per_sec": 0, 00:30:05.055 "r_mbytes_per_sec": 0, 00:30:05.055 "w_mbytes_per_sec": 0 00:30:05.055 }, 00:30:05.055 "claimed": true, 00:30:05.055 "claim_type": "exclusive_write", 00:30:05.055 "zoned": false, 00:30:05.055 "supported_io_types": { 00:30:05.055 "read": true, 00:30:05.055 "write": true, 00:30:05.055 "unmap": true, 00:30:05.055 "flush": true, 00:30:05.055 "reset": true, 00:30:05.055 "nvme_admin": false, 00:30:05.055 "nvme_io": false, 00:30:05.055 "nvme_io_md": false, 00:30:05.055 "write_zeroes": true, 00:30:05.055 "zcopy": true, 00:30:05.055 "get_zone_info": false, 00:30:05.055 "zone_management": false, 00:30:05.055 "zone_append": false, 00:30:05.055 "compare": false, 00:30:05.055 "compare_and_write": false, 00:30:05.055 "abort": true, 00:30:05.055 "seek_hole": false, 00:30:05.055 "seek_data": false, 00:30:05.055 "copy": true, 00:30:05.055 "nvme_iov_md": false 00:30:05.055 }, 00:30:05.055 "memory_domains": [ 00:30:05.055 { 00:30:05.055 "dma_device_id": "system", 00:30:05.055 "dma_device_type": 1 00:30:05.055 }, 00:30:05.055 { 00:30:05.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:05.055 "dma_device_type": 2 00:30:05.055 } 00:30:05.055 ], 00:30:05.055 "driver_specific": {} 00:30:05.055 }' 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:05.055 00:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:05.313 [2024-07-25 00:14:01.130520] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.571 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:05.829 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:05.829 "name": "Existed_Raid", 00:30:05.829 "uuid": "c859ef6c-48ef-40f5-ac30-c40d4a255b63", 00:30:05.829 "strip_size_kb": 64, 00:30:05.829 "state": "online", 00:30:05.829 "raid_level": "raid5f", 00:30:05.829 "superblock": false, 00:30:05.829 "num_base_bdevs": 4, 00:30:05.829 "num_base_bdevs_discovered": 3, 00:30:05.829 "num_base_bdevs_operational": 3, 00:30:05.829 "base_bdevs_list": [ 00:30:05.829 { 00:30:05.829 "name": null, 00:30:05.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.829 "is_configured": false, 00:30:05.829 "data_offset": 0, 00:30:05.829 "data_size": 65536 00:30:05.829 }, 00:30:05.829 { 00:30:05.829 "name": "BaseBdev2", 00:30:05.829 "uuid": "b6a83268-45ad-4fdf-aade-68299ed975df", 00:30:05.829 "is_configured": true, 00:30:05.829 "data_offset": 0, 00:30:05.829 "data_size": 65536 00:30:05.829 }, 00:30:05.829 { 00:30:05.829 "name": "BaseBdev3", 00:30:05.829 "uuid": "780fd367-94a6-44af-9736-372bcbb46930", 00:30:05.829 "is_configured": true, 00:30:05.829 "data_offset": 0, 00:30:05.829 "data_size": 65536 00:30:05.829 }, 00:30:05.829 { 00:30:05.829 "name": "BaseBdev4", 00:30:05.829 "uuid": "3348d4b2-d590-4021-abb7-fac541538cd7", 00:30:05.829 "is_configured": true, 00:30:05.829 "data_offset": 0, 00:30:05.829 "data_size": 65536 00:30:05.829 } 00:30:05.829 ] 00:30:05.829 }' 00:30:05.829 00:14:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:05.829 00:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.086 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:30:06.086 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:06.086 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.086 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:06.343 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:06.343 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:06.343 00:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:06.601 [2024-07-25 00:14:02.229872] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:06.601 [2024-07-25 00:14:02.229995] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:06.601 [2024-07-25 00:14:02.291884] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:06.601 00:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:06.601 00:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:06.601 00:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.601 00:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:06.859 00:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:06.859 00:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:06.859 00:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:30:07.117 [2024-07-25 00:14:02.736107] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:07.117 00:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:07.117 00:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:07.117 00:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:07.117 00:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.375 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:07.375 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:07.375 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:30:07.633 [2024-07-25 00:14:03.282054] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev4 00:30:07.633 [2024-07-25 00:14:03.282130] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:30:07.633 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:07.633 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:07.633 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.633 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:07.890 BaseBdev2 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:07.890 00:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:08.148 00:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:08.406 [ 00:30:08.406 { 00:30:08.406 "name": "BaseBdev2", 00:30:08.406 "aliases": [ 00:30:08.406 "cb910949-5f59-44ef-ac8b-ed20309a239c" 00:30:08.406 ], 00:30:08.406 "product_name": "Malloc disk", 00:30:08.406 "block_size": 512, 00:30:08.406 "num_blocks": 65536, 00:30:08.406 "uuid": "cb910949-5f59-44ef-ac8b-ed20309a239c", 00:30:08.406 "assigned_rate_limits": { 00:30:08.406 "rw_ios_per_sec": 0, 00:30:08.406 "rw_mbytes_per_sec": 0, 00:30:08.406 "r_mbytes_per_sec": 0, 00:30:08.406 "w_mbytes_per_sec": 0 00:30:08.406 }, 00:30:08.406 "claimed": false, 00:30:08.406 "zoned": false, 00:30:08.406 "supported_io_types": { 00:30:08.406 "read": true, 00:30:08.406 "write": true, 00:30:08.406 "unmap": true, 00:30:08.406 "flush": true, 00:30:08.406 "reset": true, 00:30:08.406 "nvme_admin": false, 00:30:08.406 "nvme_io": false, 00:30:08.406 "nvme_io_md": false, 00:30:08.406 "write_zeroes": true, 00:30:08.406 "zcopy": true, 00:30:08.406 "get_zone_info": false, 00:30:08.406 "zone_management": false, 00:30:08.406 "zone_append": false, 00:30:08.406 
"compare": false, 00:30:08.406 "compare_and_write": false, 00:30:08.406 "abort": true, 00:30:08.406 "seek_hole": false, 00:30:08.406 "seek_data": false, 00:30:08.406 "copy": true, 00:30:08.406 "nvme_iov_md": false 00:30:08.406 }, 00:30:08.406 "memory_domains": [ 00:30:08.406 { 00:30:08.406 "dma_device_id": "system", 00:30:08.406 "dma_device_type": 1 00:30:08.406 }, 00:30:08.406 { 00:30:08.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:08.406 "dma_device_type": 2 00:30:08.406 } 00:30:08.406 ], 00:30:08.406 "driver_specific": {} 00:30:08.406 } 00:30:08.406 ] 00:30:08.406 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:08.406 00:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:08.406 00:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:08.406 00:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:08.665 BaseBdev3 00:30:08.665 00:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:30:08.665 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:30:08.665 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:08.665 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:08.665 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:08.665 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:08.665 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:08.924 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:08.924 [ 00:30:08.924 { 00:30:08.924 "name": "BaseBdev3", 00:30:08.924 "aliases": [ 00:30:08.924 "7f9d1846-95b2-46b6-b22d-01bfc6103470" 00:30:08.924 ], 00:30:08.924 "product_name": "Malloc disk", 00:30:08.924 "block_size": 512, 00:30:08.924 "num_blocks": 65536, 00:30:08.924 "uuid": "7f9d1846-95b2-46b6-b22d-01bfc6103470", 00:30:08.924 "assigned_rate_limits": { 00:30:08.924 "rw_ios_per_sec": 0, 00:30:08.924 "rw_mbytes_per_sec": 0, 00:30:08.924 "r_mbytes_per_sec": 0, 00:30:08.924 "w_mbytes_per_sec": 0 00:30:08.924 }, 00:30:08.924 "claimed": false, 00:30:08.924 "zoned": false, 00:30:08.924 "supported_io_types": { 00:30:08.924 "read": true, 00:30:08.924 "write": true, 00:30:08.924 "unmap": true, 00:30:08.924 "flush": true, 00:30:08.924 "reset": true, 00:30:08.924 "nvme_admin": false, 00:30:08.924 "nvme_io": false, 00:30:08.924 "nvme_io_md": false, 00:30:08.924 "write_zeroes": true, 00:30:08.924 "zcopy": true, 00:30:08.924 "get_zone_info": false, 00:30:08.924 "zone_management": false, 00:30:08.924 "zone_append": false, 00:30:08.924 "compare": false, 00:30:08.924 "compare_and_write": false, 00:30:08.924 "abort": true, 00:30:08.924 "seek_hole": false, 00:30:08.924 "seek_data": false, 00:30:08.924 "copy": true, 00:30:08.924 "nvme_iov_md": false 00:30:08.924 }, 00:30:08.924 "memory_domains": [ 00:30:08.924 { 00:30:08.924 "dma_device_id": "system", 
00:30:08.924 "dma_device_type": 1 00:30:08.924 }, 00:30:08.924 { 00:30:08.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:08.924 "dma_device_type": 2 00:30:08.924 } 00:30:08.924 ], 00:30:08.924 "driver_specific": {} 00:30:08.924 } 00:30:08.924 ] 00:30:08.924 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:08.924 00:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:08.924 00:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:08.924 00:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:30:09.183 BaseBdev4 00:30:09.183 00:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:30:09.183 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:30:09.183 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:09.183 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:09.183 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:09.183 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:09.183 00:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:09.442 00:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:09.701 [ 00:30:09.701 { 00:30:09.701 "name": "BaseBdev4", 00:30:09.701 "aliases": [ 00:30:09.701 "7a89dc31-f4e2-4452-8db1-89f9231381a6" 00:30:09.701 ], 00:30:09.701 "product_name": "Malloc disk", 00:30:09.701 "block_size": 512, 00:30:09.701 "num_blocks": 65536, 00:30:09.701 "uuid": "7a89dc31-f4e2-4452-8db1-89f9231381a6", 00:30:09.701 "assigned_rate_limits": { 00:30:09.701 "rw_ios_per_sec": 0, 00:30:09.701 "rw_mbytes_per_sec": 0, 00:30:09.701 "r_mbytes_per_sec": 0, 00:30:09.701 "w_mbytes_per_sec": 0 00:30:09.701 }, 00:30:09.701 "claimed": false, 00:30:09.701 "zoned": false, 00:30:09.701 "supported_io_types": { 00:30:09.701 "read": true, 00:30:09.701 "write": true, 00:30:09.701 "unmap": true, 00:30:09.701 "flush": true, 00:30:09.701 "reset": true, 00:30:09.701 "nvme_admin": false, 00:30:09.701 "nvme_io": false, 00:30:09.701 "nvme_io_md": false, 00:30:09.701 "write_zeroes": true, 00:30:09.701 "zcopy": true, 00:30:09.701 "get_zone_info": false, 00:30:09.701 "zone_management": false, 00:30:09.701 "zone_append": false, 00:30:09.701 "compare": false, 00:30:09.701 "compare_and_write": false, 00:30:09.701 "abort": true, 00:30:09.701 "seek_hole": false, 00:30:09.701 "seek_data": false, 00:30:09.701 "copy": true, 00:30:09.701 "nvme_iov_md": false 00:30:09.701 }, 00:30:09.701 "memory_domains": [ 00:30:09.701 { 00:30:09.701 "dma_device_id": "system", 00:30:09.701 "dma_device_type": 1 00:30:09.701 }, 00:30:09.701 { 00:30:09.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:09.701 "dma_device_type": 2 00:30:09.701 } 00:30:09.701 ], 00:30:09.701 "driver_specific": {} 00:30:09.701 } 00:30:09.701 ] 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:09.701 [2024-07-25 00:14:05.522243] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:09.701 [2024-07-25 00:14:05.522293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:09.701 [2024-07-25 00:14:05.522327] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:09.701 [2024-07-25 00:14:05.524161] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:09.701 [2024-07-25 00:14:05.524221] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.701 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:09.960 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:09.960 "name": "Existed_Raid", 00:30:09.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.960 "strip_size_kb": 64, 00:30:09.960 "state": "configuring", 00:30:09.960 "raid_level": "raid5f", 00:30:09.960 "superblock": false, 00:30:09.960 "num_base_bdevs": 4, 00:30:09.960 "num_base_bdevs_discovered": 3, 00:30:09.960 "num_base_bdevs_operational": 4, 00:30:09.960 "base_bdevs_list": [ 00:30:09.960 { 00:30:09.960 "name": "BaseBdev1", 00:30:09.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.960 "is_configured": false, 00:30:09.960 "data_offset": 0, 00:30:09.960 "data_size": 0 00:30:09.960 }, 00:30:09.960 { 00:30:09.960 "name": "BaseBdev2", 00:30:09.960 "uuid": "cb910949-5f59-44ef-ac8b-ed20309a239c", 00:30:09.960 "is_configured": true, 00:30:09.960 "data_offset": 0, 
00:30:09.960 "data_size": 65536 00:30:09.960 }, 00:30:09.960 { 00:30:09.960 "name": "BaseBdev3", 00:30:09.960 "uuid": "7f9d1846-95b2-46b6-b22d-01bfc6103470", 00:30:09.960 "is_configured": true, 00:30:09.960 "data_offset": 0, 00:30:09.960 "data_size": 65536 00:30:09.960 }, 00:30:09.960 { 00:30:09.960 "name": "BaseBdev4", 00:30:09.960 "uuid": "7a89dc31-f4e2-4452-8db1-89f9231381a6", 00:30:09.960 "is_configured": true, 00:30:09.960 "data_offset": 0, 00:30:09.960 "data_size": 65536 00:30:09.960 } 00:30:09.960 ] 00:30:09.960 }' 00:30:09.960 00:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:09.960 00:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:10.528 [2024-07-25 00:14:06.366493] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:10.528 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.798 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:10.798 "name": "Existed_Raid", 00:30:10.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.798 "strip_size_kb": 64, 00:30:10.798 "state": "configuring", 00:30:10.798 "raid_level": "raid5f", 00:30:10.798 "superblock": false, 00:30:10.798 "num_base_bdevs": 4, 00:30:10.798 "num_base_bdevs_discovered": 2, 00:30:10.798 "num_base_bdevs_operational": 4, 00:30:10.798 "base_bdevs_list": [ 00:30:10.798 { 00:30:10.798 "name": "BaseBdev1", 00:30:10.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.798 "is_configured": false, 00:30:10.798 "data_offset": 0, 00:30:10.798 "data_size": 0 00:30:10.798 }, 00:30:10.798 { 00:30:10.798 "name": null, 00:30:10.798 "uuid": "cb910949-5f59-44ef-ac8b-ed20309a239c", 00:30:10.798 "is_configured": false, 00:30:10.798 "data_offset": 0, 00:30:10.798 "data_size": 65536 00:30:10.798 }, 00:30:10.798 { 00:30:10.798 "name": "BaseBdev3", 00:30:10.798 "uuid": 
"7f9d1846-95b2-46b6-b22d-01bfc6103470", 00:30:10.798 "is_configured": true, 00:30:10.798 "data_offset": 0, 00:30:10.798 "data_size": 65536 00:30:10.798 }, 00:30:10.798 { 00:30:10.798 "name": "BaseBdev4", 00:30:10.798 "uuid": "7a89dc31-f4e2-4452-8db1-89f9231381a6", 00:30:10.798 "is_configured": true, 00:30:10.798 "data_offset": 0, 00:30:10.798 "data_size": 65536 00:30:10.798 } 00:30:10.798 ] 00:30:10.798 }' 00:30:10.798 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:10.798 00:14:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.072 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:11.072 00:14:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.331 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:30:11.331 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:11.590 [2024-07-25 00:14:07.334207] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:11.590 BaseBdev1 00:30:11.590 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:30:11.590 00:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:11.590 00:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:11.590 00:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:11.590 00:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:11.590 00:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:11.590 00:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:11.855 00:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:12.120 [ 00:30:12.120 { 00:30:12.120 "name": "BaseBdev1", 00:30:12.120 "aliases": [ 00:30:12.120 "41a4fc99-71c9-454a-9bc1-88e05a98a274" 00:30:12.120 ], 00:30:12.120 "product_name": "Malloc disk", 00:30:12.120 "block_size": 512, 00:30:12.120 "num_blocks": 65536, 00:30:12.120 "uuid": "41a4fc99-71c9-454a-9bc1-88e05a98a274", 00:30:12.120 "assigned_rate_limits": { 00:30:12.120 "rw_ios_per_sec": 0, 00:30:12.120 "rw_mbytes_per_sec": 0, 00:30:12.120 "r_mbytes_per_sec": 0, 00:30:12.120 "w_mbytes_per_sec": 0 00:30:12.120 }, 00:30:12.120 "claimed": true, 00:30:12.120 "claim_type": "exclusive_write", 00:30:12.120 "zoned": false, 00:30:12.120 "supported_io_types": { 00:30:12.120 "read": true, 00:30:12.120 "write": true, 00:30:12.120 "unmap": true, 00:30:12.120 "flush": true, 00:30:12.120 "reset": true, 00:30:12.120 "nvme_admin": false, 00:30:12.120 "nvme_io": false, 00:30:12.120 "nvme_io_md": false, 00:30:12.120 "write_zeroes": true, 00:30:12.120 "zcopy": true, 00:30:12.120 "get_zone_info": false, 00:30:12.120 "zone_management": false, 00:30:12.120 "zone_append": false, 
00:30:12.120 "compare": false, 00:30:12.120 "compare_and_write": false, 00:30:12.120 "abort": true, 00:30:12.120 "seek_hole": false, 00:30:12.120 "seek_data": false, 00:30:12.120 "copy": true, 00:30:12.120 "nvme_iov_md": false 00:30:12.120 }, 00:30:12.120 "memory_domains": [ 00:30:12.120 { 00:30:12.120 "dma_device_id": "system", 00:30:12.120 "dma_device_type": 1 00:30:12.120 }, 00:30:12.120 { 00:30:12.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.120 "dma_device_type": 2 00:30:12.120 } 00:30:12.120 ], 00:30:12.120 "driver_specific": {} 00:30:12.120 } 00:30:12.120 ] 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.120 00:14:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:12.378 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:12.378 "name": "Existed_Raid", 00:30:12.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.378 "strip_size_kb": 64, 00:30:12.378 "state": "configuring", 00:30:12.378 "raid_level": "raid5f", 00:30:12.378 "superblock": false, 00:30:12.378 "num_base_bdevs": 4, 00:30:12.378 "num_base_bdevs_discovered": 3, 00:30:12.378 "num_base_bdevs_operational": 4, 00:30:12.378 "base_bdevs_list": [ 00:30:12.378 { 00:30:12.378 "name": "BaseBdev1", 00:30:12.378 "uuid": "41a4fc99-71c9-454a-9bc1-88e05a98a274", 00:30:12.378 "is_configured": true, 00:30:12.378 "data_offset": 0, 00:30:12.378 "data_size": 65536 00:30:12.378 }, 00:30:12.378 { 00:30:12.378 "name": null, 00:30:12.378 "uuid": "cb910949-5f59-44ef-ac8b-ed20309a239c", 00:30:12.378 "is_configured": false, 00:30:12.378 "data_offset": 0, 00:30:12.378 "data_size": 65536 00:30:12.378 }, 00:30:12.378 { 00:30:12.378 "name": "BaseBdev3", 00:30:12.378 "uuid": "7f9d1846-95b2-46b6-b22d-01bfc6103470", 00:30:12.378 "is_configured": true, 00:30:12.378 "data_offset": 0, 00:30:12.378 "data_size": 65536 00:30:12.378 }, 00:30:12.378 { 00:30:12.378 "name": "BaseBdev4", 00:30:12.378 "uuid": "7a89dc31-f4e2-4452-8db1-89f9231381a6", 00:30:12.378 "is_configured": true, 00:30:12.378 "data_offset": 0, 00:30:12.378 
"data_size": 65536 00:30:12.378 } 00:30:12.378 ] 00:30:12.378 }' 00:30:12.378 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:12.378 00:14:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.636 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.636 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:12.894 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:30:12.894 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:30:13.152 [2024-07-25 00:14:08.798644] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.152 00:14:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:13.410 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:13.410 "name": "Existed_Raid", 00:30:13.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.410 "strip_size_kb": 64, 00:30:13.410 "state": "configuring", 00:30:13.410 "raid_level": "raid5f", 00:30:13.410 "superblock": false, 00:30:13.410 "num_base_bdevs": 4, 00:30:13.410 "num_base_bdevs_discovered": 2, 00:30:13.410 "num_base_bdevs_operational": 4, 00:30:13.410 "base_bdevs_list": [ 00:30:13.410 { 00:30:13.410 "name": "BaseBdev1", 00:30:13.410 "uuid": "41a4fc99-71c9-454a-9bc1-88e05a98a274", 00:30:13.410 "is_configured": true, 00:30:13.410 "data_offset": 0, 00:30:13.410 "data_size": 65536 00:30:13.410 }, 00:30:13.410 { 00:30:13.410 "name": null, 00:30:13.410 "uuid": "cb910949-5f59-44ef-ac8b-ed20309a239c", 00:30:13.410 "is_configured": false, 00:30:13.410 "data_offset": 0, 00:30:13.410 "data_size": 65536 00:30:13.410 }, 00:30:13.410 { 00:30:13.410 "name": null, 00:30:13.410 "uuid": 
"7f9d1846-95b2-46b6-b22d-01bfc6103470", 00:30:13.410 "is_configured": false, 00:30:13.410 "data_offset": 0, 00:30:13.410 "data_size": 65536 00:30:13.410 }, 00:30:13.410 { 00:30:13.410 "name": "BaseBdev4", 00:30:13.410 "uuid": "7a89dc31-f4e2-4452-8db1-89f9231381a6", 00:30:13.410 "is_configured": true, 00:30:13.410 "data_offset": 0, 00:30:13.410 "data_size": 65536 00:30:13.410 } 00:30:13.410 ] 00:30:13.410 }' 00:30:13.410 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:13.410 00:14:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.668 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.668 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:13.926 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:30:13.926 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:13.926 [2024-07-25 00:14:09.794941] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.184 00:14:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:14.184 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:14.184 "name": "Existed_Raid", 00:30:14.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:14.184 "strip_size_kb": 64, 00:30:14.184 "state": "configuring", 00:30:14.184 "raid_level": "raid5f", 00:30:14.184 "superblock": false, 00:30:14.184 "num_base_bdevs": 4, 00:30:14.184 "num_base_bdevs_discovered": 3, 00:30:14.184 "num_base_bdevs_operational": 4, 00:30:14.184 "base_bdevs_list": [ 00:30:14.184 { 00:30:14.184 "name": "BaseBdev1", 00:30:14.184 "uuid": "41a4fc99-71c9-454a-9bc1-88e05a98a274", 00:30:14.184 "is_configured": true, 00:30:14.184 
"data_offset": 0, 00:30:14.184 "data_size": 65536 00:30:14.184 }, 00:30:14.184 { 00:30:14.184 "name": null, 00:30:14.184 "uuid": "cb910949-5f59-44ef-ac8b-ed20309a239c", 00:30:14.184 "is_configured": false, 00:30:14.184 "data_offset": 0, 00:30:14.184 "data_size": 65536 00:30:14.184 }, 00:30:14.184 { 00:30:14.184 "name": "BaseBdev3", 00:30:14.184 "uuid": "7f9d1846-95b2-46b6-b22d-01bfc6103470", 00:30:14.184 "is_configured": true, 00:30:14.184 "data_offset": 0, 00:30:14.184 "data_size": 65536 00:30:14.184 }, 00:30:14.184 { 00:30:14.184 "name": "BaseBdev4", 00:30:14.184 "uuid": "7a89dc31-f4e2-4452-8db1-89f9231381a6", 00:30:14.184 "is_configured": true, 00:30:14.184 "data_offset": 0, 00:30:14.184 "data_size": 65536 00:30:14.184 } 00:30:14.184 ] 00:30:14.184 }' 00:30:14.184 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:14.184 00:14:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.750 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:14.750 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:15.009 [2024-07-25 00:14:10.787162] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:15.009 00:14:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.267 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:15.267 "name": "Existed_Raid", 00:30:15.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.267 "strip_size_kb": 64, 00:30:15.267 "state": "configuring", 00:30:15.267 "raid_level": "raid5f", 00:30:15.267 "superblock": false, 00:30:15.267 
"num_base_bdevs": 4, 00:30:15.267 "num_base_bdevs_discovered": 2, 00:30:15.267 "num_base_bdevs_operational": 4, 00:30:15.267 "base_bdevs_list": [ 00:30:15.267 { 00:30:15.267 "name": null, 00:30:15.267 "uuid": "41a4fc99-71c9-454a-9bc1-88e05a98a274", 00:30:15.267 "is_configured": false, 00:30:15.267 "data_offset": 0, 00:30:15.267 "data_size": 65536 00:30:15.267 }, 00:30:15.267 { 00:30:15.267 "name": null, 00:30:15.267 "uuid": "cb910949-5f59-44ef-ac8b-ed20309a239c", 00:30:15.267 "is_configured": false, 00:30:15.267 "data_offset": 0, 00:30:15.267 "data_size": 65536 00:30:15.267 }, 00:30:15.268 { 00:30:15.268 "name": "BaseBdev3", 00:30:15.268 "uuid": "7f9d1846-95b2-46b6-b22d-01bfc6103470", 00:30:15.268 "is_configured": true, 00:30:15.268 "data_offset": 0, 00:30:15.268 "data_size": 65536 00:30:15.268 }, 00:30:15.268 { 00:30:15.268 "name": "BaseBdev4", 00:30:15.268 "uuid": "7a89dc31-f4e2-4452-8db1-89f9231381a6", 00:30:15.268 "is_configured": true, 00:30:15.268 "data_offset": 0, 00:30:15.268 "data_size": 65536 00:30:15.268 } 00:30:15.268 ] 00:30:15.268 }' 00:30:15.268 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:15.268 00:14:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.835 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.835 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:15.835 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:30:15.835 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:16.094 [2024-07-25 00:14:11.917014] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.094 00:14:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:16.353 00:14:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:16.353 "name": "Existed_Raid", 00:30:16.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:16.353 "strip_size_kb": 64, 00:30:16.353 "state": "configuring", 00:30:16.353 "raid_level": "raid5f", 00:30:16.353 "superblock": false, 00:30:16.353 "num_base_bdevs": 4, 00:30:16.353 "num_base_bdevs_discovered": 3, 00:30:16.353 "num_base_bdevs_operational": 4, 00:30:16.353 "base_bdevs_list": [ 00:30:16.353 { 00:30:16.353 "name": null, 00:30:16.353 "uuid": "41a4fc99-71c9-454a-9bc1-88e05a98a274", 00:30:16.353 "is_configured": false, 00:30:16.353 "data_offset": 0, 00:30:16.353 "data_size": 65536 00:30:16.353 }, 00:30:16.353 { 00:30:16.353 "name": "BaseBdev2", 00:30:16.353 "uuid": "cb910949-5f59-44ef-ac8b-ed20309a239c", 00:30:16.353 "is_configured": true, 00:30:16.353 "data_offset": 0, 00:30:16.353 "data_size": 65536 00:30:16.353 }, 00:30:16.353 { 00:30:16.353 "name": "BaseBdev3", 00:30:16.353 "uuid": "7f9d1846-95b2-46b6-b22d-01bfc6103470", 00:30:16.353 "is_configured": true, 00:30:16.353 "data_offset": 0, 00:30:16.353 "data_size": 65536 00:30:16.353 }, 00:30:16.353 { 00:30:16.353 "name": "BaseBdev4", 00:30:16.353 "uuid": "7a89dc31-f4e2-4452-8db1-89f9231381a6", 00:30:16.353 "is_configured": true, 00:30:16.353 "data_offset": 0, 00:30:16.353 "data_size": 65536 00:30:16.353 } 00:30:16.353 ] 00:30:16.353 }' 00:30:16.353 00:14:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:16.353 00:14:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.613 00:14:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:16.613 00:14:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.871 00:14:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:30:16.871 00:14:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:16.871 00:14:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.130 00:14:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 41a4fc99-71c9-454a-9bc1-88e05a98a274 00:30:17.389 [2024-07-25 00:14:13.073000] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:17.389 [2024-07-25 00:14:13.073052] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:30:17.389 [2024-07-25 00:14:13.073064] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:30:17.389 [2024-07-25 00:14:13.073224] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ee0 00:30:17.389 [2024-07-25 00:14:13.078968] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:30:17.389 [2024-07-25 00:14:13.078998] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000009380 00:30:17.389 [2024-07-25 00:14:13.079306] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:17.389 NewBaseBdev 00:30:17.389 00:14:13 
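[Annotation] Creating NewBaseBdev with -u and the uuid saved from the vacant slot is what lets the raid module claim it automatically: the configuring array matches the uuid, fills its last slot, and transitions to online (the traced blockcnt 196608 is 3 data strips x 65536 blocks for a 4-member raid5f, one member's worth going to parity). A sketch of that step, using the jq path and uuid handling shown in this log; only the variable name is an assumption:

    # Sketch: re-create the missing base bdev under a new name but the
    # old uuid, so the configuring raid bdev claims it and goes online.
    uuid=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"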
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:30:17.389 00:14:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:30:17.389 00:14:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:17.389 00:14:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:17.389 00:14:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:17.389 00:14:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:17.389 00:14:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:17.649 00:14:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:17.649 [ 00:30:17.649 { 00:30:17.649 "name": "NewBaseBdev", 00:30:17.649 "aliases": [ 00:30:17.649 "41a4fc99-71c9-454a-9bc1-88e05a98a274" 00:30:17.649 ], 00:30:17.649 "product_name": "Malloc disk", 00:30:17.649 "block_size": 512, 00:30:17.649 "num_blocks": 65536, 00:30:17.649 "uuid": "41a4fc99-71c9-454a-9bc1-88e05a98a274", 00:30:17.649 "assigned_rate_limits": { 00:30:17.649 "rw_ios_per_sec": 0, 00:30:17.649 "rw_mbytes_per_sec": 0, 00:30:17.649 "r_mbytes_per_sec": 0, 00:30:17.649 "w_mbytes_per_sec": 0 00:30:17.649 }, 00:30:17.649 "claimed": true, 00:30:17.649 "claim_type": "exclusive_write", 00:30:17.649 "zoned": false, 00:30:17.649 "supported_io_types": { 00:30:17.649 "read": true, 00:30:17.649 "write": true, 00:30:17.649 "unmap": true, 00:30:17.649 "flush": true, 00:30:17.649 "reset": true, 00:30:17.649 "nvme_admin": false, 00:30:17.650 "nvme_io": false, 00:30:17.650 "nvme_io_md": false, 00:30:17.650 "write_zeroes": true, 00:30:17.650 "zcopy": true, 00:30:17.650 "get_zone_info": false, 00:30:17.650 "zone_management": false, 00:30:17.650 "zone_append": false, 00:30:17.650 "compare": false, 00:30:17.650 "compare_and_write": false, 00:30:17.650 "abort": true, 00:30:17.650 "seek_hole": false, 00:30:17.650 "seek_data": false, 00:30:17.650 "copy": true, 00:30:17.650 "nvme_iov_md": false 00:30:17.650 }, 00:30:17.650 "memory_domains": [ 00:30:17.650 { 00:30:17.650 "dma_device_id": "system", 00:30:17.650 "dma_device_type": 1 00:30:17.650 }, 00:30:17.650 { 00:30:17.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:17.650 "dma_device_type": 2 00:30:17.650 } 00:30:17.650 ], 00:30:17.650 "driver_specific": {} 00:30:17.650 } 00:30:17.650 ] 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.650 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:17.909 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:17.909 "name": "Existed_Raid", 00:30:17.909 "uuid": "041cdf94-29f9-47ed-bf04-054babc7b0d0", 00:30:17.909 "strip_size_kb": 64, 00:30:17.909 "state": "online", 00:30:17.909 "raid_level": "raid5f", 00:30:17.909 "superblock": false, 00:30:17.909 "num_base_bdevs": 4, 00:30:17.909 "num_base_bdevs_discovered": 4, 00:30:17.909 "num_base_bdevs_operational": 4, 00:30:17.909 "base_bdevs_list": [ 00:30:17.909 { 00:30:17.909 "name": "NewBaseBdev", 00:30:17.909 "uuid": "41a4fc99-71c9-454a-9bc1-88e05a98a274", 00:30:17.909 "is_configured": true, 00:30:17.909 "data_offset": 0, 00:30:17.909 "data_size": 65536 00:30:17.909 }, 00:30:17.909 { 00:30:17.909 "name": "BaseBdev2", 00:30:17.909 "uuid": "cb910949-5f59-44ef-ac8b-ed20309a239c", 00:30:17.909 "is_configured": true, 00:30:17.909 "data_offset": 0, 00:30:17.909 "data_size": 65536 00:30:17.909 }, 00:30:17.909 { 00:30:17.909 "name": "BaseBdev3", 00:30:17.909 "uuid": "7f9d1846-95b2-46b6-b22d-01bfc6103470", 00:30:17.909 "is_configured": true, 00:30:17.909 "data_offset": 0, 00:30:17.909 "data_size": 65536 00:30:17.909 }, 00:30:17.909 { 00:30:17.909 "name": "BaseBdev4", 00:30:17.909 "uuid": "7a89dc31-f4e2-4452-8db1-89f9231381a6", 00:30:17.909 "is_configured": true, 00:30:17.909 "data_offset": 0, 00:30:17.909 "data_size": 65536 00:30:17.909 } 00:30:17.909 ] 00:30:17.909 }' 00:30:17.909 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:17.909 00:14:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.167 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:30:18.167 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:18.167 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:18.167 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:18.167 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:18.167 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:30:18.167 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:18.167 00:14:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:18.426 [2024-07-25 00:14:14.161556] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:18.426 00:14:14 
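[Annotation] verify_raid_bdev_properties, traced from here on, takes a different angle from the state checks: it dumps the raid volume itself with bdev_get_bdevs -b Existed_Raid, extracts the configured member names, then compares block_size, md_size, md_interleave, and dif_type between the volume and each member. The jq filters below are the ones visible in the trace; wrapping them in a loop with these variable names is an illustrative assumption:

    # Sketch: compare shared properties between the raid volume and members.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_json=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
    for name in $(jq -r '.driver_specific.raid.base_bdevs_list[]
            | select(.is_configured == true).name' <<<"$raid_json"); do
        base_json=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size <<<"$raid_json") == $(jq .block_size <<<"$base_json") ]]
        [[ $(jq .md_size <<<"$raid_json") == $(jq .md_size <<<"$base_json") ]]
    done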
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:18.426 "name": "Existed_Raid", 00:30:18.426 "aliases": [ 00:30:18.426 "041cdf94-29f9-47ed-bf04-054babc7b0d0" 00:30:18.426 ], 00:30:18.426 "product_name": "Raid Volume", 00:30:18.426 "block_size": 512, 00:30:18.426 "num_blocks": 196608, 00:30:18.426 "uuid": "041cdf94-29f9-47ed-bf04-054babc7b0d0", 00:30:18.426 "assigned_rate_limits": { 00:30:18.426 "rw_ios_per_sec": 0, 00:30:18.426 "rw_mbytes_per_sec": 0, 00:30:18.426 "r_mbytes_per_sec": 0, 00:30:18.426 "w_mbytes_per_sec": 0 00:30:18.426 }, 00:30:18.426 "claimed": false, 00:30:18.426 "zoned": false, 00:30:18.426 "supported_io_types": { 00:30:18.426 "read": true, 00:30:18.426 "write": true, 00:30:18.426 "unmap": false, 00:30:18.426 "flush": false, 00:30:18.426 "reset": true, 00:30:18.426 "nvme_admin": false, 00:30:18.426 "nvme_io": false, 00:30:18.426 "nvme_io_md": false, 00:30:18.426 "write_zeroes": true, 00:30:18.426 "zcopy": false, 00:30:18.426 "get_zone_info": false, 00:30:18.426 "zone_management": false, 00:30:18.426 "zone_append": false, 00:30:18.426 "compare": false, 00:30:18.426 "compare_and_write": false, 00:30:18.426 "abort": false, 00:30:18.426 "seek_hole": false, 00:30:18.426 "seek_data": false, 00:30:18.426 "copy": false, 00:30:18.426 "nvme_iov_md": false 00:30:18.426 }, 00:30:18.426 "driver_specific": { 00:30:18.426 "raid": { 00:30:18.426 "uuid": "041cdf94-29f9-47ed-bf04-054babc7b0d0", 00:30:18.426 "strip_size_kb": 64, 00:30:18.426 "state": "online", 00:30:18.426 "raid_level": "raid5f", 00:30:18.426 "superblock": false, 00:30:18.426 "num_base_bdevs": 4, 00:30:18.426 "num_base_bdevs_discovered": 4, 00:30:18.426 "num_base_bdevs_operational": 4, 00:30:18.426 "base_bdevs_list": [ 00:30:18.426 { 00:30:18.426 "name": "NewBaseBdev", 00:30:18.426 "uuid": "41a4fc99-71c9-454a-9bc1-88e05a98a274", 00:30:18.426 "is_configured": true, 00:30:18.426 "data_offset": 0, 00:30:18.426 "data_size": 65536 00:30:18.426 }, 00:30:18.426 { 00:30:18.426 "name": "BaseBdev2", 00:30:18.426 "uuid": "cb910949-5f59-44ef-ac8b-ed20309a239c", 00:30:18.426 "is_configured": true, 00:30:18.426 "data_offset": 0, 00:30:18.426 "data_size": 65536 00:30:18.426 }, 00:30:18.426 { 00:30:18.426 "name": "BaseBdev3", 00:30:18.426 "uuid": "7f9d1846-95b2-46b6-b22d-01bfc6103470", 00:30:18.426 "is_configured": true, 00:30:18.426 "data_offset": 0, 00:30:18.426 "data_size": 65536 00:30:18.426 }, 00:30:18.426 { 00:30:18.426 "name": "BaseBdev4", 00:30:18.426 "uuid": "7a89dc31-f4e2-4452-8db1-89f9231381a6", 00:30:18.426 "is_configured": true, 00:30:18.426 "data_offset": 0, 00:30:18.426 "data_size": 65536 00:30:18.426 } 00:30:18.426 ] 00:30:18.426 } 00:30:18.426 } 00:30:18.426 }' 00:30:18.426 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:18.426 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:30:18.426 BaseBdev2 00:30:18.426 BaseBdev3 00:30:18.426 BaseBdev4' 00:30:18.426 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:18.426 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:18.426 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:30:18.685 00:14:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:18.685 "name": "NewBaseBdev", 00:30:18.685 "aliases": [ 00:30:18.685 "41a4fc99-71c9-454a-9bc1-88e05a98a274" 00:30:18.685 ], 00:30:18.685 "product_name": "Malloc disk", 00:30:18.685 "block_size": 512, 00:30:18.685 "num_blocks": 65536, 00:30:18.685 "uuid": "41a4fc99-71c9-454a-9bc1-88e05a98a274", 00:30:18.685 "assigned_rate_limits": { 00:30:18.685 "rw_ios_per_sec": 0, 00:30:18.686 "rw_mbytes_per_sec": 0, 00:30:18.686 "r_mbytes_per_sec": 0, 00:30:18.686 "w_mbytes_per_sec": 0 00:30:18.686 }, 00:30:18.686 "claimed": true, 00:30:18.686 "claim_type": "exclusive_write", 00:30:18.686 "zoned": false, 00:30:18.686 "supported_io_types": { 00:30:18.686 "read": true, 00:30:18.686 "write": true, 00:30:18.686 "unmap": true, 00:30:18.686 "flush": true, 00:30:18.686 "reset": true, 00:30:18.686 "nvme_admin": false, 00:30:18.686 "nvme_io": false, 00:30:18.686 "nvme_io_md": false, 00:30:18.686 "write_zeroes": true, 00:30:18.686 "zcopy": true, 00:30:18.686 "get_zone_info": false, 00:30:18.686 "zone_management": false, 00:30:18.686 "zone_append": false, 00:30:18.686 "compare": false, 00:30:18.686 "compare_and_write": false, 00:30:18.686 "abort": true, 00:30:18.686 "seek_hole": false, 00:30:18.686 "seek_data": false, 00:30:18.686 "copy": true, 00:30:18.686 "nvme_iov_md": false 00:30:18.686 }, 00:30:18.686 "memory_domains": [ 00:30:18.686 { 00:30:18.686 "dma_device_id": "system", 00:30:18.686 "dma_device_type": 1 00:30:18.686 }, 00:30:18.686 { 00:30:18.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:18.686 "dma_device_type": 2 00:30:18.686 } 00:30:18.686 ], 00:30:18.686 "driver_specific": {} 00:30:18.686 }' 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:18.686 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:18.944 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:18.944 "name": "BaseBdev2", 00:30:18.944 "aliases": [ 00:30:18.944 "cb910949-5f59-44ef-ac8b-ed20309a239c" 
00:30:18.944 ], 00:30:18.944 "product_name": "Malloc disk", 00:30:18.944 "block_size": 512, 00:30:18.944 "num_blocks": 65536, 00:30:18.944 "uuid": "cb910949-5f59-44ef-ac8b-ed20309a239c", 00:30:18.944 "assigned_rate_limits": { 00:30:18.944 "rw_ios_per_sec": 0, 00:30:18.944 "rw_mbytes_per_sec": 0, 00:30:18.944 "r_mbytes_per_sec": 0, 00:30:18.944 "w_mbytes_per_sec": 0 00:30:18.944 }, 00:30:18.944 "claimed": true, 00:30:18.944 "claim_type": "exclusive_write", 00:30:18.944 "zoned": false, 00:30:18.944 "supported_io_types": { 00:30:18.944 "read": true, 00:30:18.944 "write": true, 00:30:18.944 "unmap": true, 00:30:18.945 "flush": true, 00:30:18.945 "reset": true, 00:30:18.945 "nvme_admin": false, 00:30:18.945 "nvme_io": false, 00:30:18.945 "nvme_io_md": false, 00:30:18.945 "write_zeroes": true, 00:30:18.945 "zcopy": true, 00:30:18.945 "get_zone_info": false, 00:30:18.945 "zone_management": false, 00:30:18.945 "zone_append": false, 00:30:18.945 "compare": false, 00:30:18.945 "compare_and_write": false, 00:30:18.945 "abort": true, 00:30:18.945 "seek_hole": false, 00:30:18.945 "seek_data": false, 00:30:18.945 "copy": true, 00:30:18.945 "nvme_iov_md": false 00:30:18.945 }, 00:30:18.945 "memory_domains": [ 00:30:18.945 { 00:30:18.945 "dma_device_id": "system", 00:30:18.945 "dma_device_type": 1 00:30:18.945 }, 00:30:18.945 { 00:30:18.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:18.945 "dma_device_type": 2 00:30:18.945 } 00:30:18.945 ], 00:30:18.945 "driver_specific": {} 00:30:18.945 }' 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:18.945 00:14:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:19.513 "name": "BaseBdev3", 00:30:19.513 "aliases": [ 00:30:19.513 "7f9d1846-95b2-46b6-b22d-01bfc6103470" 00:30:19.513 ], 00:30:19.513 "product_name": "Malloc disk", 00:30:19.513 "block_size": 512, 00:30:19.513 "num_blocks": 65536, 00:30:19.513 "uuid": 
"7f9d1846-95b2-46b6-b22d-01bfc6103470", 00:30:19.513 "assigned_rate_limits": { 00:30:19.513 "rw_ios_per_sec": 0, 00:30:19.513 "rw_mbytes_per_sec": 0, 00:30:19.513 "r_mbytes_per_sec": 0, 00:30:19.513 "w_mbytes_per_sec": 0 00:30:19.513 }, 00:30:19.513 "claimed": true, 00:30:19.513 "claim_type": "exclusive_write", 00:30:19.513 "zoned": false, 00:30:19.513 "supported_io_types": { 00:30:19.513 "read": true, 00:30:19.513 "write": true, 00:30:19.513 "unmap": true, 00:30:19.513 "flush": true, 00:30:19.513 "reset": true, 00:30:19.513 "nvme_admin": false, 00:30:19.513 "nvme_io": false, 00:30:19.513 "nvme_io_md": false, 00:30:19.513 "write_zeroes": true, 00:30:19.513 "zcopy": true, 00:30:19.513 "get_zone_info": false, 00:30:19.513 "zone_management": false, 00:30:19.513 "zone_append": false, 00:30:19.513 "compare": false, 00:30:19.513 "compare_and_write": false, 00:30:19.513 "abort": true, 00:30:19.513 "seek_hole": false, 00:30:19.513 "seek_data": false, 00:30:19.513 "copy": true, 00:30:19.513 "nvme_iov_md": false 00:30:19.513 }, 00:30:19.513 "memory_domains": [ 00:30:19.513 { 00:30:19.513 "dma_device_id": "system", 00:30:19.513 "dma_device_type": 1 00:30:19.513 }, 00:30:19.513 { 00:30:19.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:19.513 "dma_device_type": 2 00:30:19.513 } 00:30:19.513 ], 00:30:19.513 "driver_specific": {} 00:30:19.513 }' 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:30:19.513 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:19.772 "name": "BaseBdev4", 00:30:19.772 "aliases": [ 00:30:19.772 "7a89dc31-f4e2-4452-8db1-89f9231381a6" 00:30:19.772 ], 00:30:19.772 "product_name": "Malloc disk", 00:30:19.772 "block_size": 512, 00:30:19.772 "num_blocks": 65536, 00:30:19.772 "uuid": "7a89dc31-f4e2-4452-8db1-89f9231381a6", 00:30:19.772 "assigned_rate_limits": { 00:30:19.772 "rw_ios_per_sec": 0, 00:30:19.772 "rw_mbytes_per_sec": 0, 00:30:19.772 
"r_mbytes_per_sec": 0, 00:30:19.772 "w_mbytes_per_sec": 0 00:30:19.772 }, 00:30:19.772 "claimed": true, 00:30:19.772 "claim_type": "exclusive_write", 00:30:19.772 "zoned": false, 00:30:19.772 "supported_io_types": { 00:30:19.772 "read": true, 00:30:19.772 "write": true, 00:30:19.772 "unmap": true, 00:30:19.772 "flush": true, 00:30:19.772 "reset": true, 00:30:19.772 "nvme_admin": false, 00:30:19.772 "nvme_io": false, 00:30:19.772 "nvme_io_md": false, 00:30:19.772 "write_zeroes": true, 00:30:19.772 "zcopy": true, 00:30:19.772 "get_zone_info": false, 00:30:19.772 "zone_management": false, 00:30:19.772 "zone_append": false, 00:30:19.772 "compare": false, 00:30:19.772 "compare_and_write": false, 00:30:19.772 "abort": true, 00:30:19.772 "seek_hole": false, 00:30:19.772 "seek_data": false, 00:30:19.772 "copy": true, 00:30:19.772 "nvme_iov_md": false 00:30:19.772 }, 00:30:19.772 "memory_domains": [ 00:30:19.772 { 00:30:19.772 "dma_device_id": "system", 00:30:19.772 "dma_device_type": 1 00:30:19.772 }, 00:30:19.772 { 00:30:19.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:19.772 "dma_device_type": 2 00:30:19.772 } 00:30:19.772 ], 00:30:19.772 "driver_specific": {} 00:30:19.772 }' 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:19.772 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:20.032 [2024-07-25 00:14:15.757750] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:20.032 [2024-07-25 00:14:15.757784] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:20.032 [2024-07-25 00:14:15.757895] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:20.032 [2024-07-25 00:14:15.758287] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:20.032 [2024-07-25 00:14:15.758309] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name Existed_Raid, state offline 00:30:20.032 00:14:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 105941 00:30:20.032 00:14:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 105941 ']' 
00:30:20.032 00:14:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 105941 00:30:20.032 00:14:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:30:20.032 00:14:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:20.032 00:14:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105941 00:30:20.032 killing process with pid 105941 00:30:20.032 00:14:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:20.032 00:14:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:20.032 00:14:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105941' 00:30:20.032 00:14:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 105941 00:30:20.032 [2024-07-25 00:14:15.809569] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:20.032 00:14:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 105941 00:30:20.291 [2024-07-25 00:14:16.058859] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:21.227 00:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:30:21.227 00:30:21.228 real 0m25.664s 00:30:21.228 user 0m45.020s 00:30:21.228 sys 0m4.073s 00:30:21.228 00:14:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:21.228 00:14:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:21.228 ************************************ 00:30:21.228 END TEST raid5f_state_function_test 00:30:21.228 ************************************ 00:30:21.228 00:14:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:30:21.228 00:14:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:21.228 00:14:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:21.228 00:14:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:21.228 ************************************ 00:30:21.228 START TEST raid5f_state_function_test_sb 00:30:21.228 ************************************ 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs 
)) 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=106904 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 106904' 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:21.228 Process raid pid: 106904 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 106904 /var/tmp/spdk-raid.sock 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 106904 ']' 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:30:21.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:21.228 00:14:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:21.487 [2024-07-25 00:14:17.119859] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:30:21.487 [2024-07-25 00:14:17.120211] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.487 [2024-07-25 00:14:17.292702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.746 [2024-07-25 00:14:17.446269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.746 [2024-07-25 00:14:17.594740] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:22.314 00:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:22.314 00:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:30:22.314 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:22.574 [2024-07-25 00:14:18.303245] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:22.574 [2024-07-25 00:14:18.303319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:22.574 [2024-07-25 00:14:18.303335] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:22.574 [2024-07-25 00:14:18.303349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:22.574 [2024-07-25 00:14:18.303358] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:22.574 [2024-07-25 00:14:18.303370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:22.574 [2024-07-25 00:14:18.303378] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:22.574 [2024-07-25 00:14:18.303390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.574 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:22.833 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:22.833 "name": "Existed_Raid", 00:30:22.833 "uuid": "30fc9fec-776b-4d9d-a59e-f6db8f6feeb8", 00:30:22.833 "strip_size_kb": 64, 00:30:22.833 "state": "configuring", 00:30:22.833 "raid_level": "raid5f", 00:30:22.833 "superblock": true, 00:30:22.833 "num_base_bdevs": 4, 00:30:22.833 "num_base_bdevs_discovered": 0, 00:30:22.833 "num_base_bdevs_operational": 4, 00:30:22.833 "base_bdevs_list": [ 00:30:22.833 { 00:30:22.833 "name": "BaseBdev1", 00:30:22.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.833 "is_configured": false, 00:30:22.833 "data_offset": 0, 00:30:22.833 "data_size": 0 00:30:22.833 }, 00:30:22.833 { 00:30:22.833 "name": "BaseBdev2", 00:30:22.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.833 "is_configured": false, 00:30:22.833 "data_offset": 0, 00:30:22.833 "data_size": 0 00:30:22.833 }, 00:30:22.833 { 00:30:22.833 "name": "BaseBdev3", 00:30:22.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.833 "is_configured": false, 00:30:22.833 "data_offset": 0, 00:30:22.833 "data_size": 0 00:30:22.833 }, 00:30:22.833 { 00:30:22.833 "name": "BaseBdev4", 00:30:22.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.833 "is_configured": false, 00:30:22.833 "data_offset": 0, 00:30:22.833 "data_size": 0 00:30:22.833 } 00:30:22.833 ] 00:30:22.833 }' 00:30:22.833 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:22.833 00:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:23.092 00:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:23.351 [2024-07-25 00:14:19.075400] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:23.351 [2024-07-25 00:14:19.075443] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:30:23.351 00:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:23.609 [2024-07-25 00:14:19.335510] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:23.609 [2024-07-25 00:14:19.335563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:23.609 [2024-07-25 00:14:19.335577] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:23.609 [2024-07-25 00:14:19.335590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:23.609 [2024-07-25 00:14:19.335598] bdev.c:8190:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev3 00:30:23.609 [2024-07-25 00:14:19.335609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:23.609 [2024-07-25 00:14:19.335617] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:23.609 [2024-07-25 00:14:19.335629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:23.609 00:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:23.868 [2024-07-25 00:14:19.608209] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:23.868 BaseBdev1 00:30:23.868 00:14:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:23.868 00:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:23.868 00:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:23.868 00:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:23.868 00:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:23.868 00:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:23.868 00:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:24.127 00:14:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:24.385 [ 00:30:24.385 { 00:30:24.385 "name": "BaseBdev1", 00:30:24.385 "aliases": [ 00:30:24.385 "afebf04d-2c51-438d-90c6-305ff0cb6f1a" 00:30:24.385 ], 00:30:24.385 "product_name": "Malloc disk", 00:30:24.385 "block_size": 512, 00:30:24.385 "num_blocks": 65536, 00:30:24.385 "uuid": "afebf04d-2c51-438d-90c6-305ff0cb6f1a", 00:30:24.385 "assigned_rate_limits": { 00:30:24.385 "rw_ios_per_sec": 0, 00:30:24.385 "rw_mbytes_per_sec": 0, 00:30:24.385 "r_mbytes_per_sec": 0, 00:30:24.385 "w_mbytes_per_sec": 0 00:30:24.385 }, 00:30:24.385 "claimed": true, 00:30:24.385 "claim_type": "exclusive_write", 00:30:24.385 "zoned": false, 00:30:24.385 "supported_io_types": { 00:30:24.385 "read": true, 00:30:24.385 "write": true, 00:30:24.385 "unmap": true, 00:30:24.385 "flush": true, 00:30:24.385 "reset": true, 00:30:24.385 "nvme_admin": false, 00:30:24.385 "nvme_io": false, 00:30:24.385 "nvme_io_md": false, 00:30:24.385 "write_zeroes": true, 00:30:24.385 "zcopy": true, 00:30:24.385 "get_zone_info": false, 00:30:24.385 "zone_management": false, 00:30:24.385 "zone_append": false, 00:30:24.385 "compare": false, 00:30:24.385 "compare_and_write": false, 00:30:24.385 "abort": true, 00:30:24.385 "seek_hole": false, 00:30:24.385 "seek_data": false, 00:30:24.385 "copy": true, 00:30:24.385 "nvme_iov_md": false 00:30:24.385 }, 00:30:24.385 "memory_domains": [ 00:30:24.385 { 00:30:24.385 "dma_device_id": "system", 00:30:24.385 "dma_device_type": 1 00:30:24.385 }, 00:30:24.385 { 00:30:24.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:24.385 "dma_device_type": 2 00:30:24.385 } 00:30:24.385 ], 00:30:24.385 "driver_specific": {} 00:30:24.385 } 00:30:24.385 ] 
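[Note] The verify_raid_bdev_state trace that follows checks the raid's JSON with bdev_raid_get_bdevs and a jq select on the bdev name. A standalone sketch of the same state check (socket path as used throughout this run; the .state extraction is a condensed illustration, not the script verbatim):

  state=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  [[ $state == configuring ]]  # flips to "online" once all four base bdevs are claimed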
00:30:24.385 00:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:24.385 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:24.385 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:24.385 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:24.385 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:24.385 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:24.385 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:24.385 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:24.386 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:24.386 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:24.386 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:24.386 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.386 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:24.645 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:24.645 "name": "Existed_Raid", 00:30:24.645 "uuid": "ae2b786d-ed65-4c83-919f-ad13b4c8f502", 00:30:24.645 "strip_size_kb": 64, 00:30:24.645 "state": "configuring", 00:30:24.645 "raid_level": "raid5f", 00:30:24.645 "superblock": true, 00:30:24.645 "num_base_bdevs": 4, 00:30:24.645 "num_base_bdevs_discovered": 1, 00:30:24.645 "num_base_bdevs_operational": 4, 00:30:24.645 "base_bdevs_list": [ 00:30:24.645 { 00:30:24.645 "name": "BaseBdev1", 00:30:24.645 "uuid": "afebf04d-2c51-438d-90c6-305ff0cb6f1a", 00:30:24.645 "is_configured": true, 00:30:24.645 "data_offset": 2048, 00:30:24.645 "data_size": 63488 00:30:24.645 }, 00:30:24.645 { 00:30:24.645 "name": "BaseBdev2", 00:30:24.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.645 "is_configured": false, 00:30:24.645 "data_offset": 0, 00:30:24.645 "data_size": 0 00:30:24.645 }, 00:30:24.645 { 00:30:24.645 "name": "BaseBdev3", 00:30:24.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.645 "is_configured": false, 00:30:24.645 "data_offset": 0, 00:30:24.645 "data_size": 0 00:30:24.645 }, 00:30:24.645 { 00:30:24.645 "name": "BaseBdev4", 00:30:24.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.645 "is_configured": false, 00:30:24.645 "data_offset": 0, 00:30:24.645 "data_size": 0 00:30:24.645 } 00:30:24.645 ] 00:30:24.645 }' 00:30:24.645 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:24.645 00:14:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:24.904 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:25.163 [2024-07-25 00:14:20.800501] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:25.163 [2024-07-25 00:14:20.800680] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:30:25.163 00:14:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:25.163 [2024-07-25 00:14:20.988590] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:25.163 [2024-07-25 00:14:20.990381] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:25.163 [2024-07-25 00:14:20.990565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:25.163 [2024-07-25 00:14:20.990700] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:25.163 [2024-07-25 00:14:20.990758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:25.163 [2024-07-25 00:14:20.990889] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:25.163 [2024-07-25 00:14:20.991026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.163 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:25.422 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:25.422 "name": "Existed_Raid", 00:30:25.422 "uuid": "ad1f099e-1e7a-407f-9fa3-dec04a270f8f", 00:30:25.422 "strip_size_kb": 64, 00:30:25.422 "state": "configuring", 00:30:25.422 "raid_level": "raid5f", 00:30:25.422 "superblock": true, 00:30:25.422 "num_base_bdevs": 4, 
00:30:25.422 "num_base_bdevs_discovered": 1, 00:30:25.422 "num_base_bdevs_operational": 4, 00:30:25.422 "base_bdevs_list": [ 00:30:25.422 { 00:30:25.422 "name": "BaseBdev1", 00:30:25.422 "uuid": "afebf04d-2c51-438d-90c6-305ff0cb6f1a", 00:30:25.422 "is_configured": true, 00:30:25.422 "data_offset": 2048, 00:30:25.422 "data_size": 63488 00:30:25.422 }, 00:30:25.422 { 00:30:25.422 "name": "BaseBdev2", 00:30:25.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.422 "is_configured": false, 00:30:25.422 "data_offset": 0, 00:30:25.422 "data_size": 0 00:30:25.422 }, 00:30:25.422 { 00:30:25.422 "name": "BaseBdev3", 00:30:25.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.422 "is_configured": false, 00:30:25.422 "data_offset": 0, 00:30:25.422 "data_size": 0 00:30:25.422 }, 00:30:25.422 { 00:30:25.422 "name": "BaseBdev4", 00:30:25.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.422 "is_configured": false, 00:30:25.422 "data_offset": 0, 00:30:25.422 "data_size": 0 00:30:25.422 } 00:30:25.422 ] 00:30:25.422 }' 00:30:25.422 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:25.422 00:14:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:25.682 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:25.941 [2024-07-25 00:14:21.793635] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:25.941 BaseBdev2 00:30:26.200 00:14:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:26.200 00:14:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:26.200 00:14:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:26.200 00:14:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:26.200 00:14:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:26.200 00:14:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:26.200 00:14:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:26.200 00:14:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:26.459 [ 00:30:26.459 { 00:30:26.459 "name": "BaseBdev2", 00:30:26.459 "aliases": [ 00:30:26.459 "17265d49-cf95-42eb-9397-dcb8499cd64b" 00:30:26.459 ], 00:30:26.459 "product_name": "Malloc disk", 00:30:26.459 "block_size": 512, 00:30:26.459 "num_blocks": 65536, 00:30:26.459 "uuid": "17265d49-cf95-42eb-9397-dcb8499cd64b", 00:30:26.459 "assigned_rate_limits": { 00:30:26.459 "rw_ios_per_sec": 0, 00:30:26.459 "rw_mbytes_per_sec": 0, 00:30:26.459 "r_mbytes_per_sec": 0, 00:30:26.459 "w_mbytes_per_sec": 0 00:30:26.459 }, 00:30:26.459 "claimed": true, 00:30:26.459 "claim_type": "exclusive_write", 00:30:26.459 "zoned": false, 00:30:26.459 "supported_io_types": { 00:30:26.459 "read": true, 00:30:26.459 "write": true, 00:30:26.459 "unmap": true, 00:30:26.459 "flush": true, 00:30:26.459 "reset": true, 00:30:26.459 "nvme_admin": false, 
00:30:26.459 "nvme_io": false, 00:30:26.459 "nvme_io_md": false, 00:30:26.459 "write_zeroes": true, 00:30:26.459 "zcopy": true, 00:30:26.459 "get_zone_info": false, 00:30:26.459 "zone_management": false, 00:30:26.459 "zone_append": false, 00:30:26.459 "compare": false, 00:30:26.459 "compare_and_write": false, 00:30:26.459 "abort": true, 00:30:26.459 "seek_hole": false, 00:30:26.459 "seek_data": false, 00:30:26.459 "copy": true, 00:30:26.459 "nvme_iov_md": false 00:30:26.459 }, 00:30:26.459 "memory_domains": [ 00:30:26.459 { 00:30:26.459 "dma_device_id": "system", 00:30:26.459 "dma_device_type": 1 00:30:26.459 }, 00:30:26.459 { 00:30:26.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.459 "dma_device_type": 2 00:30:26.459 } 00:30:26.459 ], 00:30:26.459 "driver_specific": {} 00:30:26.459 } 00:30:26.459 ] 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:26.459 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.723 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:26.723 "name": "Existed_Raid", 00:30:26.723 "uuid": "ad1f099e-1e7a-407f-9fa3-dec04a270f8f", 00:30:26.723 "strip_size_kb": 64, 00:30:26.723 "state": "configuring", 00:30:26.723 "raid_level": "raid5f", 00:30:26.723 "superblock": true, 00:30:26.723 "num_base_bdevs": 4, 00:30:26.723 "num_base_bdevs_discovered": 2, 00:30:26.723 "num_base_bdevs_operational": 4, 00:30:26.723 "base_bdevs_list": [ 00:30:26.723 { 00:30:26.723 "name": "BaseBdev1", 00:30:26.723 "uuid": "afebf04d-2c51-438d-90c6-305ff0cb6f1a", 00:30:26.724 "is_configured": true, 00:30:26.724 "data_offset": 2048, 00:30:26.724 "data_size": 63488 00:30:26.724 }, 00:30:26.724 { 00:30:26.724 "name": "BaseBdev2", 00:30:26.724 "uuid": "17265d49-cf95-42eb-9397-dcb8499cd64b", 00:30:26.724 
"is_configured": true, 00:30:26.724 "data_offset": 2048, 00:30:26.724 "data_size": 63488 00:30:26.724 }, 00:30:26.724 { 00:30:26.724 "name": "BaseBdev3", 00:30:26.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.724 "is_configured": false, 00:30:26.724 "data_offset": 0, 00:30:26.724 "data_size": 0 00:30:26.724 }, 00:30:26.724 { 00:30:26.724 "name": "BaseBdev4", 00:30:26.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.724 "is_configured": false, 00:30:26.724 "data_offset": 0, 00:30:26.724 "data_size": 0 00:30:26.724 } 00:30:26.724 ] 00:30:26.724 }' 00:30:26.724 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:26.724 00:14:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.034 00:14:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:27.307 BaseBdev3 00:30:27.307 [2024-07-25 00:14:23.049895] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:27.307 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:30:27.307 00:14:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:30:27.307 00:14:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:27.307 00:14:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:27.307 00:14:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:27.307 00:14:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:27.307 00:14:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:27.564 00:14:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:27.821 [ 00:30:27.821 { 00:30:27.821 "name": "BaseBdev3", 00:30:27.821 "aliases": [ 00:30:27.821 "2d080536-4a34-446b-b645-afa75cc0dc88" 00:30:27.821 ], 00:30:27.821 "product_name": "Malloc disk", 00:30:27.821 "block_size": 512, 00:30:27.821 "num_blocks": 65536, 00:30:27.821 "uuid": "2d080536-4a34-446b-b645-afa75cc0dc88", 00:30:27.821 "assigned_rate_limits": { 00:30:27.821 "rw_ios_per_sec": 0, 00:30:27.821 "rw_mbytes_per_sec": 0, 00:30:27.821 "r_mbytes_per_sec": 0, 00:30:27.821 "w_mbytes_per_sec": 0 00:30:27.821 }, 00:30:27.821 "claimed": true, 00:30:27.821 "claim_type": "exclusive_write", 00:30:27.821 "zoned": false, 00:30:27.821 "supported_io_types": { 00:30:27.821 "read": true, 00:30:27.821 "write": true, 00:30:27.821 "unmap": true, 00:30:27.821 "flush": true, 00:30:27.821 "reset": true, 00:30:27.821 "nvme_admin": false, 00:30:27.821 "nvme_io": false, 00:30:27.821 "nvme_io_md": false, 00:30:27.821 "write_zeroes": true, 00:30:27.821 "zcopy": true, 00:30:27.821 "get_zone_info": false, 00:30:27.821 "zone_management": false, 00:30:27.821 "zone_append": false, 00:30:27.821 "compare": false, 00:30:27.821 "compare_and_write": false, 00:30:27.821 "abort": true, 00:30:27.821 "seek_hole": false, 00:30:27.821 "seek_data": false, 00:30:27.821 "copy": true, 00:30:27.821 "nvme_iov_md": false 
00:30:27.821 }, 00:30:27.821 "memory_domains": [ 00:30:27.821 { 00:30:27.821 "dma_device_id": "system", 00:30:27.821 "dma_device_type": 1 00:30:27.821 }, 00:30:27.821 { 00:30:27.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:27.821 "dma_device_type": 2 00:30:27.821 } 00:30:27.821 ], 00:30:27.821 "driver_specific": {} 00:30:27.821 } 00:30:27.821 ] 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:27.821 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.083 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:28.083 "name": "Existed_Raid", 00:30:28.083 "uuid": "ad1f099e-1e7a-407f-9fa3-dec04a270f8f", 00:30:28.083 "strip_size_kb": 64, 00:30:28.083 "state": "configuring", 00:30:28.083 "raid_level": "raid5f", 00:30:28.083 "superblock": true, 00:30:28.083 "num_base_bdevs": 4, 00:30:28.083 "num_base_bdevs_discovered": 3, 00:30:28.083 "num_base_bdevs_operational": 4, 00:30:28.083 "base_bdevs_list": [ 00:30:28.083 { 00:30:28.083 "name": "BaseBdev1", 00:30:28.083 "uuid": "afebf04d-2c51-438d-90c6-305ff0cb6f1a", 00:30:28.083 "is_configured": true, 00:30:28.083 "data_offset": 2048, 00:30:28.083 "data_size": 63488 00:30:28.083 }, 00:30:28.083 { 00:30:28.083 "name": "BaseBdev2", 00:30:28.084 "uuid": "17265d49-cf95-42eb-9397-dcb8499cd64b", 00:30:28.084 "is_configured": true, 00:30:28.084 "data_offset": 2048, 00:30:28.084 "data_size": 63488 00:30:28.084 }, 00:30:28.084 { 00:30:28.084 "name": "BaseBdev3", 00:30:28.084 "uuid": "2d080536-4a34-446b-b645-afa75cc0dc88", 00:30:28.084 "is_configured": true, 00:30:28.084 "data_offset": 2048, 00:30:28.084 "data_size": 63488 00:30:28.084 }, 00:30:28.084 { 00:30:28.084 "name": "BaseBdev4", 00:30:28.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.084 "is_configured": 
false, 00:30:28.084 "data_offset": 0, 00:30:28.084 "data_size": 0 00:30:28.084 } 00:30:28.084 ] 00:30:28.084 }' 00:30:28.084 00:14:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:28.084 00:14:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:28.341 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:30:28.598 BaseBdev4 00:30:28.598 [2024-07-25 00:14:24.373949] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:28.598 [2024-07-25 00:14:24.374214] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:30:28.598 [2024-07-25 00:14:24.374232] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:28.598 [2024-07-25 00:14:24.374332] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:30:28.598 [2024-07-25 00:14:24.380282] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:30:28.598 [2024-07-25 00:14:24.380310] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:30:28.598 [2024-07-25 00:14:24.380467] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:28.598 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:30:28.598 00:14:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:30:28.598 00:14:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:28.598 00:14:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:28.598 00:14:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:28.598 00:14:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:28.598 00:14:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:28.856 00:14:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:29.114 [ 00:30:29.114 { 00:30:29.114 "name": "BaseBdev4", 00:30:29.114 "aliases": [ 00:30:29.114 "4fc72ae4-6417-40df-93ff-a9d9a0ac6ffe" 00:30:29.114 ], 00:30:29.114 "product_name": "Malloc disk", 00:30:29.114 "block_size": 512, 00:30:29.114 "num_blocks": 65536, 00:30:29.114 "uuid": "4fc72ae4-6417-40df-93ff-a9d9a0ac6ffe", 00:30:29.114 "assigned_rate_limits": { 00:30:29.114 "rw_ios_per_sec": 0, 00:30:29.114 "rw_mbytes_per_sec": 0, 00:30:29.114 "r_mbytes_per_sec": 0, 00:30:29.114 "w_mbytes_per_sec": 0 00:30:29.114 }, 00:30:29.114 "claimed": true, 00:30:29.114 "claim_type": "exclusive_write", 00:30:29.114 "zoned": false, 00:30:29.114 "supported_io_types": { 00:30:29.114 "read": true, 00:30:29.114 "write": true, 00:30:29.114 "unmap": true, 00:30:29.114 "flush": true, 00:30:29.114 "reset": true, 00:30:29.114 "nvme_admin": false, 00:30:29.114 "nvme_io": false, 00:30:29.114 "nvme_io_md": false, 00:30:29.114 "write_zeroes": true, 00:30:29.114 "zcopy": true, 00:30:29.114 "get_zone_info": 
false, 00:30:29.114 "zone_management": false, 00:30:29.114 "zone_append": false, 00:30:29.114 "compare": false, 00:30:29.114 "compare_and_write": false, 00:30:29.114 "abort": true, 00:30:29.114 "seek_hole": false, 00:30:29.114 "seek_data": false, 00:30:29.114 "copy": true, 00:30:29.114 "nvme_iov_md": false 00:30:29.114 }, 00:30:29.114 "memory_domains": [ 00:30:29.114 { 00:30:29.114 "dma_device_id": "system", 00:30:29.114 "dma_device_type": 1 00:30:29.114 }, 00:30:29.114 { 00:30:29.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:29.114 "dma_device_type": 2 00:30:29.114 } 00:30:29.114 ], 00:30:29.114 "driver_specific": {} 00:30:29.114 } 00:30:29.114 ] 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.114 00:14:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:29.372 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:29.372 "name": "Existed_Raid", 00:30:29.372 "uuid": "ad1f099e-1e7a-407f-9fa3-dec04a270f8f", 00:30:29.372 "strip_size_kb": 64, 00:30:29.372 "state": "online", 00:30:29.372 "raid_level": "raid5f", 00:30:29.372 "superblock": true, 00:30:29.372 "num_base_bdevs": 4, 00:30:29.372 "num_base_bdevs_discovered": 4, 00:30:29.372 "num_base_bdevs_operational": 4, 00:30:29.372 "base_bdevs_list": [ 00:30:29.372 { 00:30:29.372 "name": "BaseBdev1", 00:30:29.372 "uuid": "afebf04d-2c51-438d-90c6-305ff0cb6f1a", 00:30:29.372 "is_configured": true, 00:30:29.372 "data_offset": 2048, 00:30:29.372 "data_size": 63488 00:30:29.372 }, 00:30:29.372 { 00:30:29.372 "name": "BaseBdev2", 00:30:29.372 "uuid": "17265d49-cf95-42eb-9397-dcb8499cd64b", 00:30:29.372 "is_configured": true, 00:30:29.372 "data_offset": 2048, 00:30:29.372 "data_size": 63488 00:30:29.372 }, 00:30:29.372 { 00:30:29.372 "name": "BaseBdev3", 00:30:29.372 "uuid": 
"2d080536-4a34-446b-b645-afa75cc0dc88", 00:30:29.372 "is_configured": true, 00:30:29.372 "data_offset": 2048, 00:30:29.372 "data_size": 63488 00:30:29.372 }, 00:30:29.372 { 00:30:29.372 "name": "BaseBdev4", 00:30:29.372 "uuid": "4fc72ae4-6417-40df-93ff-a9d9a0ac6ffe", 00:30:29.372 "is_configured": true, 00:30:29.372 "data_offset": 2048, 00:30:29.372 "data_size": 63488 00:30:29.372 } 00:30:29.372 ] 00:30:29.372 }' 00:30:29.372 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:29.372 00:14:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:29.629 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:30:29.629 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:29.629 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:29.629 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:29.629 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:29.629 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:30:29.629 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:29.629 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:29.887 [2024-07-25 00:14:25.718666] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:29.887 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:29.887 "name": "Existed_Raid", 00:30:29.887 "aliases": [ 00:30:29.887 "ad1f099e-1e7a-407f-9fa3-dec04a270f8f" 00:30:29.887 ], 00:30:29.887 "product_name": "Raid Volume", 00:30:29.887 "block_size": 512, 00:30:29.887 "num_blocks": 190464, 00:30:29.887 "uuid": "ad1f099e-1e7a-407f-9fa3-dec04a270f8f", 00:30:29.887 "assigned_rate_limits": { 00:30:29.887 "rw_ios_per_sec": 0, 00:30:29.887 "rw_mbytes_per_sec": 0, 00:30:29.887 "r_mbytes_per_sec": 0, 00:30:29.887 "w_mbytes_per_sec": 0 00:30:29.887 }, 00:30:29.887 "claimed": false, 00:30:29.887 "zoned": false, 00:30:29.887 "supported_io_types": { 00:30:29.887 "read": true, 00:30:29.887 "write": true, 00:30:29.887 "unmap": false, 00:30:29.887 "flush": false, 00:30:29.887 "reset": true, 00:30:29.887 "nvme_admin": false, 00:30:29.887 "nvme_io": false, 00:30:29.887 "nvme_io_md": false, 00:30:29.887 "write_zeroes": true, 00:30:29.887 "zcopy": false, 00:30:29.887 "get_zone_info": false, 00:30:29.887 "zone_management": false, 00:30:29.887 "zone_append": false, 00:30:29.887 "compare": false, 00:30:29.887 "compare_and_write": false, 00:30:29.887 "abort": false, 00:30:29.887 "seek_hole": false, 00:30:29.887 "seek_data": false, 00:30:29.887 "copy": false, 00:30:29.887 "nvme_iov_md": false 00:30:29.887 }, 00:30:29.887 "driver_specific": { 00:30:29.887 "raid": { 00:30:29.887 "uuid": "ad1f099e-1e7a-407f-9fa3-dec04a270f8f", 00:30:29.887 "strip_size_kb": 64, 00:30:29.887 "state": "online", 00:30:29.887 "raid_level": "raid5f", 00:30:29.887 "superblock": true, 00:30:29.887 "num_base_bdevs": 4, 00:30:29.887 "num_base_bdevs_discovered": 4, 00:30:29.887 "num_base_bdevs_operational": 4, 00:30:29.887 "base_bdevs_list": [ 00:30:29.887 { 00:30:29.887 
"name": "BaseBdev1", 00:30:29.887 "uuid": "afebf04d-2c51-438d-90c6-305ff0cb6f1a", 00:30:29.887 "is_configured": true, 00:30:29.887 "data_offset": 2048, 00:30:29.887 "data_size": 63488 00:30:29.887 }, 00:30:29.887 { 00:30:29.887 "name": "BaseBdev2", 00:30:29.887 "uuid": "17265d49-cf95-42eb-9397-dcb8499cd64b", 00:30:29.887 "is_configured": true, 00:30:29.887 "data_offset": 2048, 00:30:29.887 "data_size": 63488 00:30:29.887 }, 00:30:29.887 { 00:30:29.887 "name": "BaseBdev3", 00:30:29.887 "uuid": "2d080536-4a34-446b-b645-afa75cc0dc88", 00:30:29.887 "is_configured": true, 00:30:29.887 "data_offset": 2048, 00:30:29.887 "data_size": 63488 00:30:29.887 }, 00:30:29.887 { 00:30:29.887 "name": "BaseBdev4", 00:30:29.887 "uuid": "4fc72ae4-6417-40df-93ff-a9d9a0ac6ffe", 00:30:29.887 "is_configured": true, 00:30:29.887 "data_offset": 2048, 00:30:29.887 "data_size": 63488 00:30:29.887 } 00:30:29.887 ] 00:30:29.887 } 00:30:29.887 } 00:30:29.887 }' 00:30:29.887 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:29.887 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:30:29.887 BaseBdev2 00:30:29.887 BaseBdev3 00:30:29.887 BaseBdev4' 00:30:29.887 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:29.887 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:29.887 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:30.146 00:14:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:30.146 "name": "BaseBdev1", 00:30:30.146 "aliases": [ 00:30:30.146 "afebf04d-2c51-438d-90c6-305ff0cb6f1a" 00:30:30.146 ], 00:30:30.146 "product_name": "Malloc disk", 00:30:30.146 "block_size": 512, 00:30:30.146 "num_blocks": 65536, 00:30:30.146 "uuid": "afebf04d-2c51-438d-90c6-305ff0cb6f1a", 00:30:30.146 "assigned_rate_limits": { 00:30:30.146 "rw_ios_per_sec": 0, 00:30:30.146 "rw_mbytes_per_sec": 0, 00:30:30.146 "r_mbytes_per_sec": 0, 00:30:30.146 "w_mbytes_per_sec": 0 00:30:30.146 }, 00:30:30.146 "claimed": true, 00:30:30.146 "claim_type": "exclusive_write", 00:30:30.146 "zoned": false, 00:30:30.146 "supported_io_types": { 00:30:30.146 "read": true, 00:30:30.146 "write": true, 00:30:30.146 "unmap": true, 00:30:30.146 "flush": true, 00:30:30.146 "reset": true, 00:30:30.146 "nvme_admin": false, 00:30:30.146 "nvme_io": false, 00:30:30.146 "nvme_io_md": false, 00:30:30.146 "write_zeroes": true, 00:30:30.146 "zcopy": true, 00:30:30.146 "get_zone_info": false, 00:30:30.146 "zone_management": false, 00:30:30.146 "zone_append": false, 00:30:30.146 "compare": false, 00:30:30.146 "compare_and_write": false, 00:30:30.146 "abort": true, 00:30:30.146 "seek_hole": false, 00:30:30.146 "seek_data": false, 00:30:30.146 "copy": true, 00:30:30.146 "nvme_iov_md": false 00:30:30.146 }, 00:30:30.146 "memory_domains": [ 00:30:30.146 { 00:30:30.146 "dma_device_id": "system", 00:30:30.146 "dma_device_type": 1 00:30:30.146 }, 00:30:30.146 { 00:30:30.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:30.146 "dma_device_type": 2 00:30:30.146 } 00:30:30.146 ], 00:30:30.146 "driver_specific": {} 00:30:30.146 }' 00:30:30.146 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:30:30.146 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:30.404 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:30.661 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:30.661 "name": "BaseBdev2", 00:30:30.661 "aliases": [ 00:30:30.661 "17265d49-cf95-42eb-9397-dcb8499cd64b" 00:30:30.661 ], 00:30:30.661 "product_name": "Malloc disk", 00:30:30.661 "block_size": 512, 00:30:30.661 "num_blocks": 65536, 00:30:30.661 "uuid": "17265d49-cf95-42eb-9397-dcb8499cd64b", 00:30:30.661 "assigned_rate_limits": { 00:30:30.661 "rw_ios_per_sec": 0, 00:30:30.661 "rw_mbytes_per_sec": 0, 00:30:30.661 "r_mbytes_per_sec": 0, 00:30:30.661 "w_mbytes_per_sec": 0 00:30:30.661 }, 00:30:30.661 "claimed": true, 00:30:30.661 "claim_type": "exclusive_write", 00:30:30.661 "zoned": false, 00:30:30.661 "supported_io_types": { 00:30:30.661 "read": true, 00:30:30.661 "write": true, 00:30:30.661 "unmap": true, 00:30:30.661 "flush": true, 00:30:30.661 "reset": true, 00:30:30.661 "nvme_admin": false, 00:30:30.661 "nvme_io": false, 00:30:30.661 "nvme_io_md": false, 00:30:30.661 "write_zeroes": true, 00:30:30.661 "zcopy": true, 00:30:30.661 "get_zone_info": false, 00:30:30.662 "zone_management": false, 00:30:30.662 "zone_append": false, 00:30:30.662 "compare": false, 00:30:30.662 "compare_and_write": false, 00:30:30.662 "abort": true, 00:30:30.662 "seek_hole": false, 00:30:30.662 "seek_data": false, 00:30:30.662 "copy": true, 00:30:30.662 "nvme_iov_md": false 00:30:30.662 }, 00:30:30.662 "memory_domains": [ 00:30:30.662 { 00:30:30.662 "dma_device_id": "system", 00:30:30.662 "dma_device_type": 1 00:30:30.662 }, 00:30:30.662 { 00:30:30.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:30.662 "dma_device_type": 2 00:30:30.662 } 00:30:30.662 ], 00:30:30.662 "driver_specific": {} 00:30:30.662 }' 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 
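For readers tracing the xtrace, steps @200 through @204 above reduce to a fetch-and-filter pattern: get the raid bdev's JSON descriptor, list its configured base bdevs, then pull each base bdev's own descriptor for the property checks that follow. The sketch below condenses that pattern; the rpc.py path, socket, and jq filters are copied from this run, while the rpc/sock shorthands are introduced here for brevity and are not part of bdev_raid.sh.

#!/usr/bin/env bash
# Condensed sketch of the bdev_raid.sh@200-@204 steps exercised above.
# NOTE: the rpc/sock variables are shorthands added for this sketch.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Fetch the raid bdev's descriptor and extract the configured base bdev names.
raid_bdev_info=$("$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid | jq '.[]')
base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
    | select(.is_configured == true).name' <<< "$raid_bdev_info")

# Pull each base bdev's own descriptor for the per-bdev assertions.
for name in $base_bdev_names; do
    base_bdev_info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
    : # property assertions on "$base_bdev_info" follow; see the next sketch
done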
00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:30.662 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:30.920 "name": "BaseBdev3", 00:30:30.920 "aliases": [ 00:30:30.920 "2d080536-4a34-446b-b645-afa75cc0dc88" 00:30:30.920 ], 00:30:30.920 "product_name": "Malloc disk", 00:30:30.920 "block_size": 512, 00:30:30.920 "num_blocks": 65536, 00:30:30.920 "uuid": "2d080536-4a34-446b-b645-afa75cc0dc88", 00:30:30.920 "assigned_rate_limits": { 00:30:30.920 "rw_ios_per_sec": 0, 00:30:30.920 "rw_mbytes_per_sec": 0, 00:30:30.920 "r_mbytes_per_sec": 0, 00:30:30.920 "w_mbytes_per_sec": 0 00:30:30.920 }, 00:30:30.920 "claimed": true, 00:30:30.920 "claim_type": "exclusive_write", 00:30:30.920 "zoned": false, 00:30:30.920 "supported_io_types": { 00:30:30.920 "read": true, 00:30:30.920 "write": true, 00:30:30.920 "unmap": true, 00:30:30.920 "flush": true, 00:30:30.920 "reset": true, 00:30:30.920 "nvme_admin": false, 00:30:30.920 "nvme_io": false, 00:30:30.920 "nvme_io_md": false, 00:30:30.920 "write_zeroes": true, 00:30:30.920 "zcopy": true, 00:30:30.920 "get_zone_info": false, 00:30:30.920 "zone_management": false, 00:30:30.920 "zone_append": false, 00:30:30.920 "compare": false, 00:30:30.920 "compare_and_write": false, 00:30:30.920 "abort": true, 00:30:30.920 "seek_hole": false, 00:30:30.920 "seek_data": false, 00:30:30.920 "copy": true, 00:30:30.920 "nvme_iov_md": false 00:30:30.920 }, 00:30:30.920 "memory_domains": [ 00:30:30.920 { 00:30:30.920 "dma_device_id": "system", 00:30:30.920 "dma_device_type": 1 00:30:30.920 }, 00:30:30.920 { 00:30:30.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:30.920 "dma_device_type": 2 00:30:30.920 } 00:30:30.920 ], 00:30:30.920 "driver_specific": {} 00:30:30.920 }' 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
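Each loop iteration then asserts the Malloc base bdev's geometry, exactly as the @205 through @208 checks above show: a 512-byte block size and null md_size, md_interleave, and dif_type. A minimal helper capturing those four assertions might look as follows; the function name check_base_bdev is hypothetical, while the jq fields and expected values are the ones visible in the log.

# Hypothetical helper mirroring bdev_raid.sh@205-@208: every Malloc base
# bdev must report a 512-byte block size and no metadata/DIF configuration.
check_base_bdev() {
    local info=$1
    [[ "$(jq .block_size    <<< "$info")" == 512  ]] || return 1
    [[ "$(jq .md_size       <<< "$info")" == null ]] || return 1
    [[ "$(jq .md_interleave <<< "$info")" == null ]] || return 1
    [[ "$(jq .dif_type      <<< "$info")" == null ]] || return 1
}

Tying the two sketches together, the loop body would call check_base_bdev "$base_bdev_info" and fail the test on any mismatch, which is what the repeated [[ 512 == 512 ]] and [[ null == null ]] evaluations in the surrounding xtrace are doing for BaseBdev1 through BaseBdev4.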
00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:30:30.920 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:31.179 "name": "BaseBdev4", 00:30:31.179 "aliases": [ 00:30:31.179 "4fc72ae4-6417-40df-93ff-a9d9a0ac6ffe" 00:30:31.179 ], 00:30:31.179 "product_name": "Malloc disk", 00:30:31.179 "block_size": 512, 00:30:31.179 "num_blocks": 65536, 00:30:31.179 "uuid": "4fc72ae4-6417-40df-93ff-a9d9a0ac6ffe", 00:30:31.179 "assigned_rate_limits": { 00:30:31.179 "rw_ios_per_sec": 0, 00:30:31.179 "rw_mbytes_per_sec": 0, 00:30:31.179 "r_mbytes_per_sec": 0, 00:30:31.179 "w_mbytes_per_sec": 0 00:30:31.179 }, 00:30:31.179 "claimed": true, 00:30:31.179 "claim_type": "exclusive_write", 00:30:31.179 "zoned": false, 00:30:31.179 "supported_io_types": { 00:30:31.179 "read": true, 00:30:31.179 "write": true, 00:30:31.179 "unmap": true, 00:30:31.179 "flush": true, 00:30:31.179 "reset": true, 00:30:31.179 "nvme_admin": false, 00:30:31.179 "nvme_io": false, 00:30:31.179 "nvme_io_md": false, 00:30:31.179 "write_zeroes": true, 00:30:31.179 "zcopy": true, 00:30:31.179 "get_zone_info": false, 00:30:31.179 "zone_management": false, 00:30:31.179 "zone_append": false, 00:30:31.179 "compare": false, 00:30:31.179 "compare_and_write": false, 00:30:31.179 "abort": true, 00:30:31.179 "seek_hole": false, 00:30:31.179 "seek_data": false, 00:30:31.179 "copy": true, 00:30:31.179 "nvme_iov_md": false 00:30:31.179 }, 00:30:31.179 "memory_domains": [ 00:30:31.179 { 00:30:31.179 "dma_device_id": "system", 00:30:31.179 "dma_device_type": 1 00:30:31.179 }, 00:30:31.179 { 00:30:31.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:31.179 "dma_device_type": 2 00:30:31.179 } 00:30:31.179 ], 00:30:31.179 "driver_specific": {} 00:30:31.179 }' 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:31.179 
00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:31.179 00:14:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:31.179 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:31.438 [2024-07-25 00:14:27.246928] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.696 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:31.954 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:31.954 "name": "Existed_Raid", 00:30:31.954 "uuid": "ad1f099e-1e7a-407f-9fa3-dec04a270f8f", 00:30:31.954 "strip_size_kb": 64, 00:30:31.954 "state": "online", 00:30:31.954 "raid_level": "raid5f", 00:30:31.954 
"superblock": true, 00:30:31.954 "num_base_bdevs": 4, 00:30:31.954 "num_base_bdevs_discovered": 3, 00:30:31.954 "num_base_bdevs_operational": 3, 00:30:31.954 "base_bdevs_list": [ 00:30:31.954 { 00:30:31.954 "name": null, 00:30:31.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.954 "is_configured": false, 00:30:31.954 "data_offset": 2048, 00:30:31.954 "data_size": 63488 00:30:31.954 }, 00:30:31.954 { 00:30:31.954 "name": "BaseBdev2", 00:30:31.954 "uuid": "17265d49-cf95-42eb-9397-dcb8499cd64b", 00:30:31.954 "is_configured": true, 00:30:31.954 "data_offset": 2048, 00:30:31.954 "data_size": 63488 00:30:31.954 }, 00:30:31.954 { 00:30:31.954 "name": "BaseBdev3", 00:30:31.954 "uuid": "2d080536-4a34-446b-b645-afa75cc0dc88", 00:30:31.954 "is_configured": true, 00:30:31.954 "data_offset": 2048, 00:30:31.954 "data_size": 63488 00:30:31.954 }, 00:30:31.954 { 00:30:31.954 "name": "BaseBdev4", 00:30:31.954 "uuid": "4fc72ae4-6417-40df-93ff-a9d9a0ac6ffe", 00:30:31.954 "is_configured": true, 00:30:31.954 "data_offset": 2048, 00:30:31.954 "data_size": 63488 00:30:31.954 } 00:30:31.954 ] 00:30:31.954 }' 00:30:31.954 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:31.954 00:14:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:32.213 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:30:32.213 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:32.213 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.213 00:14:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:32.471 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:32.471 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:32.471 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:32.730 [2024-07-25 00:14:28.345230] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:32.730 [2024-07-25 00:14:28.345380] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:32.730 [2024-07-25 00:14:28.416306] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:32.730 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:32.730 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:32.730 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.730 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:32.988 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:32.988 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:32.988 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:30:33.246 [2024-07-25 00:14:28.864469] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:33.246 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:33.246 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:33.246 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.246 00:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:33.505 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:33.505 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:33.505 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:30:33.763 [2024-07-25 00:14:29.418589] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:30:33.763 [2024-07-25 00:14:29.418642] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:30:33.763 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:33.763 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:33.763 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:30:33.763 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:34.022 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:30:34.022 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:30:34.022 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:30:34.022 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:30:34.022 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:34.022 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:34.281 BaseBdev2 00:30:34.281 00:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:30:34.281 00:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:34.281 00:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:34.281 00:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:34.281 00:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:34.281 00:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:34.281 00:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:30:34.281 00:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:34.539 [ 00:30:34.539 { 00:30:34.539 "name": "BaseBdev2", 00:30:34.539 "aliases": [ 00:30:34.539 "58982565-3114-4246-8df6-63ca9a84de42" 00:30:34.539 ], 00:30:34.539 "product_name": "Malloc disk", 00:30:34.539 "block_size": 512, 00:30:34.539 "num_blocks": 65536, 00:30:34.539 "uuid": "58982565-3114-4246-8df6-63ca9a84de42", 00:30:34.539 "assigned_rate_limits": { 00:30:34.539 "rw_ios_per_sec": 0, 00:30:34.539 "rw_mbytes_per_sec": 0, 00:30:34.539 "r_mbytes_per_sec": 0, 00:30:34.539 "w_mbytes_per_sec": 0 00:30:34.539 }, 00:30:34.539 "claimed": false, 00:30:34.539 "zoned": false, 00:30:34.539 "supported_io_types": { 00:30:34.539 "read": true, 00:30:34.539 "write": true, 00:30:34.539 "unmap": true, 00:30:34.539 "flush": true, 00:30:34.539 "reset": true, 00:30:34.539 "nvme_admin": false, 00:30:34.539 "nvme_io": false, 00:30:34.539 "nvme_io_md": false, 00:30:34.539 "write_zeroes": true, 00:30:34.539 "zcopy": true, 00:30:34.539 "get_zone_info": false, 00:30:34.539 "zone_management": false, 00:30:34.539 "zone_append": false, 00:30:34.539 "compare": false, 00:30:34.539 "compare_and_write": false, 00:30:34.539 "abort": true, 00:30:34.539 "seek_hole": false, 00:30:34.539 "seek_data": false, 00:30:34.539 "copy": true, 00:30:34.539 "nvme_iov_md": false 00:30:34.539 }, 00:30:34.539 "memory_domains": [ 00:30:34.539 { 00:30:34.539 "dma_device_id": "system", 00:30:34.539 "dma_device_type": 1 00:30:34.539 }, 00:30:34.539 { 00:30:34.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:34.539 "dma_device_type": 2 00:30:34.539 } 00:30:34.539 ], 00:30:34.539 "driver_specific": {} 00:30:34.539 } 00:30:34.539 ] 00:30:34.539 00:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:34.539 00:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:34.540 00:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:34.540 00:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:34.798 BaseBdev3 00:30:34.798 00:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:30:34.798 00:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:30:34.798 00:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:34.798 00:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:34.798 00:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:34.798 00:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:34.798 00:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:35.057 00:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:35.057 [ 00:30:35.057 { 00:30:35.057 "name": 
"BaseBdev3", 00:30:35.057 "aliases": [ 00:30:35.057 "c5581d02-f660-4daf-bee7-98e7dcf41851" 00:30:35.057 ], 00:30:35.057 "product_name": "Malloc disk", 00:30:35.057 "block_size": 512, 00:30:35.057 "num_blocks": 65536, 00:30:35.057 "uuid": "c5581d02-f660-4daf-bee7-98e7dcf41851", 00:30:35.057 "assigned_rate_limits": { 00:30:35.057 "rw_ios_per_sec": 0, 00:30:35.057 "rw_mbytes_per_sec": 0, 00:30:35.057 "r_mbytes_per_sec": 0, 00:30:35.057 "w_mbytes_per_sec": 0 00:30:35.057 }, 00:30:35.057 "claimed": false, 00:30:35.057 "zoned": false, 00:30:35.057 "supported_io_types": { 00:30:35.057 "read": true, 00:30:35.057 "write": true, 00:30:35.057 "unmap": true, 00:30:35.057 "flush": true, 00:30:35.057 "reset": true, 00:30:35.057 "nvme_admin": false, 00:30:35.057 "nvme_io": false, 00:30:35.057 "nvme_io_md": false, 00:30:35.057 "write_zeroes": true, 00:30:35.057 "zcopy": true, 00:30:35.057 "get_zone_info": false, 00:30:35.057 "zone_management": false, 00:30:35.057 "zone_append": false, 00:30:35.057 "compare": false, 00:30:35.057 "compare_and_write": false, 00:30:35.057 "abort": true, 00:30:35.057 "seek_hole": false, 00:30:35.057 "seek_data": false, 00:30:35.057 "copy": true, 00:30:35.057 "nvme_iov_md": false 00:30:35.057 }, 00:30:35.057 "memory_domains": [ 00:30:35.057 { 00:30:35.057 "dma_device_id": "system", 00:30:35.057 "dma_device_type": 1 00:30:35.057 }, 00:30:35.057 { 00:30:35.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:35.057 "dma_device_type": 2 00:30:35.057 } 00:30:35.057 ], 00:30:35.057 "driver_specific": {} 00:30:35.057 } 00:30:35.057 ] 00:30:35.057 00:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:35.058 00:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:35.058 00:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:35.058 00:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:30:35.316 BaseBdev4 00:30:35.316 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:30:35.316 00:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:30:35.316 00:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:35.316 00:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:35.316 00:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:35.316 00:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:35.316 00:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:35.575 00:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:35.833 [ 00:30:35.833 { 00:30:35.833 "name": "BaseBdev4", 00:30:35.833 "aliases": [ 00:30:35.833 "7c28534d-27da-4346-9d9e-f5fdc903731e" 00:30:35.833 ], 00:30:35.833 "product_name": "Malloc disk", 00:30:35.833 "block_size": 512, 00:30:35.833 "num_blocks": 65536, 00:30:35.833 "uuid": "7c28534d-27da-4346-9d9e-f5fdc903731e", 
00:30:35.833 "assigned_rate_limits": { 00:30:35.833 "rw_ios_per_sec": 0, 00:30:35.833 "rw_mbytes_per_sec": 0, 00:30:35.833 "r_mbytes_per_sec": 0, 00:30:35.833 "w_mbytes_per_sec": 0 00:30:35.833 }, 00:30:35.833 "claimed": false, 00:30:35.833 "zoned": false, 00:30:35.833 "supported_io_types": { 00:30:35.833 "read": true, 00:30:35.833 "write": true, 00:30:35.833 "unmap": true, 00:30:35.833 "flush": true, 00:30:35.833 "reset": true, 00:30:35.833 "nvme_admin": false, 00:30:35.833 "nvme_io": false, 00:30:35.833 "nvme_io_md": false, 00:30:35.833 "write_zeroes": true, 00:30:35.833 "zcopy": true, 00:30:35.833 "get_zone_info": false, 00:30:35.834 "zone_management": false, 00:30:35.834 "zone_append": false, 00:30:35.834 "compare": false, 00:30:35.834 "compare_and_write": false, 00:30:35.834 "abort": true, 00:30:35.834 "seek_hole": false, 00:30:35.834 "seek_data": false, 00:30:35.834 "copy": true, 00:30:35.834 "nvme_iov_md": false 00:30:35.834 }, 00:30:35.834 "memory_domains": [ 00:30:35.834 { 00:30:35.834 "dma_device_id": "system", 00:30:35.834 "dma_device_type": 1 00:30:35.834 }, 00:30:35.834 { 00:30:35.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:35.834 "dma_device_type": 2 00:30:35.834 } 00:30:35.834 ], 00:30:35.834 "driver_specific": {} 00:30:35.834 } 00:30:35.834 ] 00:30:35.834 00:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:35.834 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:35.834 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:35.834 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:30:36.093 [2024-07-25 00:14:31.796890] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:36.093 [2024-07-25 00:14:31.796940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:36.093 [2024-07-25 00:14:31.796975] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:36.093 [2024-07-25 00:14:31.798774] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:36.093 [2024-07-25 00:14:31.798834] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:36.093 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:36.093 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:36.093 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:36.093 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:36.093 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:36.093 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:36.093 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:36.093 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:36.093 00:14:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:36.093 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:36.093 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.093 00:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:36.352 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:36.352 "name": "Existed_Raid", 00:30:36.352 "uuid": "8170c4b0-5412-4f44-bbac-b8778c442baf", 00:30:36.352 "strip_size_kb": 64, 00:30:36.352 "state": "configuring", 00:30:36.352 "raid_level": "raid5f", 00:30:36.352 "superblock": true, 00:30:36.352 "num_base_bdevs": 4, 00:30:36.352 "num_base_bdevs_discovered": 3, 00:30:36.352 "num_base_bdevs_operational": 4, 00:30:36.352 "base_bdevs_list": [ 00:30:36.352 { 00:30:36.352 "name": "BaseBdev1", 00:30:36.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.352 "is_configured": false, 00:30:36.352 "data_offset": 0, 00:30:36.352 "data_size": 0 00:30:36.352 }, 00:30:36.352 { 00:30:36.352 "name": "BaseBdev2", 00:30:36.352 "uuid": "58982565-3114-4246-8df6-63ca9a84de42", 00:30:36.352 "is_configured": true, 00:30:36.352 "data_offset": 2048, 00:30:36.352 "data_size": 63488 00:30:36.352 }, 00:30:36.352 { 00:30:36.352 "name": "BaseBdev3", 00:30:36.352 "uuid": "c5581d02-f660-4daf-bee7-98e7dcf41851", 00:30:36.352 "is_configured": true, 00:30:36.352 "data_offset": 2048, 00:30:36.352 "data_size": 63488 00:30:36.352 }, 00:30:36.352 { 00:30:36.352 "name": "BaseBdev4", 00:30:36.352 "uuid": "7c28534d-27da-4346-9d9e-f5fdc903731e", 00:30:36.352 "is_configured": true, 00:30:36.352 "data_offset": 2048, 00:30:36.352 "data_size": 63488 00:30:36.352 } 00:30:36.352 ] 00:30:36.352 }' 00:30:36.352 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:36.352 00:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.611 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:36.870 [2024-07-25 00:14:32.549039] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.870 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:37.128 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:37.128 "name": "Existed_Raid", 00:30:37.128 "uuid": "8170c4b0-5412-4f44-bbac-b8778c442baf", 00:30:37.128 "strip_size_kb": 64, 00:30:37.128 "state": "configuring", 00:30:37.128 "raid_level": "raid5f", 00:30:37.128 "superblock": true, 00:30:37.128 "num_base_bdevs": 4, 00:30:37.128 "num_base_bdevs_discovered": 2, 00:30:37.128 "num_base_bdevs_operational": 4, 00:30:37.129 "base_bdevs_list": [ 00:30:37.129 { 00:30:37.129 "name": "BaseBdev1", 00:30:37.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.129 "is_configured": false, 00:30:37.129 "data_offset": 0, 00:30:37.129 "data_size": 0 00:30:37.129 }, 00:30:37.129 { 00:30:37.129 "name": null, 00:30:37.129 "uuid": "58982565-3114-4246-8df6-63ca9a84de42", 00:30:37.129 "is_configured": false, 00:30:37.129 "data_offset": 2048, 00:30:37.129 "data_size": 63488 00:30:37.129 }, 00:30:37.129 { 00:30:37.129 "name": "BaseBdev3", 00:30:37.129 "uuid": "c5581d02-f660-4daf-bee7-98e7dcf41851", 00:30:37.129 "is_configured": true, 00:30:37.129 "data_offset": 2048, 00:30:37.129 "data_size": 63488 00:30:37.129 }, 00:30:37.129 { 00:30:37.129 "name": "BaseBdev4", 00:30:37.129 "uuid": "7c28534d-27da-4346-9d9e-f5fdc903731e", 00:30:37.129 "is_configured": true, 00:30:37.129 "data_offset": 2048, 00:30:37.129 "data_size": 63488 00:30:37.129 } 00:30:37.129 ] 00:30:37.129 }' 00:30:37.129 00:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:37.129 00:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.387 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:37.387 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:37.646 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:30:37.646 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:37.646 BaseBdev1 00:30:37.646 [2024-07-25 00:14:33.468904] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:37.646 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:30:37.646 00:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:37.646 00:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:37.646 00:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:37.646 00:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:37.646 00:14:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:37.646 00:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:37.904 00:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:38.162 [ 00:30:38.162 { 00:30:38.162 "name": "BaseBdev1", 00:30:38.162 "aliases": [ 00:30:38.162 "55a4570a-1d95-4da6-96bc-ef61fac13a9a" 00:30:38.162 ], 00:30:38.162 "product_name": "Malloc disk", 00:30:38.162 "block_size": 512, 00:30:38.162 "num_blocks": 65536, 00:30:38.162 "uuid": "55a4570a-1d95-4da6-96bc-ef61fac13a9a", 00:30:38.162 "assigned_rate_limits": { 00:30:38.162 "rw_ios_per_sec": 0, 00:30:38.162 "rw_mbytes_per_sec": 0, 00:30:38.162 "r_mbytes_per_sec": 0, 00:30:38.162 "w_mbytes_per_sec": 0 00:30:38.162 }, 00:30:38.162 "claimed": true, 00:30:38.162 "claim_type": "exclusive_write", 00:30:38.162 "zoned": false, 00:30:38.162 "supported_io_types": { 00:30:38.162 "read": true, 00:30:38.162 "write": true, 00:30:38.162 "unmap": true, 00:30:38.162 "flush": true, 00:30:38.162 "reset": true, 00:30:38.162 "nvme_admin": false, 00:30:38.162 "nvme_io": false, 00:30:38.162 "nvme_io_md": false, 00:30:38.162 "write_zeroes": true, 00:30:38.162 "zcopy": true, 00:30:38.162 "get_zone_info": false, 00:30:38.162 "zone_management": false, 00:30:38.162 "zone_append": false, 00:30:38.162 "compare": false, 00:30:38.162 "compare_and_write": false, 00:30:38.162 "abort": true, 00:30:38.162 "seek_hole": false, 00:30:38.162 "seek_data": false, 00:30:38.162 "copy": true, 00:30:38.162 "nvme_iov_md": false 00:30:38.162 }, 00:30:38.162 "memory_domains": [ 00:30:38.162 { 00:30:38.162 "dma_device_id": "system", 00:30:38.162 "dma_device_type": 1 00:30:38.162 }, 00:30:38.162 { 00:30:38.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.162 "dma_device_type": 2 00:30:38.162 } 00:30:38.162 ], 00:30:38.162 "driver_specific": {} 00:30:38.162 } 00:30:38.162 ] 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:38.162 00:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.420 00:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:38.420 "name": "Existed_Raid", 00:30:38.420 "uuid": "8170c4b0-5412-4f44-bbac-b8778c442baf", 00:30:38.420 "strip_size_kb": 64, 00:30:38.420 "state": "configuring", 00:30:38.420 "raid_level": "raid5f", 00:30:38.420 "superblock": true, 00:30:38.420 "num_base_bdevs": 4, 00:30:38.420 "num_base_bdevs_discovered": 3, 00:30:38.420 "num_base_bdevs_operational": 4, 00:30:38.420 "base_bdevs_list": [ 00:30:38.420 { 00:30:38.420 "name": "BaseBdev1", 00:30:38.420 "uuid": "55a4570a-1d95-4da6-96bc-ef61fac13a9a", 00:30:38.420 "is_configured": true, 00:30:38.420 "data_offset": 2048, 00:30:38.420 "data_size": 63488 00:30:38.420 }, 00:30:38.420 { 00:30:38.420 "name": null, 00:30:38.420 "uuid": "58982565-3114-4246-8df6-63ca9a84de42", 00:30:38.420 "is_configured": false, 00:30:38.420 "data_offset": 2048, 00:30:38.420 "data_size": 63488 00:30:38.420 }, 00:30:38.420 { 00:30:38.420 "name": "BaseBdev3", 00:30:38.420 "uuid": "c5581d02-f660-4daf-bee7-98e7dcf41851", 00:30:38.420 "is_configured": true, 00:30:38.420 "data_offset": 2048, 00:30:38.420 "data_size": 63488 00:30:38.420 }, 00:30:38.420 { 00:30:38.420 "name": "BaseBdev4", 00:30:38.420 "uuid": "7c28534d-27da-4346-9d9e-f5fdc903731e", 00:30:38.420 "is_configured": true, 00:30:38.420 "data_offset": 2048, 00:30:38.420 "data_size": 63488 00:30:38.420 } 00:30:38.420 ] 00:30:38.420 }' 00:30:38.420 00:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:38.420 00:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.986 00:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:38.986 00:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:38.986 00:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:30:38.986 00:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:30:39.245 [2024-07-25 00:14:35.005435] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:39.245 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:39.245 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:39.245 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:39.245 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:39.245 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:39.245 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:39.245 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:39.245 00:14:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:39.245 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:39.245 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:39.245 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.245 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:39.504 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:39.504 "name": "Existed_Raid", 00:30:39.504 "uuid": "8170c4b0-5412-4f44-bbac-b8778c442baf", 00:30:39.504 "strip_size_kb": 64, 00:30:39.504 "state": "configuring", 00:30:39.504 "raid_level": "raid5f", 00:30:39.504 "superblock": true, 00:30:39.504 "num_base_bdevs": 4, 00:30:39.504 "num_base_bdevs_discovered": 2, 00:30:39.504 "num_base_bdevs_operational": 4, 00:30:39.504 "base_bdevs_list": [ 00:30:39.504 { 00:30:39.504 "name": "BaseBdev1", 00:30:39.504 "uuid": "55a4570a-1d95-4da6-96bc-ef61fac13a9a", 00:30:39.504 "is_configured": true, 00:30:39.504 "data_offset": 2048, 00:30:39.504 "data_size": 63488 00:30:39.504 }, 00:30:39.504 { 00:30:39.504 "name": null, 00:30:39.504 "uuid": "58982565-3114-4246-8df6-63ca9a84de42", 00:30:39.504 "is_configured": false, 00:30:39.504 "data_offset": 2048, 00:30:39.504 "data_size": 63488 00:30:39.504 }, 00:30:39.504 { 00:30:39.504 "name": null, 00:30:39.504 "uuid": "c5581d02-f660-4daf-bee7-98e7dcf41851", 00:30:39.504 "is_configured": false, 00:30:39.504 "data_offset": 2048, 00:30:39.504 "data_size": 63488 00:30:39.504 }, 00:30:39.504 { 00:30:39.504 "name": "BaseBdev4", 00:30:39.504 "uuid": "7c28534d-27da-4346-9d9e-f5fdc903731e", 00:30:39.504 "is_configured": true, 00:30:39.504 "data_offset": 2048, 00:30:39.504 "data_size": 63488 00:30:39.504 } 00:30:39.504 ] 00:30:39.504 }' 00:30:39.504 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:39.504 00:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.763 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.763 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:40.021 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:30:40.021 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:40.280 [2024-07-25 00:14:35.925617] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.280 00:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.280 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:40.280 "name": "Existed_Raid", 00:30:40.280 "uuid": "8170c4b0-5412-4f44-bbac-b8778c442baf", 00:30:40.280 "strip_size_kb": 64, 00:30:40.280 "state": "configuring", 00:30:40.280 "raid_level": "raid5f", 00:30:40.280 "superblock": true, 00:30:40.280 "num_base_bdevs": 4, 00:30:40.280 "num_base_bdevs_discovered": 3, 00:30:40.280 "num_base_bdevs_operational": 4, 00:30:40.280 "base_bdevs_list": [ 00:30:40.280 { 00:30:40.280 "name": "BaseBdev1", 00:30:40.280 "uuid": "55a4570a-1d95-4da6-96bc-ef61fac13a9a", 00:30:40.280 "is_configured": true, 00:30:40.280 "data_offset": 2048, 00:30:40.280 "data_size": 63488 00:30:40.280 }, 00:30:40.280 { 00:30:40.280 "name": null, 00:30:40.280 "uuid": "58982565-3114-4246-8df6-63ca9a84de42", 00:30:40.280 "is_configured": false, 00:30:40.280 "data_offset": 2048, 00:30:40.280 "data_size": 63488 00:30:40.280 }, 00:30:40.280 { 00:30:40.280 "name": "BaseBdev3", 00:30:40.280 "uuid": "c5581d02-f660-4daf-bee7-98e7dcf41851", 00:30:40.280 "is_configured": true, 00:30:40.280 "data_offset": 2048, 00:30:40.280 "data_size": 63488 00:30:40.280 }, 00:30:40.280 { 00:30:40.280 "name": "BaseBdev4", 00:30:40.280 "uuid": "7c28534d-27da-4346-9d9e-f5fdc903731e", 00:30:40.280 "is_configured": true, 00:30:40.280 "data_offset": 2048, 00:30:40.280 "data_size": 63488 00:30:40.280 } 00:30:40.280 ] 00:30:40.280 }' 00:30:40.280 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:40.280 00:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.848 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.848 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:40.848 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:30:40.848 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:41.107 [2024-07-25 00:14:36.825830] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.107 00:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:41.365 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:41.365 "name": "Existed_Raid", 00:30:41.365 "uuid": "8170c4b0-5412-4f44-bbac-b8778c442baf", 00:30:41.365 "strip_size_kb": 64, 00:30:41.365 "state": "configuring", 00:30:41.365 "raid_level": "raid5f", 00:30:41.365 "superblock": true, 00:30:41.365 "num_base_bdevs": 4, 00:30:41.365 "num_base_bdevs_discovered": 2, 00:30:41.365 "num_base_bdevs_operational": 4, 00:30:41.365 "base_bdevs_list": [ 00:30:41.365 { 00:30:41.365 "name": null, 00:30:41.365 "uuid": "55a4570a-1d95-4da6-96bc-ef61fac13a9a", 00:30:41.365 "is_configured": false, 00:30:41.365 "data_offset": 2048, 00:30:41.365 "data_size": 63488 00:30:41.365 }, 00:30:41.365 { 00:30:41.365 "name": null, 00:30:41.365 "uuid": "58982565-3114-4246-8df6-63ca9a84de42", 00:30:41.365 "is_configured": false, 00:30:41.365 "data_offset": 2048, 00:30:41.365 "data_size": 63488 00:30:41.365 }, 00:30:41.365 { 00:30:41.365 "name": "BaseBdev3", 00:30:41.365 "uuid": "c5581d02-f660-4daf-bee7-98e7dcf41851", 00:30:41.365 "is_configured": true, 00:30:41.365 "data_offset": 2048, 00:30:41.365 "data_size": 63488 00:30:41.365 }, 00:30:41.365 { 00:30:41.365 "name": "BaseBdev4", 00:30:41.365 "uuid": "7c28534d-27da-4346-9d9e-f5fdc903731e", 00:30:41.365 "is_configured": true, 00:30:41.365 "data_offset": 2048, 00:30:41.365 "data_size": 63488 00:30:41.365 } 00:30:41.365 ] 00:30:41.365 }' 00:30:41.365 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:41.365 00:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.624 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.624 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:41.882 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:30:41.883 
00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:42.141 [2024-07-25 00:14:37.793250] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:42.141 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:42.141 "name": "Existed_Raid", 00:30:42.141 "uuid": "8170c4b0-5412-4f44-bbac-b8778c442baf", 00:30:42.141 "strip_size_kb": 64, 00:30:42.141 "state": "configuring", 00:30:42.141 "raid_level": "raid5f", 00:30:42.141 "superblock": true, 00:30:42.141 "num_base_bdevs": 4, 00:30:42.141 "num_base_bdevs_discovered": 3, 00:30:42.141 "num_base_bdevs_operational": 4, 00:30:42.141 "base_bdevs_list": [ 00:30:42.141 { 00:30:42.141 "name": null, 00:30:42.141 "uuid": "55a4570a-1d95-4da6-96bc-ef61fac13a9a", 00:30:42.141 "is_configured": false, 00:30:42.141 "data_offset": 2048, 00:30:42.142 "data_size": 63488 00:30:42.142 }, 00:30:42.142 { 00:30:42.142 "name": "BaseBdev2", 00:30:42.142 "uuid": "58982565-3114-4246-8df6-63ca9a84de42", 00:30:42.142 "is_configured": true, 00:30:42.142 "data_offset": 2048, 00:30:42.142 "data_size": 63488 00:30:42.142 }, 00:30:42.142 { 00:30:42.142 "name": "BaseBdev3", 00:30:42.142 "uuid": "c5581d02-f660-4daf-bee7-98e7dcf41851", 00:30:42.142 "is_configured": true, 00:30:42.142 "data_offset": 2048, 00:30:42.142 "data_size": 63488 00:30:42.142 }, 00:30:42.142 { 00:30:42.142 "name": "BaseBdev4", 00:30:42.142 "uuid": "7c28534d-27da-4346-9d9e-f5fdc903731e", 00:30:42.142 "is_configured": true, 00:30:42.142 "data_offset": 2048, 00:30:42.142 "data_size": 63488 00:30:42.142 } 00:30:42.142 ] 00:30:42.142 }' 00:30:42.142 00:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:42.142 00:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.709 00:14:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.709 00:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:42.709 00:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:30:42.709 00:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:42.709 00:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.980 00:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 55a4570a-1d95-4da6-96bc-ef61fac13a9a 00:30:43.270 [2024-07-25 00:14:38.965915] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:43.270 [2024-07-25 00:14:38.966153] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:30:43.270 [2024-07-25 00:14:38.966169] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:43.270 [2024-07-25 00:14:38.966271] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ee0 00:30:43.270 NewBaseBdev 00:30:43.270 [2024-07-25 00:14:38.972167] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:30:43.270 [2024-07-25 00:14:38.972355] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000009380 00:30:43.270 [2024-07-25 00:14:38.972645] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:43.270 00:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:30:43.270 00:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:30:43.270 00:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:43.270 00:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:43.270 00:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:43.270 00:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:43.270 00:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:43.529 [ 00:30:43.529 { 00:30:43.529 "name": "NewBaseBdev", 00:30:43.529 "aliases": [ 00:30:43.529 "55a4570a-1d95-4da6-96bc-ef61fac13a9a" 00:30:43.529 ], 00:30:43.529 "product_name": "Malloc disk", 00:30:43.529 "block_size": 512, 00:30:43.529 "num_blocks": 65536, 00:30:43.529 "uuid": "55a4570a-1d95-4da6-96bc-ef61fac13a9a", 00:30:43.529 "assigned_rate_limits": { 00:30:43.529 "rw_ios_per_sec": 0, 00:30:43.529 "rw_mbytes_per_sec": 0, 00:30:43.529 "r_mbytes_per_sec": 0, 00:30:43.529 "w_mbytes_per_sec": 0 00:30:43.529 }, 00:30:43.529 
"claimed": true, 00:30:43.529 "claim_type": "exclusive_write", 00:30:43.529 "zoned": false, 00:30:43.529 "supported_io_types": { 00:30:43.529 "read": true, 00:30:43.529 "write": true, 00:30:43.529 "unmap": true, 00:30:43.529 "flush": true, 00:30:43.529 "reset": true, 00:30:43.529 "nvme_admin": false, 00:30:43.529 "nvme_io": false, 00:30:43.529 "nvme_io_md": false, 00:30:43.529 "write_zeroes": true, 00:30:43.529 "zcopy": true, 00:30:43.529 "get_zone_info": false, 00:30:43.529 "zone_management": false, 00:30:43.529 "zone_append": false, 00:30:43.529 "compare": false, 00:30:43.529 "compare_and_write": false, 00:30:43.529 "abort": true, 00:30:43.529 "seek_hole": false, 00:30:43.529 "seek_data": false, 00:30:43.529 "copy": true, 00:30:43.529 "nvme_iov_md": false 00:30:43.529 }, 00:30:43.529 "memory_domains": [ 00:30:43.529 { 00:30:43.529 "dma_device_id": "system", 00:30:43.529 "dma_device_type": 1 00:30:43.529 }, 00:30:43.529 { 00:30:43.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:43.529 "dma_device_type": 2 00:30:43.529 } 00:30:43.529 ], 00:30:43.529 "driver_specific": {} 00:30:43.529 } 00:30:43.529 ] 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.529 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:43.788 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:43.788 "name": "Existed_Raid", 00:30:43.788 "uuid": "8170c4b0-5412-4f44-bbac-b8778c442baf", 00:30:43.788 "strip_size_kb": 64, 00:30:43.788 "state": "online", 00:30:43.788 "raid_level": "raid5f", 00:30:43.788 "superblock": true, 00:30:43.788 "num_base_bdevs": 4, 00:30:43.788 "num_base_bdevs_discovered": 4, 00:30:43.788 "num_base_bdevs_operational": 4, 00:30:43.788 "base_bdevs_list": [ 00:30:43.788 { 00:30:43.788 "name": "NewBaseBdev", 00:30:43.788 "uuid": "55a4570a-1d95-4da6-96bc-ef61fac13a9a", 00:30:43.788 "is_configured": true, 00:30:43.788 "data_offset": 2048, 00:30:43.788 "data_size": 63488 00:30:43.788 }, 00:30:43.788 { 00:30:43.788 "name": "BaseBdev2", 00:30:43.788 
"uuid": "58982565-3114-4246-8df6-63ca9a84de42", 00:30:43.788 "is_configured": true, 00:30:43.788 "data_offset": 2048, 00:30:43.788 "data_size": 63488 00:30:43.788 }, 00:30:43.788 { 00:30:43.788 "name": "BaseBdev3", 00:30:43.788 "uuid": "c5581d02-f660-4daf-bee7-98e7dcf41851", 00:30:43.788 "is_configured": true, 00:30:43.788 "data_offset": 2048, 00:30:43.788 "data_size": 63488 00:30:43.788 }, 00:30:43.788 { 00:30:43.788 "name": "BaseBdev4", 00:30:43.788 "uuid": "7c28534d-27da-4346-9d9e-f5fdc903731e", 00:30:43.788 "is_configured": true, 00:30:43.788 "data_offset": 2048, 00:30:43.788 "data_size": 63488 00:30:43.788 } 00:30:43.788 ] 00:30:43.788 }' 00:30:43.788 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:43.788 00:14:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.047 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:30:44.047 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:44.047 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:44.047 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:44.047 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:44.047 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:30:44.048 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:44.048 00:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:44.307 [2024-07-25 00:14:40.111160] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:44.307 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:44.307 "name": "Existed_Raid", 00:30:44.307 "aliases": [ 00:30:44.307 "8170c4b0-5412-4f44-bbac-b8778c442baf" 00:30:44.307 ], 00:30:44.307 "product_name": "Raid Volume", 00:30:44.307 "block_size": 512, 00:30:44.307 "num_blocks": 190464, 00:30:44.307 "uuid": "8170c4b0-5412-4f44-bbac-b8778c442baf", 00:30:44.307 "assigned_rate_limits": { 00:30:44.307 "rw_ios_per_sec": 0, 00:30:44.307 "rw_mbytes_per_sec": 0, 00:30:44.307 "r_mbytes_per_sec": 0, 00:30:44.307 "w_mbytes_per_sec": 0 00:30:44.307 }, 00:30:44.307 "claimed": false, 00:30:44.307 "zoned": false, 00:30:44.307 "supported_io_types": { 00:30:44.307 "read": true, 00:30:44.307 "write": true, 00:30:44.307 "unmap": false, 00:30:44.307 "flush": false, 00:30:44.307 "reset": true, 00:30:44.307 "nvme_admin": false, 00:30:44.307 "nvme_io": false, 00:30:44.307 "nvme_io_md": false, 00:30:44.307 "write_zeroes": true, 00:30:44.307 "zcopy": false, 00:30:44.307 "get_zone_info": false, 00:30:44.307 "zone_management": false, 00:30:44.307 "zone_append": false, 00:30:44.307 "compare": false, 00:30:44.307 "compare_and_write": false, 00:30:44.307 "abort": false, 00:30:44.307 "seek_hole": false, 00:30:44.307 "seek_data": false, 00:30:44.307 "copy": false, 00:30:44.307 "nvme_iov_md": false 00:30:44.307 }, 00:30:44.307 "driver_specific": { 00:30:44.307 "raid": { 00:30:44.307 "uuid": "8170c4b0-5412-4f44-bbac-b8778c442baf", 00:30:44.307 "strip_size_kb": 64, 00:30:44.307 "state": "online", 00:30:44.307 
"raid_level": "raid5f", 00:30:44.307 "superblock": true, 00:30:44.307 "num_base_bdevs": 4, 00:30:44.307 "num_base_bdevs_discovered": 4, 00:30:44.307 "num_base_bdevs_operational": 4, 00:30:44.307 "base_bdevs_list": [ 00:30:44.307 { 00:30:44.307 "name": "NewBaseBdev", 00:30:44.307 "uuid": "55a4570a-1d95-4da6-96bc-ef61fac13a9a", 00:30:44.307 "is_configured": true, 00:30:44.307 "data_offset": 2048, 00:30:44.307 "data_size": 63488 00:30:44.307 }, 00:30:44.307 { 00:30:44.307 "name": "BaseBdev2", 00:30:44.307 "uuid": "58982565-3114-4246-8df6-63ca9a84de42", 00:30:44.307 "is_configured": true, 00:30:44.307 "data_offset": 2048, 00:30:44.307 "data_size": 63488 00:30:44.307 }, 00:30:44.307 { 00:30:44.307 "name": "BaseBdev3", 00:30:44.307 "uuid": "c5581d02-f660-4daf-bee7-98e7dcf41851", 00:30:44.307 "is_configured": true, 00:30:44.307 "data_offset": 2048, 00:30:44.307 "data_size": 63488 00:30:44.307 }, 00:30:44.307 { 00:30:44.307 "name": "BaseBdev4", 00:30:44.307 "uuid": "7c28534d-27da-4346-9d9e-f5fdc903731e", 00:30:44.307 "is_configured": true, 00:30:44.307 "data_offset": 2048, 00:30:44.307 "data_size": 63488 00:30:44.307 } 00:30:44.307 ] 00:30:44.307 } 00:30:44.307 } 00:30:44.307 }' 00:30:44.307 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:44.307 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:30:44.307 BaseBdev2 00:30:44.307 BaseBdev3 00:30:44.307 BaseBdev4' 00:30:44.307 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:44.307 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:30:44.307 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:44.567 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:44.567 "name": "NewBaseBdev", 00:30:44.567 "aliases": [ 00:30:44.567 "55a4570a-1d95-4da6-96bc-ef61fac13a9a" 00:30:44.567 ], 00:30:44.567 "product_name": "Malloc disk", 00:30:44.567 "block_size": 512, 00:30:44.567 "num_blocks": 65536, 00:30:44.567 "uuid": "55a4570a-1d95-4da6-96bc-ef61fac13a9a", 00:30:44.567 "assigned_rate_limits": { 00:30:44.567 "rw_ios_per_sec": 0, 00:30:44.567 "rw_mbytes_per_sec": 0, 00:30:44.567 "r_mbytes_per_sec": 0, 00:30:44.567 "w_mbytes_per_sec": 0 00:30:44.567 }, 00:30:44.567 "claimed": true, 00:30:44.567 "claim_type": "exclusive_write", 00:30:44.567 "zoned": false, 00:30:44.567 "supported_io_types": { 00:30:44.567 "read": true, 00:30:44.567 "write": true, 00:30:44.567 "unmap": true, 00:30:44.567 "flush": true, 00:30:44.567 "reset": true, 00:30:44.567 "nvme_admin": false, 00:30:44.567 "nvme_io": false, 00:30:44.567 "nvme_io_md": false, 00:30:44.567 "write_zeroes": true, 00:30:44.567 "zcopy": true, 00:30:44.567 "get_zone_info": false, 00:30:44.567 "zone_management": false, 00:30:44.567 "zone_append": false, 00:30:44.567 "compare": false, 00:30:44.567 "compare_and_write": false, 00:30:44.567 "abort": true, 00:30:44.567 "seek_hole": false, 00:30:44.567 "seek_data": false, 00:30:44.567 "copy": true, 00:30:44.567 "nvme_iov_md": false 00:30:44.567 }, 00:30:44.567 "memory_domains": [ 00:30:44.567 { 00:30:44.567 "dma_device_id": "system", 00:30:44.567 "dma_device_type": 1 00:30:44.567 }, 00:30:44.567 { 00:30:44.567 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:44.567 "dma_device_type": 2 00:30:44.567 } 00:30:44.567 ], 00:30:44.567 "driver_specific": {} 00:30:44.567 }' 00:30:44.567 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:44.567 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:44.567 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:44.567 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:44.567 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:44.825 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:44.825 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:44.826 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:44.826 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:44.826 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:44.826 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:44.826 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:44.826 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:44.826 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:44.826 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:45.084 "name": "BaseBdev2", 00:30:45.084 "aliases": [ 00:30:45.084 "58982565-3114-4246-8df6-63ca9a84de42" 00:30:45.084 ], 00:30:45.084 "product_name": "Malloc disk", 00:30:45.084 "block_size": 512, 00:30:45.084 "num_blocks": 65536, 00:30:45.084 "uuid": "58982565-3114-4246-8df6-63ca9a84de42", 00:30:45.084 "assigned_rate_limits": { 00:30:45.084 "rw_ios_per_sec": 0, 00:30:45.084 "rw_mbytes_per_sec": 0, 00:30:45.084 "r_mbytes_per_sec": 0, 00:30:45.084 "w_mbytes_per_sec": 0 00:30:45.084 }, 00:30:45.084 "claimed": true, 00:30:45.084 "claim_type": "exclusive_write", 00:30:45.084 "zoned": false, 00:30:45.084 "supported_io_types": { 00:30:45.084 "read": true, 00:30:45.084 "write": true, 00:30:45.084 "unmap": true, 00:30:45.084 "flush": true, 00:30:45.084 "reset": true, 00:30:45.084 "nvme_admin": false, 00:30:45.084 "nvme_io": false, 00:30:45.084 "nvme_io_md": false, 00:30:45.084 "write_zeroes": true, 00:30:45.084 "zcopy": true, 00:30:45.084 "get_zone_info": false, 00:30:45.084 "zone_management": false, 00:30:45.084 "zone_append": false, 00:30:45.084 "compare": false, 00:30:45.084 "compare_and_write": false, 00:30:45.084 "abort": true, 00:30:45.084 "seek_hole": false, 00:30:45.084 "seek_data": false, 00:30:45.084 "copy": true, 00:30:45.084 "nvme_iov_md": false 00:30:45.084 }, 00:30:45.084 "memory_domains": [ 00:30:45.084 { 00:30:45.084 "dma_device_id": "system", 00:30:45.084 "dma_device_type": 1 00:30:45.084 }, 00:30:45.084 { 00:30:45.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.084 "dma_device_type": 2 00:30:45.084 } 00:30:45.084 ], 00:30:45.084 
"driver_specific": {} 00:30:45.084 }' 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:45.084 00:14:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:45.343 "name": "BaseBdev3", 00:30:45.343 "aliases": [ 00:30:45.343 "c5581d02-f660-4daf-bee7-98e7dcf41851" 00:30:45.343 ], 00:30:45.343 "product_name": "Malloc disk", 00:30:45.343 "block_size": 512, 00:30:45.343 "num_blocks": 65536, 00:30:45.343 "uuid": "c5581d02-f660-4daf-bee7-98e7dcf41851", 00:30:45.343 "assigned_rate_limits": { 00:30:45.343 "rw_ios_per_sec": 0, 00:30:45.343 "rw_mbytes_per_sec": 0, 00:30:45.343 "r_mbytes_per_sec": 0, 00:30:45.343 "w_mbytes_per_sec": 0 00:30:45.343 }, 00:30:45.343 "claimed": true, 00:30:45.343 "claim_type": "exclusive_write", 00:30:45.343 "zoned": false, 00:30:45.343 "supported_io_types": { 00:30:45.343 "read": true, 00:30:45.343 "write": true, 00:30:45.343 "unmap": true, 00:30:45.343 "flush": true, 00:30:45.343 "reset": true, 00:30:45.343 "nvme_admin": false, 00:30:45.343 "nvme_io": false, 00:30:45.343 "nvme_io_md": false, 00:30:45.343 "write_zeroes": true, 00:30:45.343 "zcopy": true, 00:30:45.343 "get_zone_info": false, 00:30:45.343 "zone_management": false, 00:30:45.343 "zone_append": false, 00:30:45.343 "compare": false, 00:30:45.343 "compare_and_write": false, 00:30:45.343 "abort": true, 00:30:45.343 "seek_hole": false, 00:30:45.343 "seek_data": false, 00:30:45.343 "copy": true, 00:30:45.343 "nvme_iov_md": false 00:30:45.343 }, 00:30:45.343 "memory_domains": [ 00:30:45.343 { 00:30:45.343 "dma_device_id": "system", 00:30:45.343 "dma_device_type": 1 00:30:45.343 }, 00:30:45.343 { 00:30:45.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.343 "dma_device_type": 2 00:30:45.343 } 00:30:45.343 ], 00:30:45.343 "driver_specific": {} 00:30:45.343 }' 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:30:45.343 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:45.602 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:45.602 "name": "BaseBdev4", 00:30:45.602 "aliases": [ 00:30:45.602 "7c28534d-27da-4346-9d9e-f5fdc903731e" 00:30:45.602 ], 00:30:45.602 "product_name": "Malloc disk", 00:30:45.602 "block_size": 512, 00:30:45.602 "num_blocks": 65536, 00:30:45.602 "uuid": "7c28534d-27da-4346-9d9e-f5fdc903731e", 00:30:45.602 "assigned_rate_limits": { 00:30:45.602 "rw_ios_per_sec": 0, 00:30:45.602 "rw_mbytes_per_sec": 0, 00:30:45.602 "r_mbytes_per_sec": 0, 00:30:45.602 "w_mbytes_per_sec": 0 00:30:45.602 }, 00:30:45.602 "claimed": true, 00:30:45.602 "claim_type": "exclusive_write", 00:30:45.602 "zoned": false, 00:30:45.602 "supported_io_types": { 00:30:45.602 "read": true, 00:30:45.602 "write": true, 00:30:45.602 "unmap": true, 00:30:45.602 "flush": true, 00:30:45.602 "reset": true, 00:30:45.602 "nvme_admin": false, 00:30:45.602 "nvme_io": false, 00:30:45.602 "nvme_io_md": false, 00:30:45.602 "write_zeroes": true, 00:30:45.602 "zcopy": true, 00:30:45.602 "get_zone_info": false, 00:30:45.602 "zone_management": false, 00:30:45.602 "zone_append": false, 00:30:45.602 "compare": false, 00:30:45.602 "compare_and_write": false, 00:30:45.602 "abort": true, 00:30:45.602 "seek_hole": false, 00:30:45.602 "seek_data": false, 00:30:45.602 "copy": true, 00:30:45.602 "nvme_iov_md": false 00:30:45.602 }, 00:30:45.602 "memory_domains": [ 00:30:45.602 { 00:30:45.602 "dma_device_id": "system", 00:30:45.602 "dma_device_type": 1 00:30:45.602 }, 00:30:45.602 { 00:30:45.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.602 "dma_device_type": 2 00:30:45.602 } 00:30:45.602 ], 00:30:45.602 "driver_specific": {} 00:30:45.602 }' 00:30:45.602 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:45.602 00:14:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:45.602 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:45.602 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:45.602 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:45.861 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:45.861 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:45.861 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:45.861 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:45.861 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:45.861 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:45.861 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:45.861 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:46.120 [2024-07-25 00:14:41.755473] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:46.120 [2024-07-25 00:14:41.755506] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:46.120 [2024-07-25 00:14:41.755582] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:46.120 [2024-07-25 00:14:41.755970] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:46.120 [2024-07-25 00:14:41.755990] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name Existed_Raid, state offline 00:30:46.120 00:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 106904 00:30:46.120 00:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 106904 ']' 00:30:46.120 00:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 106904 00:30:46.120 00:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:30:46.120 00:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:46.120 00:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106904 00:30:46.120 killing process with pid 106904 00:30:46.120 00:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:46.120 00:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:46.120 00:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106904' 00:30:46.120 00:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 106904 00:30:46.120 [2024-07-25 00:14:41.810170] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:46.120 00:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 106904 00:30:46.379 [2024-07-25 00:14:42.064310] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:30:47.316 ************************************ 00:30:47.316 END TEST raid5f_state_function_test_sb 00:30:47.316 ************************************ 00:30:47.316 00:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:30:47.316 00:30:47.316 real 0m25.930s 00:30:47.316 user 0m45.423s 00:30:47.316 sys 0m4.193s 00:30:47.316 00:14:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:47.316 00:14:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.316 00:14:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:30:47.316 00:14:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:47.316 00:14:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:47.316 00:14:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:47.316 ************************************ 00:30:47.316 START TEST raid5f_superblock_test 00:30:47.316 ************************************ 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid5f 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid5f '!=' raid1 ']' 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=107879 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 107879 /var/tmp/spdk-raid.sock 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 107879 ']' 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:47.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:47.316 00:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.316 [2024-07-25 00:14:43.108011] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:30:47.316 [2024-07-25 00:14:43.108211] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107879 ] 00:30:47.575 [2024-07-25 00:14:43.275151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.575 [2024-07-25 00:14:43.426176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.834 [2024-07-25 00:14:43.569046] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:30:48.401 malloc1 00:30:48.401 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:48.661 [2024-07-25 00:14:44.467448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:48.661 [2024-07-25 00:14:44.467561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:48.661 [2024-07-25 00:14:44.467595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:30:48.661 [2024-07-25 00:14:44.467609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:48.661 [2024-07-25 00:14:44.469876] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:48.661 [2024-07-25 00:14:44.469933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:48.661 pt1 00:30:48.661 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:30:48.661 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:30:48.661 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:30:48.661 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:30:48.661 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:48.661 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:48.661 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:30:48.661 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:48.661 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:30:48.920 malloc2 00:30:48.920 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:49.178 [2024-07-25 00:14:44.947068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:49.178 [2024-07-25 00:14:44.947150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:49.179 [2024-07-25 00:14:44.947191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:30:49.179 [2024-07-25 00:14:44.947204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:49.179 [2024-07-25 00:14:44.949513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:49.179 [2024-07-25 00:14:44.949552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:49.179 pt2 00:30:49.179 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:30:49.179 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:30:49.179 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:30:49.179 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:30:49.179 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:30:49.179 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:49.179 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:30:49.179 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:49.179 00:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:30:49.438 malloc3 00:30:49.438 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:49.697 [2024-07-25 00:14:45.335275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:49.697 [2024-07-25 00:14:45.335350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:49.697 [2024-07-25 00:14:45.335376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:30:49.697 [2024-07-25 00:14:45.335389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:49.697 [2024-07-25 00:14:45.337579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:49.697 [2024-07-25 00:14:45.337619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:49.697 pt3 00:30:49.697 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:30:49.697 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:30:49.697 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:30:49.697 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:30:49.697 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:30:49.697 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:49.697 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:30:49.697 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:49.697 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:30:49.697 malloc4 00:30:49.697 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:49.956 [2024-07-25 00:14:45.735479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:49.956 [2024-07-25 00:14:45.735589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:49.956 [2024-07-25 00:14:45.735624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:30:49.956 [2024-07-25 00:14:45.735638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:49.956 [2024-07-25 00:14:45.737969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:49.956 [2024-07-25 00:14:45.738012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:49.956 pt4 00:30:49.956 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:30:49.956 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:30:49.956 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:30:50.214 [2024-07-25 00:14:45.923679] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:30:50.214 [2024-07-25 00:14:45.925541] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:50.214 [2024-07-25 00:14:45.925640] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:50.214 [2024-07-25 00:14:45.925722] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:50.214 [2024-07-25 00:14:45.926002] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:30:50.214 [2024-07-25 00:14:45.926029] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:50.214 [2024-07-25 00:14:45.926151] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:30:50.214 [2024-07-25 00:14:45.932025] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:30:50.214 [2024-07-25 00:14:45.932057] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:30:50.214 [2024-07-25 00:14:45.932311] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.215 00:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.473 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:50.473 "name": "raid_bdev1", 00:30:50.473 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666", 00:30:50.473 "strip_size_kb": 64, 00:30:50.473 "state": "online", 00:30:50.473 "raid_level": "raid5f", 00:30:50.473 "superblock": true, 00:30:50.473 "num_base_bdevs": 4, 00:30:50.473 "num_base_bdevs_discovered": 4, 00:30:50.474 "num_base_bdevs_operational": 4, 00:30:50.474 "base_bdevs_list": [ 00:30:50.474 { 00:30:50.474 "name": "pt1", 00:30:50.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:50.474 "is_configured": true, 00:30:50.474 "data_offset": 2048, 00:30:50.474 "data_size": 63488 00:30:50.474 }, 00:30:50.474 { 00:30:50.474 "name": "pt2", 00:30:50.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:50.474 "is_configured": true, 00:30:50.474 "data_offset": 2048, 00:30:50.474 "data_size": 63488 00:30:50.474 }, 00:30:50.474 { 00:30:50.474 "name": "pt3", 
00:30:50.474 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:50.474 "is_configured": true, 00:30:50.474 "data_offset": 2048, 00:30:50.474 "data_size": 63488 00:30:50.474 }, 00:30:50.474 { 00:30:50.474 "name": "pt4", 00:30:50.474 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:50.474 "is_configured": true, 00:30:50.474 "data_offset": 2048, 00:30:50.474 "data_size": 63488 00:30:50.474 } 00:30:50.474 ] 00:30:50.474 }' 00:30:50.474 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:50.474 00:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.733 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:30:50.733 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:30:50.733 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:50.733 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:50.733 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:50.733 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:30:50.733 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:50.733 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:50.992 [2024-07-25 00:14:46.666376] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:50.992 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:50.992 "name": "raid_bdev1", 00:30:50.992 "aliases": [ 00:30:50.992 "024460ef-7dc2-4ffa-8d3a-b7233a009666" 00:30:50.992 ], 00:30:50.992 "product_name": "Raid Volume", 00:30:50.992 "block_size": 512, 00:30:50.992 "num_blocks": 190464, 00:30:50.992 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666", 00:30:50.992 "assigned_rate_limits": { 00:30:50.992 "rw_ios_per_sec": 0, 00:30:50.992 "rw_mbytes_per_sec": 0, 00:30:50.992 "r_mbytes_per_sec": 0, 00:30:50.992 "w_mbytes_per_sec": 0 00:30:50.992 }, 00:30:50.992 "claimed": false, 00:30:50.992 "zoned": false, 00:30:50.992 "supported_io_types": { 00:30:50.992 "read": true, 00:30:50.992 "write": true, 00:30:50.992 "unmap": false, 00:30:50.992 "flush": false, 00:30:50.992 "reset": true, 00:30:50.992 "nvme_admin": false, 00:30:50.992 "nvme_io": false, 00:30:50.992 "nvme_io_md": false, 00:30:50.992 "write_zeroes": true, 00:30:50.992 "zcopy": false, 00:30:50.992 "get_zone_info": false, 00:30:50.992 "zone_management": false, 00:30:50.992 "zone_append": false, 00:30:50.992 "compare": false, 00:30:50.992 "compare_and_write": false, 00:30:50.992 "abort": false, 00:30:50.992 "seek_hole": false, 00:30:50.992 "seek_data": false, 00:30:50.992 "copy": false, 00:30:50.992 "nvme_iov_md": false 00:30:50.992 }, 00:30:50.992 "driver_specific": { 00:30:50.992 "raid": { 00:30:50.992 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666", 00:30:50.992 "strip_size_kb": 64, 00:30:50.992 "state": "online", 00:30:50.992 "raid_level": "raid5f", 00:30:50.992 "superblock": true, 00:30:50.992 "num_base_bdevs": 4, 00:30:50.992 "num_base_bdevs_discovered": 4, 00:30:50.992 "num_base_bdevs_operational": 4, 00:30:50.992 "base_bdevs_list": [ 00:30:50.992 { 00:30:50.992 "name": "pt1", 00:30:50.992 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:30:50.992 "is_configured": true, 00:30:50.992 "data_offset": 2048, 00:30:50.992 "data_size": 63488 00:30:50.992 }, 00:30:50.992 { 00:30:50.992 "name": "pt2", 00:30:50.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:50.992 "is_configured": true, 00:30:50.992 "data_offset": 2048, 00:30:50.992 "data_size": 63488 00:30:50.992 }, 00:30:50.992 { 00:30:50.992 "name": "pt3", 00:30:50.992 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:50.992 "is_configured": true, 00:30:50.992 "data_offset": 2048, 00:30:50.992 "data_size": 63488 00:30:50.992 }, 00:30:50.992 { 00:30:50.992 "name": "pt4", 00:30:50.992 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:50.992 "is_configured": true, 00:30:50.992 "data_offset": 2048, 00:30:50.992 "data_size": 63488 00:30:50.992 } 00:30:50.992 ] 00:30:50.992 } 00:30:50.992 } 00:30:50.992 }' 00:30:50.992 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:50.992 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:30:50.992 pt2 00:30:50.992 pt3 00:30:50.992 pt4' 00:30:50.992 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:50.992 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:30:50.992 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:51.250 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:51.250 "name": "pt1", 00:30:51.250 "aliases": [ 00:30:51.250 "00000000-0000-0000-0000-000000000001" 00:30:51.250 ], 00:30:51.250 "product_name": "passthru", 00:30:51.250 "block_size": 512, 00:30:51.250 "num_blocks": 65536, 00:30:51.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:51.250 "assigned_rate_limits": { 00:30:51.250 "rw_ios_per_sec": 0, 00:30:51.250 "rw_mbytes_per_sec": 0, 00:30:51.250 "r_mbytes_per_sec": 0, 00:30:51.250 "w_mbytes_per_sec": 0 00:30:51.250 }, 00:30:51.250 "claimed": true, 00:30:51.250 "claim_type": "exclusive_write", 00:30:51.250 "zoned": false, 00:30:51.250 "supported_io_types": { 00:30:51.251 "read": true, 00:30:51.251 "write": true, 00:30:51.251 "unmap": true, 00:30:51.251 "flush": true, 00:30:51.251 "reset": true, 00:30:51.251 "nvme_admin": false, 00:30:51.251 "nvme_io": false, 00:30:51.251 "nvme_io_md": false, 00:30:51.251 "write_zeroes": true, 00:30:51.251 "zcopy": true, 00:30:51.251 "get_zone_info": false, 00:30:51.251 "zone_management": false, 00:30:51.251 "zone_append": false, 00:30:51.251 "compare": false, 00:30:51.251 "compare_and_write": false, 00:30:51.251 "abort": true, 00:30:51.251 "seek_hole": false, 00:30:51.251 "seek_data": false, 00:30:51.251 "copy": true, 00:30:51.251 "nvme_iov_md": false 00:30:51.251 }, 00:30:51.251 "memory_domains": [ 00:30:51.251 { 00:30:51.251 "dma_device_id": "system", 00:30:51.251 "dma_device_type": 1 00:30:51.251 }, 00:30:51.251 { 00:30:51.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:51.251 "dma_device_type": 2 00:30:51.251 } 00:30:51.251 ], 00:30:51.251 "driver_specific": { 00:30:51.251 "passthru": { 00:30:51.251 "name": "pt1", 00:30:51.251 "base_bdev_name": "malloc1" 00:30:51.251 } 00:30:51.251 } 00:30:51.251 }' 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:51.251 
00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:51.251 00:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:30:51.509 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:51.510 "name": "pt2", 00:30:51.510 "aliases": [ 00:30:51.510 "00000000-0000-0000-0000-000000000002" 00:30:51.510 ], 00:30:51.510 "product_name": "passthru", 00:30:51.510 "block_size": 512, 00:30:51.510 "num_blocks": 65536, 00:30:51.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:51.510 "assigned_rate_limits": { 00:30:51.510 "rw_ios_per_sec": 0, 00:30:51.510 "rw_mbytes_per_sec": 0, 00:30:51.510 "r_mbytes_per_sec": 0, 00:30:51.510 "w_mbytes_per_sec": 0 00:30:51.510 }, 00:30:51.510 "claimed": true, 00:30:51.510 "claim_type": "exclusive_write", 00:30:51.510 "zoned": false, 00:30:51.510 "supported_io_types": { 00:30:51.510 "read": true, 00:30:51.510 "write": true, 00:30:51.510 "unmap": true, 00:30:51.510 "flush": true, 00:30:51.510 "reset": true, 00:30:51.510 "nvme_admin": false, 00:30:51.510 "nvme_io": false, 00:30:51.510 "nvme_io_md": false, 00:30:51.510 "write_zeroes": true, 00:30:51.510 "zcopy": true, 00:30:51.510 "get_zone_info": false, 00:30:51.510 "zone_management": false, 00:30:51.510 "zone_append": false, 00:30:51.510 "compare": false, 00:30:51.510 "compare_and_write": false, 00:30:51.510 "abort": true, 00:30:51.510 "seek_hole": false, 00:30:51.510 "seek_data": false, 00:30:51.510 "copy": true, 00:30:51.510 "nvme_iov_md": false 00:30:51.510 }, 00:30:51.510 "memory_domains": [ 00:30:51.510 { 00:30:51.510 "dma_device_id": "system", 00:30:51.510 "dma_device_type": 1 00:30:51.510 }, 00:30:51.510 { 00:30:51.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:51.510 "dma_device_type": 2 00:30:51.510 } 00:30:51.510 ], 00:30:51.510 "driver_specific": { 00:30:51.510 "passthru": { 00:30:51.510 "name": "pt2", 00:30:51.510 "base_bdev_name": "malloc2" 00:30:51.510 } 00:30:51.510 } 00:30:51.510 }' 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:51.510 00:14:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:30:51.510 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:51.769 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:51.769 "name": "pt3", 00:30:51.769 "aliases": [ 00:30:51.769 "00000000-0000-0000-0000-000000000003" 00:30:51.769 ], 00:30:51.769 "product_name": "passthru", 00:30:51.769 "block_size": 512, 00:30:51.769 "num_blocks": 65536, 00:30:51.769 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:51.769 "assigned_rate_limits": { 00:30:51.769 "rw_ios_per_sec": 0, 00:30:51.769 "rw_mbytes_per_sec": 0, 00:30:51.769 "r_mbytes_per_sec": 0, 00:30:51.769 "w_mbytes_per_sec": 0 00:30:51.769 }, 00:30:51.769 "claimed": true, 00:30:51.769 "claim_type": "exclusive_write", 00:30:51.769 "zoned": false, 00:30:51.769 "supported_io_types": { 00:30:51.769 "read": true, 00:30:51.769 "write": true, 00:30:51.769 "unmap": true, 00:30:51.769 "flush": true, 00:30:51.769 "reset": true, 00:30:51.769 "nvme_admin": false, 00:30:51.769 "nvme_io": false, 00:30:51.769 "nvme_io_md": false, 00:30:51.769 "write_zeroes": true, 00:30:51.769 "zcopy": true, 00:30:51.769 "get_zone_info": false, 00:30:51.769 "zone_management": false, 00:30:51.769 "zone_append": false, 00:30:51.769 "compare": false, 00:30:51.769 "compare_and_write": false, 00:30:51.769 "abort": true, 00:30:51.769 "seek_hole": false, 00:30:51.769 "seek_data": false, 00:30:51.769 "copy": true, 00:30:51.769 "nvme_iov_md": false 00:30:51.769 }, 00:30:51.769 "memory_domains": [ 00:30:51.769 { 00:30:51.769 "dma_device_id": "system", 00:30:51.769 "dma_device_type": 1 00:30:51.769 }, 00:30:51.769 { 00:30:51.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:51.769 "dma_device_type": 2 00:30:51.769 } 00:30:51.769 ], 00:30:51.769 "driver_specific": { 00:30:51.769 "passthru": { 00:30:51.769 "name": "pt3", 00:30:51.769 "base_bdev_name": "malloc3" 00:30:51.769 } 00:30:51.769 } 00:30:51.769 }' 00:30:51.769 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:51.769 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:51.769 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:51.769 00:14:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:51.769 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:51.769 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:51.769 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:51.769 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:51.769 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:51.769 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:52.028 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:52.028 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:52.028 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:52.028 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:30:52.028 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:52.286 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:52.286 "name": "pt4", 00:30:52.286 "aliases": [ 00:30:52.286 "00000000-0000-0000-0000-000000000004" 00:30:52.286 ], 00:30:52.286 "product_name": "passthru", 00:30:52.286 "block_size": 512, 00:30:52.286 "num_blocks": 65536, 00:30:52.286 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:52.286 "assigned_rate_limits": { 00:30:52.286 "rw_ios_per_sec": 0, 00:30:52.286 "rw_mbytes_per_sec": 0, 00:30:52.286 "r_mbytes_per_sec": 0, 00:30:52.286 "w_mbytes_per_sec": 0 00:30:52.286 }, 00:30:52.286 "claimed": true, 00:30:52.286 "claim_type": "exclusive_write", 00:30:52.287 "zoned": false, 00:30:52.287 "supported_io_types": { 00:30:52.287 "read": true, 00:30:52.287 "write": true, 00:30:52.287 "unmap": true, 00:30:52.287 "flush": true, 00:30:52.287 "reset": true, 00:30:52.287 "nvme_admin": false, 00:30:52.287 "nvme_io": false, 00:30:52.287 "nvme_io_md": false, 00:30:52.287 "write_zeroes": true, 00:30:52.287 "zcopy": true, 00:30:52.287 "get_zone_info": false, 00:30:52.287 "zone_management": false, 00:30:52.287 "zone_append": false, 00:30:52.287 "compare": false, 00:30:52.287 "compare_and_write": false, 00:30:52.287 "abort": true, 00:30:52.287 "seek_hole": false, 00:30:52.287 "seek_data": false, 00:30:52.287 "copy": true, 00:30:52.287 "nvme_iov_md": false 00:30:52.287 }, 00:30:52.287 "memory_domains": [ 00:30:52.287 { 00:30:52.287 "dma_device_id": "system", 00:30:52.287 "dma_device_type": 1 00:30:52.287 }, 00:30:52.287 { 00:30:52.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:52.287 "dma_device_type": 2 00:30:52.287 } 00:30:52.287 ], 00:30:52.287 "driver_specific": { 00:30:52.287 "passthru": { 00:30:52.287 "name": "pt4", 00:30:52.287 "base_bdev_name": "malloc4" 00:30:52.287 } 00:30:52.287 } 00:30:52.287 }' 00:30:52.287 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:52.287 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:52.287 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:52.287 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:52.287 00:14:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:52.287 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:52.287 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:52.287 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:52.287 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:52.287 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:52.287 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:52.287 00:14:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:52.287 00:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:52.287 00:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:30:52.546 [2024-07-25 00:14:48.246762] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:52.546 00:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=024460ef-7dc2-4ffa-8d3a-b7233a009666 00:30:52.546 00:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 024460ef-7dc2-4ffa-8d3a-b7233a009666 ']' 00:30:52.546 00:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:52.804 [2024-07-25 00:14:48.498698] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:52.804 [2024-07-25 00:14:48.498750] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:52.804 [2024-07-25 00:14:48.498848] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:52.804 [2024-07-25 00:14:48.498938] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:52.804 [2024-07-25 00:14:48.498955] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:30:52.804 00:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.804 00:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:30:53.063 00:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:30:53.063 00:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:30:53.063 00:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:30:53.063 00:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:53.322 00:14:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:30:53.322 00:14:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:53.580 00:14:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:30:53.580 00:14:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:30:53.580 00:14:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:30:53.580 00:14:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:30:53.839 00:14:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:30:53.839 00:14:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:54.098 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:30:54.098 [2024-07-25 00:14:49.943052] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:54.098 [2024-07-25 00:14:49.944934] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:30:54.098 [2024-07-25 00:14:49.945021] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:30:54.098 [2024-07-25 00:14:49.945070] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:30:54.099 [2024-07-25 00:14:49.945128] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:54.099 [2024-07-25 00:14:49.945221] 
bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:54.099 [2024-07-25 00:14:49.945268] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:30:54.099 [2024-07-25 00:14:49.945297] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:30:54.099 [2024-07-25 00:14:49.945317] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:54.099 [2024-07-25 00:14:49.945335] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state configuring 00:30:54.099 request: 00:30:54.099 { 00:30:54.099 "name": "raid_bdev1", 00:30:54.099 "raid_level": "raid5f", 00:30:54.099 "base_bdevs": [ 00:30:54.099 "malloc1", 00:30:54.099 "malloc2", 00:30:54.099 "malloc3", 00:30:54.099 "malloc4" 00:30:54.099 ], 00:30:54.099 "strip_size_kb": 64, 00:30:54.099 "superblock": false, 00:30:54.099 "method": "bdev_raid_create", 00:30:54.099 "req_id": 1 00:30:54.099 } 00:30:54.099 Got JSON-RPC error response 00:30:54.099 response: 00:30:54.099 { 00:30:54.099 "code": -17, 00:30:54.099 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:54.099 } 00:30:54.099 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:30:54.099 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:54.099 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:54.099 00:14:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:54.099 00:14:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.099 00:14:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:30:54.357 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:30:54.357 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:30:54.357 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:54.616 [2024-07-25 00:14:50.375054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:54.616 [2024-07-25 00:14:50.375133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:54.616 [2024-07-25 00:14:50.375156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:30:54.616 [2024-07-25 00:14:50.375170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:54.616 [2024-07-25 00:14:50.377301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:54.616 [2024-07-25 00:14:50.377345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:54.616 [2024-07-25 00:14:50.377448] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:54.616 [2024-07-25 00:14:50.377515] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:54.616 pt1 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state 
raid_bdev1 configuring raid5f 64 4 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.616 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.875 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:54.875 "name": "raid_bdev1", 00:30:54.875 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666", 00:30:54.875 "strip_size_kb": 64, 00:30:54.875 "state": "configuring", 00:30:54.875 "raid_level": "raid5f", 00:30:54.875 "superblock": true, 00:30:54.875 "num_base_bdevs": 4, 00:30:54.875 "num_base_bdevs_discovered": 1, 00:30:54.875 "num_base_bdevs_operational": 4, 00:30:54.875 "base_bdevs_list": [ 00:30:54.875 { 00:30:54.875 "name": "pt1", 00:30:54.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:54.875 "is_configured": true, 00:30:54.875 "data_offset": 2048, 00:30:54.875 "data_size": 63488 00:30:54.875 }, 00:30:54.875 { 00:30:54.875 "name": null, 00:30:54.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:54.875 "is_configured": false, 00:30:54.875 "data_offset": 2048, 00:30:54.875 "data_size": 63488 00:30:54.875 }, 00:30:54.875 { 00:30:54.875 "name": null, 00:30:54.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:54.875 "is_configured": false, 00:30:54.875 "data_offset": 2048, 00:30:54.875 "data_size": 63488 00:30:54.875 }, 00:30:54.875 { 00:30:54.875 "name": null, 00:30:54.875 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:54.875 "is_configured": false, 00:30:54.875 "data_offset": 2048, 00:30:54.875 "data_size": 63488 00:30:54.875 } 00:30:54.875 ] 00:30:54.875 }' 00:30:54.875 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:54.875 00:14:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.134 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:30:55.134 00:14:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:55.393 [2024-07-25 00:14:51.011204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:55.393 [2024-07-25 00:14:51.011285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:55.393 [2024-07-25 
00:14:51.011311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:30:55.393 [2024-07-25 00:14:51.011325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:55.393 [2024-07-25 00:14:51.011847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:55.393 [2024-07-25 00:14:51.011901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:55.393 [2024-07-25 00:14:51.011998] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:55.393 [2024-07-25 00:14:51.012044] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:55.393 pt2 00:30:55.393 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:55.652 [2024-07-25 00:14:51.267335] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.652 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.911 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:55.911 "name": "raid_bdev1", 00:30:55.911 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666", 00:30:55.911 "strip_size_kb": 64, 00:30:55.911 "state": "configuring", 00:30:55.911 "raid_level": "raid5f", 00:30:55.911 "superblock": true, 00:30:55.911 "num_base_bdevs": 4, 00:30:55.911 "num_base_bdevs_discovered": 1, 00:30:55.911 "num_base_bdevs_operational": 4, 00:30:55.911 "base_bdevs_list": [ 00:30:55.911 { 00:30:55.911 "name": "pt1", 00:30:55.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:55.911 "is_configured": true, 00:30:55.911 "data_offset": 2048, 00:30:55.911 "data_size": 63488 00:30:55.911 }, 00:30:55.911 { 00:30:55.911 "name": null, 00:30:55.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:55.911 "is_configured": false, 00:30:55.911 "data_offset": 2048, 00:30:55.911 "data_size": 63488 00:30:55.911 }, 00:30:55.911 { 00:30:55.911 "name": null, 00:30:55.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:55.911 "is_configured": false, 00:30:55.911 
"data_offset": 2048, 00:30:55.911 "data_size": 63488 00:30:55.911 }, 00:30:55.911 { 00:30:55.911 "name": null, 00:30:55.911 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:55.911 "is_configured": false, 00:30:55.911 "data_offset": 2048, 00:30:55.911 "data_size": 63488 00:30:55.911 } 00:30:55.911 ] 00:30:55.911 }' 00:30:55.911 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:55.911 00:14:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.170 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:30:56.170 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:30:56.170 00:14:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:56.429 [2024-07-25 00:14:52.083507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:56.429 [2024-07-25 00:14:52.083589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.429 [2024-07-25 00:14:52.083615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:30:56.429 [2024-07-25 00:14:52.083627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.429 [2024-07-25 00:14:52.084226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.429 [2024-07-25 00:14:52.084290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:56.429 [2024-07-25 00:14:52.084415] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:56.429 [2024-07-25 00:14:52.084441] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:56.429 pt2 00:30:56.429 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:30:56.429 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:30:56.429 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:56.688 [2024-07-25 00:14:52.339542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:56.688 [2024-07-25 00:14:52.339608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.688 [2024-07-25 00:14:52.339634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:30:56.688 [2024-07-25 00:14:52.339645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.688 [2024-07-25 00:14:52.340188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.688 [2024-07-25 00:14:52.340223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:56.688 [2024-07-25 00:14:52.340356] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:30:56.688 [2024-07-25 00:14:52.340397] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:56.688 pt3 00:30:56.688 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:30:56.688 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # 
(( i < num_base_bdevs )) 00:30:56.688 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:56.947 [2024-07-25 00:14:52.583636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:56.947 [2024-07-25 00:14:52.583734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.947 [2024-07-25 00:14:52.583766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:30:56.947 [2024-07-25 00:14:52.583778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.947 [2024-07-25 00:14:52.584324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.947 [2024-07-25 00:14:52.584360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:56.947 [2024-07-25 00:14:52.584475] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:30:56.947 [2024-07-25 00:14:52.584502] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:56.947 [2024-07-25 00:14:52.584707] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:30:56.947 [2024-07-25 00:14:52.584732] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:56.947 [2024-07-25 00:14:52.584851] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:30:56.947 [2024-07-25 00:14:52.590508] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:30:56.947 [2024-07-25 00:14:52.590536] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:30:56.947 [2024-07-25 00:14:52.590722] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:56.947 pt4 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:56.947 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.947 
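
At this point in the trace, all four passthru bdevs (pt1 through pt4) have been re-registered on top of their superblock-bearing malloc bases; bdev_raid's examine path claims each one as it appears ("raid superblock found on bdev pt4", "bdev pt4 is claimed"), and once the last base arrives raid_bdev1 is re-assembled and brought online (blockcnt 190464, blocklen 512). A minimal sketch of the equivalent manual RPC sequence, assuming a running SPDK target listening on /var/tmp/spdk-raid.sock; the RPC variable and the loop are illustrative scaffolding, not part of bdev_raid.sh, while the commands, flags, and UUIDs are exactly those exercised above:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Re-create each passthru bdev over its malloc base; examine claims each
    # one and re-assembles raid_bdev1 from the superblocks once pt4 appears.
    for i in 1 2 3 4; do
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # Confirm the re-assembled array is online with all four bases discovered.
    $RPC bdev_raid_get_bdevs all
    $RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .driver_specific.raid.state'
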
00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.206 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:57.206 "name": "raid_bdev1", 00:30:57.206 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666", 00:30:57.206 "strip_size_kb": 64, 00:30:57.206 "state": "online", 00:30:57.206 "raid_level": "raid5f", 00:30:57.206 "superblock": true, 00:30:57.206 "num_base_bdevs": 4, 00:30:57.206 "num_base_bdevs_discovered": 4, 00:30:57.206 "num_base_bdevs_operational": 4, 00:30:57.206 "base_bdevs_list": [ 00:30:57.206 { 00:30:57.206 "name": "pt1", 00:30:57.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:57.206 "is_configured": true, 00:30:57.206 "data_offset": 2048, 00:30:57.206 "data_size": 63488 00:30:57.206 }, 00:30:57.206 { 00:30:57.206 "name": "pt2", 00:30:57.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:57.206 "is_configured": true, 00:30:57.206 "data_offset": 2048, 00:30:57.206 "data_size": 63488 00:30:57.206 }, 00:30:57.206 { 00:30:57.206 "name": "pt3", 00:30:57.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:57.206 "is_configured": true, 00:30:57.206 "data_offset": 2048, 00:30:57.206 "data_size": 63488 00:30:57.206 }, 00:30:57.206 { 00:30:57.206 "name": "pt4", 00:30:57.206 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:57.206 "is_configured": true, 00:30:57.206 "data_offset": 2048, 00:30:57.206 "data_size": 63488 00:30:57.206 } 00:30:57.206 ] 00:30:57.206 }' 00:30:57.206 00:14:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:57.206 00:14:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.464 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:30:57.464 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:30:57.464 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:57.464 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:57.464 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:57.464 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:30:57.464 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:57.464 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:57.722 [2024-07-25 00:14:53.369116] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:57.722 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:57.722 "name": "raid_bdev1", 00:30:57.722 "aliases": [ 00:30:57.722 "024460ef-7dc2-4ffa-8d3a-b7233a009666" 00:30:57.722 ], 00:30:57.722 "product_name": "Raid Volume", 00:30:57.722 "block_size": 512, 00:30:57.722 "num_blocks": 190464, 00:30:57.722 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666", 00:30:57.722 "assigned_rate_limits": { 00:30:57.722 "rw_ios_per_sec": 0, 00:30:57.722 "rw_mbytes_per_sec": 0, 00:30:57.722 "r_mbytes_per_sec": 0, 00:30:57.722 "w_mbytes_per_sec": 0 00:30:57.722 }, 00:30:57.722 "claimed": false, 00:30:57.722 "zoned": false, 00:30:57.722 "supported_io_types": { 00:30:57.722 "read": true, 00:30:57.722 "write": true, 00:30:57.722 
"unmap": false, 00:30:57.722 "flush": false, 00:30:57.722 "reset": true, 00:30:57.722 "nvme_admin": false, 00:30:57.722 "nvme_io": false, 00:30:57.722 "nvme_io_md": false, 00:30:57.722 "write_zeroes": true, 00:30:57.722 "zcopy": false, 00:30:57.722 "get_zone_info": false, 00:30:57.722 "zone_management": false, 00:30:57.722 "zone_append": false, 00:30:57.722 "compare": false, 00:30:57.722 "compare_and_write": false, 00:30:57.722 "abort": false, 00:30:57.722 "seek_hole": false, 00:30:57.722 "seek_data": false, 00:30:57.722 "copy": false, 00:30:57.722 "nvme_iov_md": false 00:30:57.722 }, 00:30:57.722 "driver_specific": { 00:30:57.722 "raid": { 00:30:57.722 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666", 00:30:57.722 "strip_size_kb": 64, 00:30:57.722 "state": "online", 00:30:57.722 "raid_level": "raid5f", 00:30:57.722 "superblock": true, 00:30:57.722 "num_base_bdevs": 4, 00:30:57.722 "num_base_bdevs_discovered": 4, 00:30:57.722 "num_base_bdevs_operational": 4, 00:30:57.722 "base_bdevs_list": [ 00:30:57.722 { 00:30:57.722 "name": "pt1", 00:30:57.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:57.722 "is_configured": true, 00:30:57.722 "data_offset": 2048, 00:30:57.722 "data_size": 63488 00:30:57.722 }, 00:30:57.722 { 00:30:57.722 "name": "pt2", 00:30:57.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:57.722 "is_configured": true, 00:30:57.722 "data_offset": 2048, 00:30:57.722 "data_size": 63488 00:30:57.722 }, 00:30:57.722 { 00:30:57.722 "name": "pt3", 00:30:57.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:57.722 "is_configured": true, 00:30:57.722 "data_offset": 2048, 00:30:57.722 "data_size": 63488 00:30:57.722 }, 00:30:57.722 { 00:30:57.722 "name": "pt4", 00:30:57.722 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:57.722 "is_configured": true, 00:30:57.722 "data_offset": 2048, 00:30:57.722 "data_size": 63488 00:30:57.722 } 00:30:57.722 ] 00:30:57.722 } 00:30:57.722 } 00:30:57.722 }' 00:30:57.722 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:57.722 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:30:57.722 pt2 00:30:57.722 pt3 00:30:57.722 pt4' 00:30:57.722 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:57.722 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:30:57.722 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:57.980 "name": "pt1", 00:30:57.980 "aliases": [ 00:30:57.980 "00000000-0000-0000-0000-000000000001" 00:30:57.980 ], 00:30:57.980 "product_name": "passthru", 00:30:57.980 "block_size": 512, 00:30:57.980 "num_blocks": 65536, 00:30:57.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:57.980 "assigned_rate_limits": { 00:30:57.980 "rw_ios_per_sec": 0, 00:30:57.980 "rw_mbytes_per_sec": 0, 00:30:57.980 "r_mbytes_per_sec": 0, 00:30:57.980 "w_mbytes_per_sec": 0 00:30:57.980 }, 00:30:57.980 "claimed": true, 00:30:57.980 "claim_type": "exclusive_write", 00:30:57.980 "zoned": false, 00:30:57.980 "supported_io_types": { 00:30:57.980 "read": true, 00:30:57.980 "write": true, 00:30:57.980 "unmap": true, 00:30:57.980 "flush": true, 00:30:57.980 "reset": true, 00:30:57.980 
"nvme_admin": false, 00:30:57.980 "nvme_io": false, 00:30:57.980 "nvme_io_md": false, 00:30:57.980 "write_zeroes": true, 00:30:57.980 "zcopy": true, 00:30:57.980 "get_zone_info": false, 00:30:57.980 "zone_management": false, 00:30:57.980 "zone_append": false, 00:30:57.980 "compare": false, 00:30:57.980 "compare_and_write": false, 00:30:57.980 "abort": true, 00:30:57.980 "seek_hole": false, 00:30:57.980 "seek_data": false, 00:30:57.980 "copy": true, 00:30:57.980 "nvme_iov_md": false 00:30:57.980 }, 00:30:57.980 "memory_domains": [ 00:30:57.980 { 00:30:57.980 "dma_device_id": "system", 00:30:57.980 "dma_device_type": 1 00:30:57.980 }, 00:30:57.980 { 00:30:57.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:57.980 "dma_device_type": 2 00:30:57.980 } 00:30:57.980 ], 00:30:57.980 "driver_specific": { 00:30:57.980 "passthru": { 00:30:57.980 "name": "pt1", 00:30:57.980 "base_bdev_name": "malloc1" 00:30:57.980 } 00:30:57.980 } 00:30:57.980 }' 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:30:57.980 00:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:58.239 "name": "pt2", 00:30:58.239 "aliases": [ 00:30:58.239 "00000000-0000-0000-0000-000000000002" 00:30:58.239 ], 00:30:58.239 "product_name": "passthru", 00:30:58.239 "block_size": 512, 00:30:58.239 "num_blocks": 65536, 00:30:58.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:58.239 "assigned_rate_limits": { 00:30:58.239 "rw_ios_per_sec": 0, 00:30:58.239 "rw_mbytes_per_sec": 0, 00:30:58.239 "r_mbytes_per_sec": 0, 00:30:58.239 "w_mbytes_per_sec": 0 00:30:58.239 }, 00:30:58.239 "claimed": true, 00:30:58.239 "claim_type": "exclusive_write", 00:30:58.239 "zoned": false, 00:30:58.239 "supported_io_types": { 00:30:58.239 "read": true, 00:30:58.239 "write": true, 00:30:58.239 "unmap": true, 00:30:58.239 "flush": true, 00:30:58.239 "reset": true, 00:30:58.239 "nvme_admin": false, 00:30:58.239 "nvme_io": false, 00:30:58.239 "nvme_io_md": false, 00:30:58.239 "write_zeroes": true, 
00:30:58.239 "zcopy": true, 00:30:58.239 "get_zone_info": false, 00:30:58.239 "zone_management": false, 00:30:58.239 "zone_append": false, 00:30:58.239 "compare": false, 00:30:58.239 "compare_and_write": false, 00:30:58.239 "abort": true, 00:30:58.239 "seek_hole": false, 00:30:58.239 "seek_data": false, 00:30:58.239 "copy": true, 00:30:58.239 "nvme_iov_md": false 00:30:58.239 }, 00:30:58.239 "memory_domains": [ 00:30:58.239 { 00:30:58.239 "dma_device_id": "system", 00:30:58.239 "dma_device_type": 1 00:30:58.239 }, 00:30:58.239 { 00:30:58.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:58.239 "dma_device_type": 2 00:30:58.239 } 00:30:58.239 ], 00:30:58.239 "driver_specific": { 00:30:58.239 "passthru": { 00:30:58.239 "name": "pt2", 00:30:58.239 "base_bdev_name": "malloc2" 00:30:58.239 } 00:30:58.239 } 00:30:58.239 }' 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:58.239 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:30:58.497 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:58.497 "name": "pt3", 00:30:58.497 "aliases": [ 00:30:58.497 "00000000-0000-0000-0000-000000000003" 00:30:58.497 ], 00:30:58.497 "product_name": "passthru", 00:30:58.497 "block_size": 512, 00:30:58.497 "num_blocks": 65536, 00:30:58.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:58.497 "assigned_rate_limits": { 00:30:58.497 "rw_ios_per_sec": 0, 00:30:58.498 "rw_mbytes_per_sec": 0, 00:30:58.498 "r_mbytes_per_sec": 0, 00:30:58.498 "w_mbytes_per_sec": 0 00:30:58.498 }, 00:30:58.498 "claimed": true, 00:30:58.498 "claim_type": "exclusive_write", 00:30:58.498 "zoned": false, 00:30:58.498 "supported_io_types": { 00:30:58.498 "read": true, 00:30:58.498 "write": true, 00:30:58.498 "unmap": true, 00:30:58.498 "flush": true, 00:30:58.498 "reset": true, 00:30:58.498 "nvme_admin": false, 00:30:58.498 "nvme_io": false, 00:30:58.498 "nvme_io_md": false, 00:30:58.498 "write_zeroes": true, 00:30:58.498 "zcopy": true, 00:30:58.498 "get_zone_info": false, 00:30:58.498 "zone_management": false, 00:30:58.498 
"zone_append": false, 00:30:58.498 "compare": false, 00:30:58.498 "compare_and_write": false, 00:30:58.498 "abort": true, 00:30:58.498 "seek_hole": false, 00:30:58.498 "seek_data": false, 00:30:58.498 "copy": true, 00:30:58.498 "nvme_iov_md": false 00:30:58.498 }, 00:30:58.498 "memory_domains": [ 00:30:58.498 { 00:30:58.498 "dma_device_id": "system", 00:30:58.498 "dma_device_type": 1 00:30:58.498 }, 00:30:58.498 { 00:30:58.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:58.498 "dma_device_type": 2 00:30:58.498 } 00:30:58.498 ], 00:30:58.498 "driver_specific": { 00:30:58.498 "passthru": { 00:30:58.498 "name": "pt3", 00:30:58.498 "base_bdev_name": "malloc3" 00:30:58.498 } 00:30:58.498 } 00:30:58.498 }' 00:30:58.498 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:58.498 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:58.498 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:58.498 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:58.498 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:58.498 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:58.498 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:58.755 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:58.755 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:58.755 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:58.755 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:58.755 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:58.755 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:58.755 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:30:58.755 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:59.018 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:59.018 "name": "pt4", 00:30:59.018 "aliases": [ 00:30:59.018 "00000000-0000-0000-0000-000000000004" 00:30:59.018 ], 00:30:59.018 "product_name": "passthru", 00:30:59.018 "block_size": 512, 00:30:59.018 "num_blocks": 65536, 00:30:59.018 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:59.018 "assigned_rate_limits": { 00:30:59.018 "rw_ios_per_sec": 0, 00:30:59.018 "rw_mbytes_per_sec": 0, 00:30:59.018 "r_mbytes_per_sec": 0, 00:30:59.018 "w_mbytes_per_sec": 0 00:30:59.018 }, 00:30:59.018 "claimed": true, 00:30:59.018 "claim_type": "exclusive_write", 00:30:59.018 "zoned": false, 00:30:59.018 "supported_io_types": { 00:30:59.018 "read": true, 00:30:59.018 "write": true, 00:30:59.018 "unmap": true, 00:30:59.018 "flush": true, 00:30:59.018 "reset": true, 00:30:59.018 "nvme_admin": false, 00:30:59.018 "nvme_io": false, 00:30:59.018 "nvme_io_md": false, 00:30:59.018 "write_zeroes": true, 00:30:59.018 "zcopy": true, 00:30:59.018 "get_zone_info": false, 00:30:59.018 "zone_management": false, 00:30:59.018 "zone_append": false, 00:30:59.018 "compare": false, 00:30:59.018 "compare_and_write": false, 00:30:59.018 "abort": true, 
00:30:59.018 "seek_hole": false, 00:30:59.018 "seek_data": false, 00:30:59.018 "copy": true, 00:30:59.018 "nvme_iov_md": false 00:30:59.018 }, 00:30:59.018 "memory_domains": [ 00:30:59.018 { 00:30:59.018 "dma_device_id": "system", 00:30:59.018 "dma_device_type": 1 00:30:59.018 }, 00:30:59.018 { 00:30:59.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:59.018 "dma_device_type": 2 00:30:59.018 } 00:30:59.018 ], 00:30:59.018 "driver_specific": { 00:30:59.018 "passthru": { 00:30:59.018 "name": "pt4", 00:30:59.018 "base_bdev_name": "malloc4" 00:30:59.018 } 00:30:59.018 } 00:30:59.018 }' 00:30:59.018 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:59.018 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:59.018 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:59.018 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:59.018 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:59.018 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:59.018 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:59.019 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:59.019 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:59.019 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:59.019 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:59.019 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:59.019 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:59.019 00:14:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:30:59.295 [2024-07-25 00:14:54.997588] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:59.295 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 024460ef-7dc2-4ffa-8d3a-b7233a009666 '!=' 024460ef-7dc2-4ffa-8d3a-b7233a009666 ']' 00:30:59.295 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid5f 00:30:59.295 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:59.295 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:30:59.295 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:59.568 [2024-07-25 00:14:55.270186] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.568 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.826 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:59.826 "name": "raid_bdev1", 00:30:59.826 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666", 00:30:59.826 "strip_size_kb": 64, 00:30:59.826 "state": "online", 00:30:59.826 "raid_level": "raid5f", 00:30:59.826 "superblock": true, 00:30:59.826 "num_base_bdevs": 4, 00:30:59.826 "num_base_bdevs_discovered": 3, 00:30:59.826 "num_base_bdevs_operational": 3, 00:30:59.826 "base_bdevs_list": [ 00:30:59.826 { 00:30:59.826 "name": null, 00:30:59.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.826 "is_configured": false, 00:30:59.826 "data_offset": 2048, 00:30:59.826 "data_size": 63488 00:30:59.826 }, 00:30:59.826 { 00:30:59.826 "name": "pt2", 00:30:59.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:59.826 "is_configured": true, 00:30:59.826 "data_offset": 2048, 00:30:59.826 "data_size": 63488 00:30:59.826 }, 00:30:59.826 { 00:30:59.826 "name": "pt3", 00:30:59.826 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:59.826 "is_configured": true, 00:30:59.826 "data_offset": 2048, 00:30:59.826 "data_size": 63488 00:30:59.826 }, 00:30:59.826 { 00:30:59.826 "name": "pt4", 00:30:59.826 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:59.826 "is_configured": true, 00:30:59.826 "data_offset": 2048, 00:30:59.826 "data_size": 63488 00:30:59.826 } 00:30:59.826 ] 00:30:59.826 }' 00:30:59.826 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:59.826 00:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.085 00:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:00.343 [2024-07-25 00:14:56.050397] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:00.343 [2024-07-25 00:14:56.050430] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:00.343 [2024-07-25 00:14:56.050499] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:00.343 [2024-07-25 00:14:56.050576] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:00.343 [2024-07-25 00:14:56.050589] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:31:00.343 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.343 00:14:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]'
00:31:00.601 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev=
00:31:00.602 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']'
00:31:00.602 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:31:00.602 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs ))
00:31:00.602 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:31:00.859 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:31:00.859 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs ))
00:31:00.859 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:31:01.118 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:31:01.118 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs ))
00:31:01.118 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:31:01.118 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:31:01.118 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs ))
00:31:01.118 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 ))
00:31:01.118 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 ))
00:31:01.118 00:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:31:01.376 [2024-07-25 00:14:57.134585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:31:01.376 [2024-07-25 00:14:57.134648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:01.376 [2024-07-25 00:14:57.134673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780
00:31:01.376 [2024-07-25 00:14:57.134685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:01.376 [2024-07-25 00:14:57.137113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:01.376 [2024-07-25 00:14:57.137143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:31:01.376 [2024-07-25 00:14:57.137236] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:31:01.376 [2024-07-25 00:14:57.137283] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:31:01.376 pt2
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:01.376 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:01.634 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:31:01.635 "name": "raid_bdev1",
00:31:01.635 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666",
00:31:01.635 "strip_size_kb": 64,
00:31:01.635 "state": "configuring",
00:31:01.635 "raid_level": "raid5f",
00:31:01.635 "superblock": true,
00:31:01.635 "num_base_bdevs": 4,
00:31:01.635 "num_base_bdevs_discovered": 1,
00:31:01.635 "num_base_bdevs_operational": 3,
00:31:01.635 "base_bdevs_list": [
00:31:01.635 {
00:31:01.635 "name": null,
00:31:01.635 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:01.635 "is_configured": false,
00:31:01.635 "data_offset": 2048,
00:31:01.635 "data_size": 63488
00:31:01.635 },
00:31:01.635 {
00:31:01.635 "name": "pt2",
00:31:01.635 "uuid": "00000000-0000-0000-0000-000000000002",
00:31:01.635 "is_configured": true,
00:31:01.635 "data_offset": 2048,
00:31:01.635 "data_size": 63488
00:31:01.635 },
00:31:01.635 {
00:31:01.635 "name": null,
00:31:01.635 "uuid": "00000000-0000-0000-0000-000000000003",
00:31:01.635 "is_configured": false,
00:31:01.635 "data_offset": 2048,
00:31:01.635 "data_size": 63488
00:31:01.635 },
00:31:01.635 {
00:31:01.635 "name": null,
00:31:01.635 "uuid": "00000000-0000-0000-0000-000000000004",
00:31:01.635 "is_configured": false,
00:31:01.635 "data_offset": 2048,
00:31:01.635 "data_size": 63488
00:31:01.635 }
00:31:01.635 ]
00:31:01.635 }'
00:31:01.635 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:31:01.635 00:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:01.893 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ ))
00:31:01.893 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 ))
00:31:01.893 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:31:02.150 [2024-07-25 00:14:57.922770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:31:02.150 [2024-07-25 00:14:57.922880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:02.150 [2024-07-25 00:14:57.922919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080
00:31:02.150 [2024-07-25 00:14:57.922931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:02.150 [2024-07-25 00:14:57.923447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:02.150 [2024-07-25 00:14:57.923493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:31:02.150 [2024-07-25 00:14:57.923598] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:31:02.151 [2024-07-25 00:14:57.923630] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:31:02.151 pt3
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:02.151 00:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:02.407 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:31:02.407 "name": "raid_bdev1",
00:31:02.407 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666",
00:31:02.407 "strip_size_kb": 64,
00:31:02.407 "state": "configuring",
00:31:02.407 "raid_level": "raid5f",
00:31:02.407 "superblock": true,
00:31:02.407 "num_base_bdevs": 4,
00:31:02.407 "num_base_bdevs_discovered": 2,
00:31:02.407 "num_base_bdevs_operational": 3,
00:31:02.407 "base_bdevs_list": [
00:31:02.407 {
00:31:02.407 "name": null,
00:31:02.407 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:02.407 "is_configured": false,
00:31:02.407 "data_offset": 2048,
00:31:02.407 "data_size": 63488
00:31:02.407 },
00:31:02.407 {
00:31:02.408 "name": "pt2",
00:31:02.408 "uuid": "00000000-0000-0000-0000-000000000002",
00:31:02.408 "is_configured": true,
00:31:02.408 "data_offset": 2048,
00:31:02.408 "data_size": 63488
00:31:02.408 },
00:31:02.408 {
00:31:02.408 "name": "pt3",
00:31:02.408 "uuid": "00000000-0000-0000-0000-000000000003",
00:31:02.408 "is_configured": true,
00:31:02.408 "data_offset": 2048,
00:31:02.408 "data_size": 63488
00:31:02.408 },
00:31:02.408 {
00:31:02.408 "name": null,
00:31:02.408 "uuid": "00000000-0000-0000-0000-000000000004",
00:31:02.408 "is_configured": false,
00:31:02.408 "data_offset": 2048,
00:31:02.408 "data_size": 63488
00:31:02.408 }
00:31:02.408 ]
00:31:02.408 }'
00:31:02.408 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:31:02.408 00:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:02.665 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ ))
00:31:02.665 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 ))
00:31:02.665 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:31:02.665 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:31:02.922 [2024-07-25 00:14:58.640186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:31:02.922 [2024-07-25 00:14:58.640383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:02.922 [2024-07-25 00:14:58.640425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380
00:31:02.922 [2024-07-25 00:14:58.640438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:02.922 [2024-07-25 00:14:58.640952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:02.922 [2024-07-25 00:14:58.640976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:31:02.922 [2024-07-25 00:14:58.641067] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:31:02.922 [2024-07-25 00:14:58.641093] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:31:02.923 [2024-07-25 00:14:58.641277] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80
00:31:02.923 [2024-07-25 00:14:58.641291] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:31:02.923 [2024-07-25 00:14:58.641380] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0
00:31:02.923 [2024-07-25 00:14:58.646649] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80
00:31:02.923 [2024-07-25 00:14:58.646675] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80
00:31:02.923 [2024-07-25 00:14:58.646994] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:31:02.923 pt4
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:02.923 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:03.181 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:31:03.181 "name": "raid_bdev1",
00:31:03.181 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666",
00:31:03.181 "strip_size_kb": 64,
00:31:03.181 "state": "online",
00:31:03.181 "raid_level": "raid5f",
00:31:03.181 "superblock": true,
00:31:03.181 "num_base_bdevs": 4,
00:31:03.181 "num_base_bdevs_discovered": 3,
00:31:03.181 "num_base_bdevs_operational": 3,
00:31:03.181 "base_bdevs_list": [
00:31:03.181 {
00:31:03.181 "name": null,
00:31:03.181 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:03.181 "is_configured": false,
00:31:03.181 "data_offset": 2048,
00:31:03.181 "data_size": 63488
00:31:03.181 },
00:31:03.181 {
00:31:03.181 "name": "pt2",
00:31:03.181 "uuid": "00000000-0000-0000-0000-000000000002",
00:31:03.181 "is_configured": true,
00:31:03.181 "data_offset": 2048,
00:31:03.181 "data_size": 63488
00:31:03.181 },
00:31:03.181 {
00:31:03.181 "name": "pt3",
00:31:03.181 "uuid": "00000000-0000-0000-0000-000000000003",
00:31:03.181 "is_configured": true,
00:31:03.181 "data_offset": 2048,
00:31:03.181 "data_size": 63488
00:31:03.181 },
00:31:03.181 {
00:31:03.181 "name": "pt4",
00:31:03.181 "uuid": "00000000-0000-0000-0000-000000000004",
00:31:03.181 "is_configured": true,
00:31:03.181 "data_offset": 2048,
00:31:03.181 "data_size": 63488
00:31:03.181 }
00:31:03.181 ]
00:31:03.181 }'
00:31:03.181 00:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:31:03.181 00:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:03.439 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:31:03.698 [2024-07-25 00:14:59.412765] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:31:03.698 [2024-07-25 00:14:59.412982] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:31:03.698 [2024-07-25 00:14:59.413083] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:31:03.698 [2024-07-25 00:14:59.413163] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:31:03.698 [2024-07-25 00:14:59.413180] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline
00:31:03.698 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:03.956 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]'
00:31:03.956 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev=
00:31:03.956 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']'
00:31:03.956 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']'
00:31:03.956 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3
00:31:03.956 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:31:04.217 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:31:04.217 [2024-07-25 00:14:59.976837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:31:04.217 [2024-07-25 00:14:59.976892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:04.217 [2024-07-25 00:14:59.976912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680
00:31:04.217 [2024-07-25 00:14:59.976925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:04.217 [2024-07-25 00:14:59.979021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:04.217 [2024-07-25 00:14:59.979063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:31:04.217 [2024-07-25 00:14:59.979150] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:31:04.217 [2024-07-25 00:14:59.979204] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:31:04.217 [2024-07-25 00:14:59.979330] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:31:04.217 [2024-07-25 00:14:59.979349] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:31:04.217 [2024-07-25 00:14:59.979364] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cc80 name raid_bdev1, state configuring
00:31:04.217 [2024-07-25 00:14:59.979423] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:31:04.217 [2024-07-25 00:14:59.979558] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:31:04.218 pt1
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']'
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:04.218 00:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:04.476 00:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:31:04.476 "name": "raid_bdev1",
"uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666",
00:31:04.476 "strip_size_kb": 64,
00:31:04.476 "state": "configuring",
00:31:04.476 "raid_level": "raid5f",
00:31:04.476 "superblock": true,
00:31:04.476 "num_base_bdevs": 4,
00:31:04.476 "num_base_bdevs_discovered": 2,
00:31:04.476 "num_base_bdevs_operational": 3,
00:31:04.476 "base_bdevs_list": [
00:31:04.476 {
00:31:04.476 "name": null,
00:31:04.476 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:04.476 "is_configured": false,
00:31:04.476 "data_offset": 2048,
00:31:04.476 "data_size": 63488
00:31:04.476 },
00:31:04.476 {
00:31:04.476 "name": "pt2",
00:31:04.476 "uuid": "00000000-0000-0000-0000-000000000002",
00:31:04.476 "is_configured": true,
00:31:04.476 "data_offset": 2048,
00:31:04.476 "data_size": 63488
00:31:04.476 },
00:31:04.476 {
00:31:04.476 "name": "pt3",
00:31:04.476 "uuid": "00000000-0000-0000-0000-000000000003",
00:31:04.476 "is_configured": true,
00:31:04.476 "data_offset": 2048,
00:31:04.476 "data_size": 63488
00:31:04.476 },
00:31:04.476 {
00:31:04.476 "name": null,
00:31:04.476 "uuid": "00000000-0000-0000-0000-000000000004",
00:31:04.476 "is_configured": false,
00:31:04.476 "data_offset": 2048,
00:31:04.476 "data_size": 63488
00:31:04.476 }
00:31:04.476 ]
00:31:04.476 }'
00:31:04.476 00:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:31:04.476 00:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:04.734 00:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring
00:31:04.734 00:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:31:04.992 00:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]]
00:31:04.992 00:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:31:05.250 [2024-07-25 00:15:01.037145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:31:05.250 [2024-07-25 00:15:01.037209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:05.250 [2024-07-25 00:15:01.037242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280
00:31:05.250 [2024-07-25 00:15:01.037254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:05.250 [2024-07-25 00:15:01.037723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:05.250 [2024-07-25 00:15:01.037745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:31:05.250 [2024-07-25 00:15:01.037875] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:31:05.250 [2024-07-25 00:15:01.037902] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:31:05.250 [2024-07-25 00:15:01.038066] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80
00:31:05.250 [2024-07-25 00:15:01.038080] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:31:05.250 [2024-07-25 00:15:01.038204] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005c70
00:31:05.250 [2024-07-25 00:15:01.043697] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80
00:31:05.250 [2024-07-25 00:15:01.043725] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80
00:31:05.250 [2024-07-25 00:15:01.044058] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:31:05.250 pt4
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:05.250 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:05.508 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:31:05.508 "name": "raid_bdev1",
00:31:05.508 "uuid": "024460ef-7dc2-4ffa-8d3a-b7233a009666",
00:31:05.508 "strip_size_kb": 64,
00:31:05.508 "state": "online",
00:31:05.508 "raid_level": "raid5f",
00:31:05.508 "superblock": true,
00:31:05.508 "num_base_bdevs": 4,
00:31:05.508 "num_base_bdevs_discovered": 3,
00:31:05.508 "num_base_bdevs_operational": 3,
00:31:05.508 "base_bdevs_list": [
00:31:05.508 {
00:31:05.508 "name": null,
00:31:05.508 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:05.508 "is_configured": false,
00:31:05.508 "data_offset": 2048,
00:31:05.508 "data_size": 63488
00:31:05.508 },
00:31:05.508 {
00:31:05.508 "name": "pt2",
00:31:05.508 "uuid": "00000000-0000-0000-0000-000000000002",
00:31:05.508 "is_configured": true,
00:31:05.508 "data_offset": 2048,
00:31:05.508 "data_size": 63488
00:31:05.508 },
00:31:05.508 {
00:31:05.508 "name": "pt3",
00:31:05.508 "uuid": "00000000-0000-0000-0000-000000000003",
00:31:05.508 "is_configured": true,
00:31:05.508 "data_offset": 2048,
00:31:05.508 "data_size": 63488
00:31:05.508 },
00:31:05.508 {
00:31:05.508 "name": "pt4",
00:31:05.508 "uuid": "00000000-0000-0000-0000-000000000004",
00:31:05.508 "is_configured": true,
00:31:05.508 "data_offset": 2048,
00:31:05.508 "data_size": 63488
00:31:05.508 }
00:31:05.508 ]
00:31:05.508 }'
00:31:05.508 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:31:05.508 00:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:05.766 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online
00:31:05.766 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:31:06.024 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]]
00:31:06.024 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:31:06.024 00:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid'
00:31:06.282 [2024-07-25 00:15:02.058814] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 024460ef-7dc2-4ffa-8d3a-b7233a009666 '!=' 024460ef-7dc2-4ffa-8d3a-b7233a009666 ']'
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 107879
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 107879 ']'
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 107879
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 107879
killing process with pid 107879
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 107879'
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 107879
00:31:06.282 00:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 107879
00:31:06.282 [2024-07-25 00:15:02.106132] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:31:06.282 [2024-07-25 00:15:02.106300] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:31:06.282 [2024-07-25 00:15:02.106431] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:31:06.282 [2024-07-25 00:15:02.106464] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cf80 name raid_bdev1, state offline
00:31:06.849 [2024-07-25 00:15:02.453545] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:31:07.784 00:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0
00:31:07.784
00:31:07.784 real 0m20.388s
00:31:07.784 user 0m35.539s
00:31:07.784 sys 0m3.211s
00:31:07.784 00:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:07.784 00:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:07.784 ************************************
00:31:07.784 END TEST raid5f_superblock_test
00:31:07.784 ************************************
00:31:07.784 00:15:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # '[' true = true ']'
00:31:07.784 00:15:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true
00:31:07.784 00:15:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:31:07.784 00:15:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:07.784 00:15:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:31:07.784 ************************************
00:31:07.784 START TEST raid5f_rebuild_test
00:31:07.784 ************************************
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 ))
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']'
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']'
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # strip_size=64
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64'
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']'
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=108632
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 108632 /var/tmp/spdk-raid.sock
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 108632 ']'
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:07.784 00:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:31:08.042 [2024-07-25 00:15:03.558174] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
00:31:08.042 [2024-07-25 00:15:03.558553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108632 ]
00:31:08.042 I/O size of 3145728 is greater than zero copy threshold (65536).
00:31:08.042 Zero copy mechanism will not be used.
[2024-07-25 00:15:03.731583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:08.299 [2024-07-25 00:15:03.946653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:08.299 [2024-07-25 00:15:04.092975] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:31:08.556 00:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:08.556 00:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0
00:31:08.556 00:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:31:08.556 00:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:31:08.814 BaseBdev1_malloc
00:31:08.814 00:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:31:09.072 [2024-07-25 00:15:04.829434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:31:09.072 [2024-07-25 00:15:04.829681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:09.072 [2024-07-25 00:15:04.829722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80
00:31:09.072 [2024-07-25 00:15:04.829739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:09.072 [2024-07-25 00:15:04.832014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:09.072 [2024-07-25 00:15:04.832061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:31:09.072 BaseBdev1
00:31:09.072 00:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:31:09.072 00:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:31:09.331 BaseBdev2_malloc
00:31:09.331 00:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:31:09.590 [2024-07-25 00:15:05.272981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:31:09.590 [2024-07-25 00:15:05.273058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:09.590 [2024-07-25 00:15:05.273083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880
00:31:09.590 [2024-07-25 00:15:05.273099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:09.590 [2024-07-25 00:15:05.275091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:09.590 [2024-07-25 00:15:05.275136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:31:09.590 BaseBdev2
00:31:09.590 00:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:31:09.590 00:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:31:09.848 BaseBdev3_malloc
00:31:09.848 00:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:31:09.848 [2024-07-25 00:15:05.662622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:31:09.848 [2024-07-25 00:15:05.662684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:09.848 [2024-07-25 00:15:05.662709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480
00:31:09.848 [2024-07-25 00:15:05.662722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:09.848 [2024-07-25 00:15:05.664842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:09.848 [2024-07-25 00:15:05.664880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:31:09.848 BaseBdev3
00:31:09.848 00:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:31:09.848 00:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:31:10.106 BaseBdev4_malloc
00:31:10.106 00:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:31:10.365 [2024-07-25 00:15:06.066937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:31:10.365 [2024-07-25 00:15:06.067007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:10.365 [2024-07-25 00:15:06.067038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080
00:31:10.365 [2024-07-25 00:15:06.067052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:10.365 BaseBdev4 [2024-07-25 00:15:06.069317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:10.365 [2024-07-25 00:15:06.069352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:31:10.365 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:31:10.624 spare_malloc
00:31:10.624 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:31:10.624 spare_delay
00:31:10.882 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:31:10.882 [2024-07-25 00:15:06.668480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:31:10.882 [2024-07-25 00:15:06.668671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:10.882 [2024-07-25 00:15:06.668705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280
00:31:10.882 [2024-07-25 00:15:06.668721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:10.882 [2024-07-25 00:15:06.670794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:10.883 [2024-07-25 00:15:06.670843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:31:10.883 spare
00:31:10.883 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:31:11.141 [2024-07-25 00:15:06.848524] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:31:11.141 [2024-07-25 00:15:06.850299] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:31:11.141 [2024-07-25 00:15:06.850368] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:31:11.141 [2024-07-25 00:15:06.850432] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:31:11.141 [2024-07-25 00:15:06.850542] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880
00:31:11.141 [2024-07-25 00:15:06.850557] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:31:11.141 [2024-07-25 00:15:06.850659] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0
00:31:11.142 [2024-07-25 00:15:06.856336] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880
00:31:11.142 [2024-07-25 00:15:06.856360] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880
00:31:11.142 [2024-07-25 00:15:06.856538] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:11.142 00:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:11.401 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:31:11.401 "name": "raid_bdev1",
00:31:11.401 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc",
00:31:11.401 "strip_size_kb": 64,
00:31:11.401 "state": "online",
00:31:11.401 "raid_level": "raid5f",
00:31:11.401 "superblock": false,
00:31:11.401 "num_base_bdevs": 4,
00:31:11.401 "num_base_bdevs_discovered": 4,
00:31:11.401 "num_base_bdevs_operational": 4,
00:31:11.401 "base_bdevs_list": [
00:31:11.401 {
00:31:11.401 "name": "BaseBdev1",
00:31:11.401 "uuid": "2d741f85-c711-5756-9ab3-d83c7ac7043b",
00:31:11.401 "is_configured": true,
00:31:11.401 "data_offset": 0,
00:31:11.401 "data_size": 65536
00:31:11.401 },
00:31:11.401 {
00:31:11.401 "name": "BaseBdev2",
00:31:11.401 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b",
00:31:11.401 "is_configured": true,
00:31:11.401 "data_offset": 0,
00:31:11.401 "data_size": 65536
00:31:11.401 },
00:31:11.401 {
00:31:11.401 "name": "BaseBdev3",
00:31:11.401 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0",
00:31:11.401 "is_configured": true,
00:31:11.401 "data_offset": 0,
00:31:11.401 "data_size": 65536
00:31:11.401 },
00:31:11.401 {
00:31:11.401 "name": "BaseBdev4",
00:31:11.401 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f",
00:31:11.401 "is_configured": true,
00:31:11.401 "data_offset": 0,
00:31:11.401 "data_size": 65536
00:31:11.401 }
00:31:11.401 ]
00:31:11.401 }'
00:31:11.401 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:31:11.401 00:15:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:31:11.659 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:31:11.659 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks'
00:31:11.917 [2024-07-25 00:15:07.594288] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:31:11.917 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=196608
00:31:11.917 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:11.917 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']'
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']'
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:31:12.176 00:15:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:31:12.176 [2024-07-25 00:15:07.982215] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005c70
00:31:12.176 /dev/nbd0
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:31:12.176 1+0 records in
00:31:12.176 1+0 records out
00:31:12.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237092 s, 17.3 MB/s
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']'
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # write_unit_size=384
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # echo 192
00:31:12.176 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct
00:31:12.744 512+0 records in
00:31:12.744 512+0 records out
00:31:12.744 100663296 bytes (101 MB, 96 MiB) copied, 0.459627 s, 219 MB/s
00:31:12.744 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:31:12.744 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:31:12.744 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:31:12.744 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:31:12.744 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:31:12.744 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:31:12.744 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:31:13.003 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:31:13.003 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:31:13.003 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:31:13.003 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:31:13.003 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:31:13.003 [2024-07-25 00:15:08.754201] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:31:13.003 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:31:13.003 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:31:13.003 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:31:13.003 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:31:13.261 [2024-07-25 00:15:08.941294] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:13.261 00:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:13.520 00:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:31:13.520 "name": "raid_bdev1",
00:31:13.520 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc",
00:31:13.520 "strip_size_kb": 64,
00:31:13.520 "state": "online",
00:31:13.520 "raid_level": "raid5f",
00:31:13.520 "superblock": false,
00:31:13.520 "num_base_bdevs": 4,
00:31:13.520 "num_base_bdevs_discovered": 3,
00:31:13.520 "num_base_bdevs_operational": 3,
00:31:13.520 "base_bdevs_list": [
00:31:13.520 {
00:31:13.520 "name": null,
00:31:13.520 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:13.520 "is_configured": false,
00:31:13.520 "data_offset": 0,
00:31:13.520 "data_size": 65536
00:31:13.520 },
00:31:13.520 {
00:31:13.520 "name": "BaseBdev2",
00:31:13.520 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b",
00:31:13.520 "is_configured": true,
00:31:13.520 "data_offset": 0,
00:31:13.520 "data_size": 65536
00:31:13.520 },
00:31:13.520 {
00:31:13.520 "name": "BaseBdev3",
00:31:13.520 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0",
00:31:13.520 "is_configured": true,
00:31:13.520 "data_offset": 0,
00:31:13.520 "data_size": 65536
00:31:13.520 },
00:31:13.520 {
00:31:13.520 "name": "BaseBdev4",
00:31:13.520 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f",
00:31:13.520 "is_configured": true,
00:31:13.520 "data_offset": 0,
00:31:13.520 "data_size": 65536
00:31:13.520 }
00:31:13.520 ]
00:31:13.520 }'
00:31:13.520 00:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:31:13.520 00:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:31:13.778 00:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:31:14.037 [2024-07-25 00:15:09.709498] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:31:14.037 [2024-07-25 00:15:09.719706] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b270
00:31:14.037 [2024-07-25 00:15:09.726599] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:31:14.037 00:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1
00:31:14.973 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:31:14.973 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:31:14.973 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:31:14.973 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:31:14.973 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:31:14.973 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:14.973 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:15.255 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:31:15.255 "name": "raid_bdev1",
00:31:15.255 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc",
00:31:15.255 "strip_size_kb": 64,
00:31:15.255 "state": "online",
00:31:15.255 "raid_level": "raid5f",
00:31:15.255 "superblock": false,
00:31:15.255 "num_base_bdevs": 4,
00:31:15.255 "num_base_bdevs_discovered": 4,
00:31:15.255 "num_base_bdevs_operational": 4,
00:31:15.255 "process": {
00:31:15.255 "type": "rebuild",
00:31:15.255 "target": "spare",
00:31:15.255 "progress": {
00:31:15.255 "blocks": 23040,
00:31:15.255 "percent": 11
00:31:15.255 }
00:31:15.255 },
00:31:15.255 "base_bdevs_list": [
00:31:15.255 {
00:31:15.255 "name": "spare",
00:31:15.255 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a",
00:31:15.255 "is_configured": true,
00:31:15.255 "data_offset": 0,
00:31:15.255 "data_size": 65536
00:31:15.255 },
00:31:15.255 {
00:31:15.255 "name": "BaseBdev2",
00:31:15.255 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b",
00:31:15.255 "is_configured": true,
00:31:15.255 "data_offset": 0,
00:31:15.255 "data_size": 65536
00:31:15.255 },
00:31:15.255 {
00:31:15.255 "name": "BaseBdev3",
00:31:15.255 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0",
00:31:15.255 "is_configured": true,
00:31:15.255 "data_offset": 0,
00:31:15.255 "data_size": 65536
00:31:15.255 },
00:31:15.255 {
00:31:15.255 "name": "BaseBdev4",
00:31:15.255 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f",
00:31:15.255 "is_configured": true,
00:31:15.255 "data_offset": 0,
00:31:15.255 "data_size": 65536
00:31:15.255 }
00:31:15.255 ]
00:31:15.255 }'
00:31:15.255 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:31:15.255 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:31:15.255 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:31:15.255 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:31:15.255 00:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:31:15.528 [2024-07-25 00:15:11.175943] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:31:15.528 [2024-07-25 00:15:11.237371] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:31:15.528 [2024-07-25 00:15:11.237438] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:31:15.528 [2024-07-25 00:15:11.237459] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:31:15.528 [2024-07-25 00:15:11.237470] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:15.528 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:15.787 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:31:15.787 "name": "raid_bdev1",
00:31:15.787 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc",
00:31:15.787 "strip_size_kb": 64,
00:31:15.787 "state": "online",
00:31:15.787 "raid_level": "raid5f",
00:31:15.787 "superblock": false,
00:31:15.787 "num_base_bdevs": 4,
00:31:15.787 "num_base_bdevs_discovered": 3,
00:31:15.787 "num_base_bdevs_operational": 3,
00:31:15.787 "base_bdevs_list": [
00:31:15.787 {
00:31:15.787 "name": null,
00:31:15.787 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:15.787 "is_configured": false,
00:31:15.787 "data_offset": 0,
00:31:15.787 "data_size": 65536
00:31:15.787 },
00:31:15.787 {
00:31:15.787 "name": "BaseBdev2",
00:31:15.787 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b",
00:31:15.787 "is_configured": true,
00:31:15.787 "data_offset": 0,
00:31:15.787 "data_size": 65536
00:31:15.787 },
00:31:15.787 {
00:31:15.787 "name": "BaseBdev3",
00:31:15.787 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0",
00:31:15.787 "is_configured": true,
00:31:15.787 "data_offset": 0,
00:31:15.787 "data_size": 65536
00:31:15.787 },
00:31:15.787 {
00:31:15.787 "name": "BaseBdev4",
00:31:15.787 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f",
00:31:15.787 "is_configured": true,
00:31:15.787 "data_offset": 0,
00:31:15.787 "data_size": 65536
00:31:15.787 }
00:31:15.787 ]
00:31:15.787 }'
00:31:15.787 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:31:15.787 00:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:31:16.046 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none
00:31:16.046 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:31:16.046 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:31:16.046 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none
00:31:16.046 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:31:16.046 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:16.046 00:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:16.305 00:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:31:16.305 "name": "raid_bdev1",
00:31:16.305 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc",
00:31:16.305 "strip_size_kb": 64,
00:31:16.305 "state": "online",
00:31:16.305 "raid_level": "raid5f",
00:31:16.305 "superblock": false,
00:31:16.305 "num_base_bdevs": 4,
00:31:16.305 "num_base_bdevs_discovered": 3,
00:31:16.305 "num_base_bdevs_operational": 3,
00:31:16.305 "base_bdevs_list": [
00:31:16.305 {
00:31:16.305 "name": null,
00:31:16.305 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:16.305 "is_configured": false,
00:31:16.305 "data_offset": 0,
00:31:16.305 "data_size": 65536
00:31:16.305 },
00:31:16.305 {
00:31:16.305 "name": "BaseBdev2",
00:31:16.305 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b",
00:31:16.305 "is_configured": true,
00:31:16.305 "data_offset": 0,
00:31:16.305 "data_size": 65536
00:31:16.305 },
00:31:16.305 {
00:31:16.305 "name": "BaseBdev3",
00:31:16.305 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0",
00:31:16.305 "is_configured": true,
00:31:16.305 "data_offset": 0,
00:31:16.305 "data_size": 65536
00:31:16.305 },
00:31:16.305 {
00:31:16.305 "name": "BaseBdev4",
00:31:16.305 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f",
00:31:16.305 "is_configured": true,
00:31:16.305 "data_offset": 0,
00:31:16.305 "data_size": 65536
00:31:16.305 }
00:31:16.305 ]
00:31:16.305 }'
00:31:16.305 00:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:31:16.305 00:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:31:16.305 00:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:31:16.305 00:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:31:16.305 00:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:31:16.564 [2024-07-25 00:15:12.296803] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:31:16.564 [2024-07-25 00:15:12.307596] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b340
00:31:16.564 [2024-07-25 00:15:12.314909] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:31:16.564 00:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1
00:31:17.500 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:31:17.500 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:31:17.500 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:31:17.500 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:31:17.500 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:31:17.500 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:17.500 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:17.758 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:31:17.758 "name": "raid_bdev1",
00:31:17.758 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc",
00:31:17.758 "strip_size_kb": 64,
00:31:17.758 "state": "online",
00:31:17.758 "raid_level": "raid5f",
00:31:17.758 "superblock": false,
00:31:17.758 "num_base_bdevs": 4,
00:31:17.758 "num_base_bdevs_discovered": 4,
00:31:17.758 "num_base_bdevs_operational": 4,
00:31:17.758 "process": {
00:31:17.758 "type": "rebuild",
00:31:17.758 "target": "spare",
00:31:17.758 "progress": {
00:31:17.758 "blocks": 23040,
00:31:17.758 "percent": 11
00:31:17.758 }
00:31:17.758 },
00:31:17.758 "base_bdevs_list": [
00:31:17.758 {
00:31:17.758 "name": "spare",
00:31:17.758 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a",
00:31:17.758 "is_configured": true,
00:31:17.758 "data_offset": 0,
00:31:17.758 "data_size": 65536
00:31:17.758 },
00:31:17.758 {
00:31:17.758 "name": "BaseBdev2",
00:31:17.758 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b",
00:31:17.758 "is_configured": true,
00:31:17.758 "data_offset": 0,
00:31:17.758 "data_size": 65536
00:31:17.758 },
00:31:17.758 {
00:31:17.758 "name": "BaseBdev3",
00:31:17.758 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0",
00:31:17.758 "is_configured": true,
00:31:17.758 "data_offset": 0,
00:31:17.758 "data_size": 65536
00:31:17.758 },
00:31:17.758 {
00:31:17.758 "name": "BaseBdev4",
00:31:17.758 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f",
00:31:17.758 "is_configured": true,
00:31:17.758 "data_offset": 0,
00:31:17.758 "data_size": 65536
00:31:17.758 }
00:31:17.758 ]
00:31:17.758 }'
00:31:17.758 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:31:17.758 00:15:13 bdev_raid.raid5f_rebuild_test --
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:17.758 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:17.758 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:17.758 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:31:17.758 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:31:17.759 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:31:17.759 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=1103 00:31:17.759 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:17.759 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:17.759 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:17.759 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:17.759 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:17.759 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:17.759 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.759 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.016 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:18.016 "name": "raid_bdev1", 00:31:18.016 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc", 00:31:18.016 "strip_size_kb": 64, 00:31:18.016 "state": "online", 00:31:18.016 "raid_level": "raid5f", 00:31:18.016 "superblock": false, 00:31:18.016 "num_base_bdevs": 4, 00:31:18.016 "num_base_bdevs_discovered": 4, 00:31:18.016 "num_base_bdevs_operational": 4, 00:31:18.016 "process": { 00:31:18.016 "type": "rebuild", 00:31:18.016 "target": "spare", 00:31:18.016 "progress": { 00:31:18.016 "blocks": 26880, 00:31:18.016 "percent": 13 00:31:18.016 } 00:31:18.016 }, 00:31:18.016 "base_bdevs_list": [ 00:31:18.016 { 00:31:18.016 "name": "spare", 00:31:18.016 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a", 00:31:18.016 "is_configured": true, 00:31:18.016 "data_offset": 0, 00:31:18.016 "data_size": 65536 00:31:18.016 }, 00:31:18.016 { 00:31:18.016 "name": "BaseBdev2", 00:31:18.016 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b", 00:31:18.016 "is_configured": true, 00:31:18.016 "data_offset": 0, 00:31:18.016 "data_size": 65536 00:31:18.016 }, 00:31:18.016 { 00:31:18.016 "name": "BaseBdev3", 00:31:18.016 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0", 00:31:18.016 "is_configured": true, 00:31:18.016 "data_offset": 0, 00:31:18.016 "data_size": 65536 00:31:18.016 }, 00:31:18.016 { 00:31:18.016 "name": "BaseBdev4", 00:31:18.016 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f", 00:31:18.016 "is_configured": true, 00:31:18.016 "data_offset": 0, 00:31:18.016 "data_size": 65536 00:31:18.016 } 00:31:18.016 ] 00:31:18.016 }' 00:31:18.016 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:18.016 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:18.017 00:15:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:18.017 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:18.017 00:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:19.392 00:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:19.392 00:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:19.392 00:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:19.392 00:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:19.392 00:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:19.392 00:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:19.392 00:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.392 00:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.392 00:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:19.392 "name": "raid_bdev1", 00:31:19.392 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc", 00:31:19.392 "strip_size_kb": 64, 00:31:19.392 "state": "online", 00:31:19.392 "raid_level": "raid5f", 00:31:19.392 "superblock": false, 00:31:19.392 "num_base_bdevs": 4, 00:31:19.392 "num_base_bdevs_discovered": 4, 00:31:19.393 "num_base_bdevs_operational": 4, 00:31:19.393 "process": { 00:31:19.393 "type": "rebuild", 00:31:19.393 "target": "spare", 00:31:19.393 "progress": { 00:31:19.393 "blocks": 49920, 00:31:19.393 "percent": 25 00:31:19.393 } 00:31:19.393 }, 00:31:19.393 "base_bdevs_list": [ 00:31:19.393 { 00:31:19.393 "name": "spare", 00:31:19.393 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a", 00:31:19.393 "is_configured": true, 00:31:19.393 "data_offset": 0, 00:31:19.393 "data_size": 65536 00:31:19.393 }, 00:31:19.393 { 00:31:19.393 "name": "BaseBdev2", 00:31:19.393 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b", 00:31:19.393 "is_configured": true, 00:31:19.393 "data_offset": 0, 00:31:19.393 "data_size": 65536 00:31:19.393 }, 00:31:19.393 { 00:31:19.393 "name": "BaseBdev3", 00:31:19.393 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0", 00:31:19.393 "is_configured": true, 00:31:19.393 "data_offset": 0, 00:31:19.393 "data_size": 65536 00:31:19.393 }, 00:31:19.393 { 00:31:19.393 "name": "BaseBdev4", 00:31:19.393 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f", 00:31:19.393 "is_configured": true, 00:31:19.393 "data_offset": 0, 00:31:19.393 "data_size": 65536 00:31:19.393 } 00:31:19.393 ] 00:31:19.393 }' 00:31:19.393 00:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:19.393 00:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:19.393 00:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:19.393 00:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:19.393 00:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:20.330 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:20.330 00:15:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:20.330 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:20.330 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:20.330 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:20.330 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:20.330 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:20.330 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:20.589 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:20.589 "name": "raid_bdev1", 00:31:20.589 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc", 00:31:20.589 "strip_size_kb": 64, 00:31:20.589 "state": "online", 00:31:20.589 "raid_level": "raid5f", 00:31:20.589 "superblock": false, 00:31:20.589 "num_base_bdevs": 4, 00:31:20.589 "num_base_bdevs_discovered": 4, 00:31:20.589 "num_base_bdevs_operational": 4, 00:31:20.589 "process": { 00:31:20.589 "type": "rebuild", 00:31:20.589 "target": "spare", 00:31:20.589 "progress": { 00:31:20.589 "blocks": 74880, 00:31:20.589 "percent": 38 00:31:20.589 } 00:31:20.589 }, 00:31:20.589 "base_bdevs_list": [ 00:31:20.589 { 00:31:20.589 "name": "spare", 00:31:20.589 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a", 00:31:20.589 "is_configured": true, 00:31:20.589 "data_offset": 0, 00:31:20.589 "data_size": 65536 00:31:20.589 }, 00:31:20.589 { 00:31:20.589 "name": "BaseBdev2", 00:31:20.589 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b", 00:31:20.589 "is_configured": true, 00:31:20.589 "data_offset": 0, 00:31:20.589 "data_size": 65536 00:31:20.589 }, 00:31:20.589 { 00:31:20.589 "name": "BaseBdev3", 00:31:20.589 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0", 00:31:20.589 "is_configured": true, 00:31:20.589 "data_offset": 0, 00:31:20.589 "data_size": 65536 00:31:20.589 }, 00:31:20.589 { 00:31:20.589 "name": "BaseBdev4", 00:31:20.589 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f", 00:31:20.589 "is_configured": true, 00:31:20.589 "data_offset": 0, 00:31:20.589 "data_size": 65536 00:31:20.589 } 00:31:20.589 ] 00:31:20.589 }' 00:31:20.589 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:20.589 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:20.589 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:20.589 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:20.589 00:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:21.526 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:21.526 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:21.526 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:21.526 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:21.526 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:31:21.526 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:21.526 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.526 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:21.785 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:21.785 "name": "raid_bdev1", 00:31:21.785 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc", 00:31:21.785 "strip_size_kb": 64, 00:31:21.785 "state": "online", 00:31:21.785 "raid_level": "raid5f", 00:31:21.785 "superblock": false, 00:31:21.785 "num_base_bdevs": 4, 00:31:21.785 "num_base_bdevs_discovered": 4, 00:31:21.785 "num_base_bdevs_operational": 4, 00:31:21.785 "process": { 00:31:21.785 "type": "rebuild", 00:31:21.785 "target": "spare", 00:31:21.785 "progress": { 00:31:21.785 "blocks": 97920, 00:31:21.785 "percent": 49 00:31:21.785 } 00:31:21.785 }, 00:31:21.785 "base_bdevs_list": [ 00:31:21.785 { 00:31:21.785 "name": "spare", 00:31:21.785 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a", 00:31:21.785 "is_configured": true, 00:31:21.785 "data_offset": 0, 00:31:21.785 "data_size": 65536 00:31:21.785 }, 00:31:21.785 { 00:31:21.785 "name": "BaseBdev2", 00:31:21.785 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b", 00:31:21.785 "is_configured": true, 00:31:21.785 "data_offset": 0, 00:31:21.785 "data_size": 65536 00:31:21.785 }, 00:31:21.785 { 00:31:21.785 "name": "BaseBdev3", 00:31:21.785 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0", 00:31:21.785 "is_configured": true, 00:31:21.785 "data_offset": 0, 00:31:21.785 "data_size": 65536 00:31:21.785 }, 00:31:21.785 { 00:31:21.785 "name": "BaseBdev4", 00:31:21.785 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f", 00:31:21.785 "is_configured": true, 00:31:21.785 "data_offset": 0, 00:31:21.785 "data_size": 65536 00:31:21.785 } 00:31:21.785 ] 00:31:21.785 }' 00:31:21.785 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:21.785 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:21.785 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:21.785 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:21.785 00:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:22.721 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:22.721 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:22.721 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:22.721 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:22.721 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:22.721 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:22.721 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.721 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
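The repeating verify/sleep blocks above come from a timeout-bounded polling loop; a condensed, illustrative sketch of its shape, assuming the helpers seen in the trace (the timeout value 1103 appears at bdev_raid.sh@721):

  # Poll rebuild progress roughly once per second until it finishes or the
  # bash SECONDS counter crosses the timeout (illustrative control flow).
  timeout=1103
  while (( SECONDS < timeout )); do
      # Passes while a rebuild targeting "spare" is reported; the real
      # script breaks out once .process.type reverts to "none".
      verify_raid_bdev_process raid_bdev1 rebuild spare || break
      sleep 1
  done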
00:31:22.980 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:22.980 "name": "raid_bdev1", 00:31:22.980 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc", 00:31:22.980 "strip_size_kb": 64, 00:31:22.980 "state": "online", 00:31:22.980 "raid_level": "raid5f", 00:31:22.980 "superblock": false, 00:31:22.980 "num_base_bdevs": 4, 00:31:22.980 "num_base_bdevs_discovered": 4, 00:31:22.981 "num_base_bdevs_operational": 4, 00:31:22.981 "process": { 00:31:22.981 "type": "rebuild", 00:31:22.981 "target": "spare", 00:31:22.981 "progress": { 00:31:22.981 "blocks": 122880, 00:31:22.981 "percent": 62 00:31:22.981 } 00:31:22.981 }, 00:31:22.981 "base_bdevs_list": [ 00:31:22.981 { 00:31:22.981 "name": "spare", 00:31:22.981 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a", 00:31:22.981 "is_configured": true, 00:31:22.981 "data_offset": 0, 00:31:22.981 "data_size": 65536 00:31:22.981 }, 00:31:22.981 { 00:31:22.981 "name": "BaseBdev2", 00:31:22.981 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b", 00:31:22.981 "is_configured": true, 00:31:22.981 "data_offset": 0, 00:31:22.981 "data_size": 65536 00:31:22.981 }, 00:31:22.981 { 00:31:22.981 "name": "BaseBdev3", 00:31:22.981 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0", 00:31:22.981 "is_configured": true, 00:31:22.981 "data_offset": 0, 00:31:22.981 "data_size": 65536 00:31:22.981 }, 00:31:22.981 { 00:31:22.981 "name": "BaseBdev4", 00:31:22.981 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f", 00:31:22.981 "is_configured": true, 00:31:22.981 "data_offset": 0, 00:31:22.981 "data_size": 65536 00:31:22.981 } 00:31:22.981 ] 00:31:22.981 }' 00:31:22.981 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:22.981 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:22.981 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:22.981 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:22.981 00:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:24.358 00:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:24.358 00:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:24.358 00:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:24.358 00:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:24.358 00:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:24.358 00:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:24.358 00:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.358 00:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.358 00:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:24.358 "name": "raid_bdev1", 00:31:24.358 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc", 00:31:24.358 "strip_size_kb": 64, 00:31:24.358 "state": "online", 00:31:24.358 "raid_level": "raid5f", 00:31:24.358 "superblock": false, 00:31:24.358 "num_base_bdevs": 4, 00:31:24.358 "num_base_bdevs_discovered": 4, 
00:31:24.358 "num_base_bdevs_operational": 4, 00:31:24.358 "process": { 00:31:24.358 "type": "rebuild", 00:31:24.358 "target": "spare", 00:31:24.358 "progress": { 00:31:24.358 "blocks": 147840, 00:31:24.358 "percent": 75 00:31:24.358 } 00:31:24.358 }, 00:31:24.358 "base_bdevs_list": [ 00:31:24.358 { 00:31:24.358 "name": "spare", 00:31:24.358 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a", 00:31:24.358 "is_configured": true, 00:31:24.358 "data_offset": 0, 00:31:24.358 "data_size": 65536 00:31:24.358 }, 00:31:24.358 { 00:31:24.358 "name": "BaseBdev2", 00:31:24.358 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b", 00:31:24.358 "is_configured": true, 00:31:24.358 "data_offset": 0, 00:31:24.358 "data_size": 65536 00:31:24.358 }, 00:31:24.358 { 00:31:24.358 "name": "BaseBdev3", 00:31:24.358 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0", 00:31:24.358 "is_configured": true, 00:31:24.358 "data_offset": 0, 00:31:24.358 "data_size": 65536 00:31:24.358 }, 00:31:24.358 { 00:31:24.358 "name": "BaseBdev4", 00:31:24.358 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f", 00:31:24.358 "is_configured": true, 00:31:24.358 "data_offset": 0, 00:31:24.358 "data_size": 65536 00:31:24.358 } 00:31:24.358 ] 00:31:24.358 }' 00:31:24.358 00:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:24.358 00:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:24.358 00:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:24.358 00:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:24.358 00:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:25.294 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:25.294 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:25.294 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:25.294 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:25.294 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:25.294 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:25.294 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.294 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.553 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:25.553 "name": "raid_bdev1", 00:31:25.553 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc", 00:31:25.553 "strip_size_kb": 64, 00:31:25.553 "state": "online", 00:31:25.553 "raid_level": "raid5f", 00:31:25.553 "superblock": false, 00:31:25.553 "num_base_bdevs": 4, 00:31:25.553 "num_base_bdevs_discovered": 4, 00:31:25.553 "num_base_bdevs_operational": 4, 00:31:25.553 "process": { 00:31:25.553 "type": "rebuild", 00:31:25.553 "target": "spare", 00:31:25.553 "progress": { 00:31:25.553 "blocks": 170880, 00:31:25.553 "percent": 86 00:31:25.553 } 00:31:25.553 }, 00:31:25.553 "base_bdevs_list": [ 00:31:25.553 { 00:31:25.553 "name": "spare", 00:31:25.553 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a", 00:31:25.553 
"is_configured": true, 00:31:25.553 "data_offset": 0, 00:31:25.553 "data_size": 65536 00:31:25.553 }, 00:31:25.553 { 00:31:25.553 "name": "BaseBdev2", 00:31:25.553 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b", 00:31:25.553 "is_configured": true, 00:31:25.553 "data_offset": 0, 00:31:25.553 "data_size": 65536 00:31:25.553 }, 00:31:25.553 { 00:31:25.553 "name": "BaseBdev3", 00:31:25.553 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0", 00:31:25.553 "is_configured": true, 00:31:25.553 "data_offset": 0, 00:31:25.553 "data_size": 65536 00:31:25.553 }, 00:31:25.553 { 00:31:25.553 "name": "BaseBdev4", 00:31:25.553 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f", 00:31:25.553 "is_configured": true, 00:31:25.553 "data_offset": 0, 00:31:25.553 "data_size": 65536 00:31:25.553 } 00:31:25.553 ] 00:31:25.553 }' 00:31:25.553 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:25.553 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:25.553 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:25.553 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:25.553 00:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:26.928 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:26.928 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:26.928 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:26.928 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:26.928 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:26.928 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:26.928 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.928 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.928 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:26.928 "name": "raid_bdev1", 00:31:26.928 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc", 00:31:26.928 "strip_size_kb": 64, 00:31:26.928 "state": "online", 00:31:26.928 "raid_level": "raid5f", 00:31:26.928 "superblock": false, 00:31:26.928 "num_base_bdevs": 4, 00:31:26.928 "num_base_bdevs_discovered": 4, 00:31:26.928 "num_base_bdevs_operational": 4, 00:31:26.928 "process": { 00:31:26.928 "type": "rebuild", 00:31:26.928 "target": "spare", 00:31:26.928 "progress": { 00:31:26.928 "blocks": 195840, 00:31:26.928 "percent": 99 00:31:26.928 } 00:31:26.928 }, 00:31:26.928 "base_bdevs_list": [ 00:31:26.928 { 00:31:26.928 "name": "spare", 00:31:26.929 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a", 00:31:26.929 "is_configured": true, 00:31:26.929 "data_offset": 0, 00:31:26.929 "data_size": 65536 00:31:26.929 }, 00:31:26.929 { 00:31:26.929 "name": "BaseBdev2", 00:31:26.929 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b", 00:31:26.929 "is_configured": true, 00:31:26.929 "data_offset": 0, 00:31:26.929 "data_size": 65536 00:31:26.929 }, 00:31:26.929 { 00:31:26.929 "name": "BaseBdev3", 00:31:26.929 "uuid": 
"bad3bc1e-8a09-55e8-818d-08f8809a89c0", 00:31:26.929 "is_configured": true, 00:31:26.929 "data_offset": 0, 00:31:26.929 "data_size": 65536 00:31:26.929 }, 00:31:26.929 { 00:31:26.929 "name": "BaseBdev4", 00:31:26.929 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f", 00:31:26.929 "is_configured": true, 00:31:26.929 "data_offset": 0, 00:31:26.929 "data_size": 65536 00:31:26.929 } 00:31:26.929 ] 00:31:26.929 }' 00:31:26.929 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:26.929 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:26.929 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:26.929 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:26.929 00:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:26.929 [2024-07-25 00:15:22.682985] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:26.929 [2024-07-25 00:15:22.683052] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:26.929 [2024-07-25 00:15:22.683117] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:27.863 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:27.863 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:27.863 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:27.863 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:27.863 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:27.863 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:27.863 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.863 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:28.122 "name": "raid_bdev1", 00:31:28.122 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc", 00:31:28.122 "strip_size_kb": 64, 00:31:28.122 "state": "online", 00:31:28.122 "raid_level": "raid5f", 00:31:28.122 "superblock": false, 00:31:28.122 "num_base_bdevs": 4, 00:31:28.122 "num_base_bdevs_discovered": 4, 00:31:28.122 "num_base_bdevs_operational": 4, 00:31:28.122 "base_bdevs_list": [ 00:31:28.122 { 00:31:28.122 "name": "spare", 00:31:28.122 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a", 00:31:28.122 "is_configured": true, 00:31:28.122 "data_offset": 0, 00:31:28.122 "data_size": 65536 00:31:28.122 }, 00:31:28.122 { 00:31:28.122 "name": "BaseBdev2", 00:31:28.122 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b", 00:31:28.122 "is_configured": true, 00:31:28.122 "data_offset": 0, 00:31:28.122 "data_size": 65536 00:31:28.122 }, 00:31:28.122 { 00:31:28.122 "name": "BaseBdev3", 00:31:28.122 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0", 00:31:28.122 "is_configured": true, 00:31:28.122 "data_offset": 0, 00:31:28.122 "data_size": 65536 00:31:28.122 }, 00:31:28.122 { 00:31:28.122 "name": "BaseBdev4", 00:31:28.122 "uuid": 
"af3898fa-d991-53ea-b06a-4e1d85ca816f", 00:31:28.122 "is_configured": true, 00:31:28.122 "data_offset": 0, 00:31:28.122 "data_size": 65536 00:31:28.122 } 00:31:28.122 ] 00:31:28.122 }' 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.122 00:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:28.381 "name": "raid_bdev1", 00:31:28.381 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc", 00:31:28.381 "strip_size_kb": 64, 00:31:28.381 "state": "online", 00:31:28.381 "raid_level": "raid5f", 00:31:28.381 "superblock": false, 00:31:28.381 "num_base_bdevs": 4, 00:31:28.381 "num_base_bdevs_discovered": 4, 00:31:28.381 "num_base_bdevs_operational": 4, 00:31:28.381 "base_bdevs_list": [ 00:31:28.381 { 00:31:28.381 "name": "spare", 00:31:28.381 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a", 00:31:28.381 "is_configured": true, 00:31:28.381 "data_offset": 0, 00:31:28.381 "data_size": 65536 00:31:28.381 }, 00:31:28.381 { 00:31:28.381 "name": "BaseBdev2", 00:31:28.381 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b", 00:31:28.381 "is_configured": true, 00:31:28.381 "data_offset": 0, 00:31:28.381 "data_size": 65536 00:31:28.381 }, 00:31:28.381 { 00:31:28.381 "name": "BaseBdev3", 00:31:28.381 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0", 00:31:28.381 "is_configured": true, 00:31:28.381 "data_offset": 0, 00:31:28.381 "data_size": 65536 00:31:28.381 }, 00:31:28.381 { 00:31:28.381 "name": "BaseBdev4", 00:31:28.381 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f", 00:31:28.381 "is_configured": true, 00:31:28.381 "data_offset": 0, 00:31:28.381 "data_size": 65536 00:31:28.381 } 00:31:28.381 ] 00:31:28.381 }' 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 
online raid5f 64 4 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.381 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.639 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:28.639 "name": "raid_bdev1", 00:31:28.639 "uuid": "d5c54a68-996e-4bf6-994e-18d599cd03cc", 00:31:28.639 "strip_size_kb": 64, 00:31:28.639 "state": "online", 00:31:28.639 "raid_level": "raid5f", 00:31:28.639 "superblock": false, 00:31:28.639 "num_base_bdevs": 4, 00:31:28.639 "num_base_bdevs_discovered": 4, 00:31:28.639 "num_base_bdevs_operational": 4, 00:31:28.639 "base_bdevs_list": [ 00:31:28.639 { 00:31:28.639 "name": "spare", 00:31:28.639 "uuid": "5e797b09-9495-5a29-aaa3-0813ddd0556a", 00:31:28.639 "is_configured": true, 00:31:28.639 "data_offset": 0, 00:31:28.639 "data_size": 65536 00:31:28.639 }, 00:31:28.639 { 00:31:28.639 "name": "BaseBdev2", 00:31:28.639 "uuid": "0ba3bf81-00ca-5b8c-8826-f39bac9b1f6b", 00:31:28.639 "is_configured": true, 00:31:28.639 "data_offset": 0, 00:31:28.639 "data_size": 65536 00:31:28.639 }, 00:31:28.639 { 00:31:28.639 "name": "BaseBdev3", 00:31:28.639 "uuid": "bad3bc1e-8a09-55e8-818d-08f8809a89c0", 00:31:28.639 "is_configured": true, 00:31:28.639 "data_offset": 0, 00:31:28.639 "data_size": 65536 00:31:28.639 }, 00:31:28.639 { 00:31:28.639 "name": "BaseBdev4", 00:31:28.639 "uuid": "af3898fa-d991-53ea-b06a-4e1d85ca816f", 00:31:28.639 "is_configured": true, 00:31:28.639 "data_offset": 0, 00:31:28.639 "data_size": 65536 00:31:28.639 } 00:31:28.639 ] 00:31:28.639 }' 00:31:28.639 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:28.639 00:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.897 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:29.155 [2024-07-25 00:15:24.911124] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:29.155 [2024-07-25 00:15:24.911158] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:29.155 [2024-07-25 00:15:24.911232] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:29.155 [2024-07-25 00:15:24.911328] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:31:29.155 [2024-07-25 00:15:24.911342] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:31:29.155 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:31:29.155 00:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:29.414 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:29.672 /dev/nbd0 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:29.672 1+0 records in 00:31:29.672 1+0 records out 00:31:29.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178695 s, 22.9 MB/s 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # 
size=4096 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:29.672 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:29.673 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:31:29.673 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:29.673 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:29.673 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:29.931 /dev/nbd1 00:31:29.931 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:29.931 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:29.931 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:29.931 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:31:29.931 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:29.931 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:29.931 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:29.931 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:31:29.931 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:29.931 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:29.932 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:29.932 1+0 records in 00:31:29.932 1+0 records out 00:31:29.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311703 s, 13.1 MB/s 00:31:29.932 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:29.932 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:31:29.932 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:29.932 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:29.932 00:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:31:29.932 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:29.932 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:29.932 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:30.190 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:30.190 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:30.190 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:30.190 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:30.190 00:15:25 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@51 -- # local i 00:31:30.190 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:30.190 00:15:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:30.448 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:30.448 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:30.448 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:30.448 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:30.448 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:30.448 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:30.448 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:30.448 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:30.448 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:30.448 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 108632 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 108632 ']' 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 108632 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108632 00:31:30.706 killing process with pid 108632 00:31:30.706 Received shutdown signal, test time was about 60.000000 seconds 00:31:30.706 00:31:30.706 Latency(us) 00:31:30.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.706 =================================================================================================================== 00:31:30.706 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108632' 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 108632 00:31:30.706 [2024-07-25 00:15:26.400432] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:30.706 00:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 108632 00:31:30.964 [2024-07-25 00:15:26.716210] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:31.900 ************************************ 00:31:31.900 END TEST raid5f_rebuild_test 00:31:31.900 ************************************ 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:31:31.900 00:31:31.900 real 0m24.145s 00:31:31.900 user 0m32.994s 00:31:31.900 sys 0m2.828s 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.900 00:15:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:31:31.900 00:15:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:31:31.900 00:15:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:31.900 00:15:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:31.900 ************************************ 00:31:31.900 START TEST raid5f_rebuild_test_sb 00:31:31.900 ************************************ 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # 
echo BaseBdev4 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=109208 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 109208 /var/tmp/spdk-raid.sock 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 109208 ']' 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:31.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:31.900 00:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.900 [2024-07-25 00:15:27.756854] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:31:31.900 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:31.900 Zero copy mechanism will not be used. 
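The zero-copy notice above is triggered by the 3 MiB I/O size (-o 3M, i.e. 3145728 bytes) passed to bdevperf; the launch sequence, condensed from the trace with flags copied verbatim from bdev_raid.sh@611 (the backgrounding and wait are the usual autotest pattern, not literal trace lines):

  # Start bdevperf with its RPC server on the raid test socket and wait
  # for the socket to come up before issuing RPCs against it.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock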
00:31:31.900 [2024-07-25 00:15:27.757031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109208 ] 00:31:32.159 [2024-07-25 00:15:27.926143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.417 [2024-07-25 00:15:28.075280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.417 [2024-07-25 00:15:28.215463] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:32.984 00:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:32.984 00:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:31:32.984 00:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:31:32.984 00:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:33.257 BaseBdev1_malloc 00:31:33.257 00:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:33.531 [2024-07-25 00:15:29.143355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:33.531 [2024-07-25 00:15:29.143447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:33.531 [2024-07-25 00:15:29.143477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:31:33.531 [2024-07-25 00:15:29.143492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:33.531 [2024-07-25 00:15:29.145623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:33.531 [2024-07-25 00:15:29.145666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:33.531 BaseBdev1 00:31:33.531 00:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:31:33.531 00:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:33.790 BaseBdev2_malloc 00:31:33.790 00:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:33.790 [2024-07-25 00:15:29.590202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:33.790 [2024-07-25 00:15:29.590296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:33.790 [2024-07-25 00:15:29.590322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:31:33.790 [2024-07-25 00:15:29.590339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:33.790 [2024-07-25 00:15:29.592493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:33.790 [2024-07-25 00:15:29.592534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:33.790 BaseBdev2 00:31:33.790 00:15:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:31:33.790 00:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:34.049 BaseBdev3_malloc 00:31:34.049 00:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:34.308 [2024-07-25 00:15:30.038359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:34.308 [2024-07-25 00:15:30.038445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:34.308 [2024-07-25 00:15:30.038472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:31:34.308 [2024-07-25 00:15:30.038487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:34.308 [2024-07-25 00:15:30.040732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:34.308 [2024-07-25 00:15:30.040807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:34.308 BaseBdev3 00:31:34.308 00:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:31:34.308 00:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:31:34.567 BaseBdev4_malloc 00:31:34.567 00:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:31:34.826 [2024-07-25 00:15:30.459773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:31:34.826 [2024-07-25 00:15:30.459868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:34.826 [2024-07-25 00:15:30.459898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:31:34.826 [2024-07-25 00:15:30.459912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:34.826 [2024-07-25 00:15:30.462015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:34.826 [2024-07-25 00:15:30.462074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:31:34.826 BaseBdev4 00:31:34.826 00:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:34.826 spare_malloc 00:31:34.826 00:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:35.085 spare_delay 00:31:35.085 00:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:35.344 [2024-07-25 00:15:31.040151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:35.344 [2024-07-25 00:15:31.040267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:35.344 [2024-07-25 00:15:31.040294] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:31:35.344 [2024-07-25 00:15:31.040323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:35.344 [2024-07-25 00:15:31.042351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:35.344 [2024-07-25 00:15:31.042391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:35.344 spare 00:31:35.344 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:31:35.604 [2024-07-25 00:15:31.224211] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:35.604 [2024-07-25 00:15:31.226116] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:35.604 [2024-07-25 00:15:31.226206] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:35.604 [2024-07-25 00:15:31.226275] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:35.604 [2024-07-25 00:15:31.226535] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:31:35.604 [2024-07-25 00:15:31.226563] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:35.604 [2024-07-25 00:15:31.226684] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:31:35.604 [2024-07-25 00:15:31.232315] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:31:35.604 [2024-07-25 00:15:31.232356] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:31:35.604 [2024-07-25 00:15:31.232608] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.604 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.863 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:35.863 
"name": "raid_bdev1", 00:31:35.863 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:35.863 "strip_size_kb": 64, 00:31:35.863 "state": "online", 00:31:35.863 "raid_level": "raid5f", 00:31:35.863 "superblock": true, 00:31:35.863 "num_base_bdevs": 4, 00:31:35.863 "num_base_bdevs_discovered": 4, 00:31:35.863 "num_base_bdevs_operational": 4, 00:31:35.863 "base_bdevs_list": [ 00:31:35.863 { 00:31:35.863 "name": "BaseBdev1", 00:31:35.863 "uuid": "17b9575d-28c9-5336-8572-1a042cc2d115", 00:31:35.863 "is_configured": true, 00:31:35.863 "data_offset": 2048, 00:31:35.863 "data_size": 63488 00:31:35.863 }, 00:31:35.863 { 00:31:35.863 "name": "BaseBdev2", 00:31:35.863 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:35.863 "is_configured": true, 00:31:35.863 "data_offset": 2048, 00:31:35.863 "data_size": 63488 00:31:35.863 }, 00:31:35.863 { 00:31:35.863 "name": "BaseBdev3", 00:31:35.863 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:35.863 "is_configured": true, 00:31:35.863 "data_offset": 2048, 00:31:35.863 "data_size": 63488 00:31:35.863 }, 00:31:35.863 { 00:31:35.863 "name": "BaseBdev4", 00:31:35.863 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:35.863 "is_configured": true, 00:31:35.863 "data_offset": 2048, 00:31:35.863 "data_size": 63488 00:31:35.863 } 00:31:35.863 ] 00:31:35.863 }' 00:31:35.863 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:35.863 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.122 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:36.122 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:31:36.122 [2024-07-25 00:15:31.906654] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:36.122 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=190464 00:31:36.122 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.122 00:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:36.382 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:36.641 [2024-07-25 00:15:32.274628] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005c70 00:31:36.641 /dev/nbd0 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:36.641 1+0 records in 00:31:36.641 1+0 records out 00:31:36.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219575 s, 18.7 MB/s 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:36.641 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:31:36.642 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # write_unit_size=384 00:31:36.642 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # echo 192 00:31:36.642 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:31:37.210 496+0 records in 00:31:37.210 496+0 records out 00:31:37.210 97517568 bytes (98 MB, 93 MiB) copied, 0.511498 s, 191 MB/s 00:31:37.210 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:37.210 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:31:37.210 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:37.210 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:37.210 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:37.210 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:37.210 00:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:37.210 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:37.210 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:37.210 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:37.210 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:37.210 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:37.210 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:37.210 [2024-07-25 00:15:33.052310] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:37.210 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:37.210 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:37.210 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:37.469 [2024-07-25 00:15:33.275610] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.469 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.727 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:37.727 "name": "raid_bdev1", 00:31:37.727 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:37.727 "strip_size_kb": 64, 00:31:37.727 "state": "online", 
00:31:37.727 "raid_level": "raid5f", 00:31:37.727 "superblock": true, 00:31:37.727 "num_base_bdevs": 4, 00:31:37.727 "num_base_bdevs_discovered": 3, 00:31:37.727 "num_base_bdevs_operational": 3, 00:31:37.727 "base_bdevs_list": [ 00:31:37.727 { 00:31:37.727 "name": null, 00:31:37.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.727 "is_configured": false, 00:31:37.727 "data_offset": 2048, 00:31:37.727 "data_size": 63488 00:31:37.727 }, 00:31:37.727 { 00:31:37.727 "name": "BaseBdev2", 00:31:37.727 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:37.727 "is_configured": true, 00:31:37.727 "data_offset": 2048, 00:31:37.727 "data_size": 63488 00:31:37.727 }, 00:31:37.727 { 00:31:37.727 "name": "BaseBdev3", 00:31:37.727 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:37.727 "is_configured": true, 00:31:37.727 "data_offset": 2048, 00:31:37.727 "data_size": 63488 00:31:37.727 }, 00:31:37.727 { 00:31:37.727 "name": "BaseBdev4", 00:31:37.727 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:37.727 "is_configured": true, 00:31:37.727 "data_offset": 2048, 00:31:37.727 "data_size": 63488 00:31:37.727 } 00:31:37.727 ] 00:31:37.727 }' 00:31:37.727 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:37.727 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.985 00:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:38.243 [2024-07-25 00:15:34.003813] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:38.243 [2024-07-25 00:15:34.014035] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a570 00:31:38.243 [2024-07-25 00:15:34.021134] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:38.243 00:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:39.178 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:39.178 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:39.178 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:39.178 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:39.178 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:39.178 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.178 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.437 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:39.437 "name": "raid_bdev1", 00:31:39.437 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:39.437 "strip_size_kb": 64, 00:31:39.437 "state": "online", 00:31:39.437 "raid_level": "raid5f", 00:31:39.437 "superblock": true, 00:31:39.437 "num_base_bdevs": 4, 00:31:39.437 "num_base_bdevs_discovered": 4, 00:31:39.437 "num_base_bdevs_operational": 4, 00:31:39.437 "process": { 00:31:39.437 "type": "rebuild", 00:31:39.437 "target": "spare", 00:31:39.437 "progress": { 00:31:39.437 "blocks": 23040, 00:31:39.437 "percent": 
12 00:31:39.437 } 00:31:39.437 }, 00:31:39.437 "base_bdevs_list": [ 00:31:39.437 { 00:31:39.437 "name": "spare", 00:31:39.437 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:39.437 "is_configured": true, 00:31:39.437 "data_offset": 2048, 00:31:39.437 "data_size": 63488 00:31:39.437 }, 00:31:39.437 { 00:31:39.437 "name": "BaseBdev2", 00:31:39.437 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:39.437 "is_configured": true, 00:31:39.437 "data_offset": 2048, 00:31:39.437 "data_size": 63488 00:31:39.437 }, 00:31:39.437 { 00:31:39.437 "name": "BaseBdev3", 00:31:39.437 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:39.437 "is_configured": true, 00:31:39.437 "data_offset": 2048, 00:31:39.437 "data_size": 63488 00:31:39.437 }, 00:31:39.437 { 00:31:39.437 "name": "BaseBdev4", 00:31:39.437 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:39.437 "is_configured": true, 00:31:39.437 "data_offset": 2048, 00:31:39.437 "data_size": 63488 00:31:39.437 } 00:31:39.437 ] 00:31:39.437 }' 00:31:39.437 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:39.437 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:39.437 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:39.437 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:39.437 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:39.696 [2024-07-25 00:15:35.510444] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:39.696 [2024-07-25 00:15:35.531125] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:39.696 [2024-07-25 00:15:35.531238] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:39.696 [2024-07-25 00:15:35.531260] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:39.696 [2024-07-25 00:15:35.531272] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
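[editor note] The "percent" field in the rebuild-progress JSON above appears to be integer division of rebuilt blocks by the array size retrieved earlier (190464 blocks); every progress sample later in this run agrees with that reading. A one-line check, with both inputs copied from the log:

    blocks=23040         # "blocks" from the progress JSON above
    raid_size=190464     # raid_bdev_size retrieved earlier in this test
    echo $((blocks * 100 / raid_size))   # prints 12, matching "percent": 12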
00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:39.956 "name": "raid_bdev1", 00:31:39.956 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:39.956 "strip_size_kb": 64, 00:31:39.956 "state": "online", 00:31:39.956 "raid_level": "raid5f", 00:31:39.956 "superblock": true, 00:31:39.956 "num_base_bdevs": 4, 00:31:39.956 "num_base_bdevs_discovered": 3, 00:31:39.956 "num_base_bdevs_operational": 3, 00:31:39.956 "base_bdevs_list": [ 00:31:39.956 { 00:31:39.956 "name": null, 00:31:39.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.956 "is_configured": false, 00:31:39.956 "data_offset": 2048, 00:31:39.956 "data_size": 63488 00:31:39.956 }, 00:31:39.956 { 00:31:39.956 "name": "BaseBdev2", 00:31:39.956 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:39.956 "is_configured": true, 00:31:39.956 "data_offset": 2048, 00:31:39.956 "data_size": 63488 00:31:39.956 }, 00:31:39.956 { 00:31:39.956 "name": "BaseBdev3", 00:31:39.956 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:39.956 "is_configured": true, 00:31:39.956 "data_offset": 2048, 00:31:39.956 "data_size": 63488 00:31:39.956 }, 00:31:39.956 { 00:31:39.956 "name": "BaseBdev4", 00:31:39.956 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:39.956 "is_configured": true, 00:31:39.956 "data_offset": 2048, 00:31:39.956 "data_size": 63488 00:31:39.956 } 00:31:39.956 ] 00:31:39.956 }' 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:39.956 00:15:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.215 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:40.215 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:40.215 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:40.215 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:40.215 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:40.215 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.215 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.474 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:40.474 "name": "raid_bdev1", 00:31:40.474 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:40.474 "strip_size_kb": 64, 00:31:40.475 "state": "online", 00:31:40.475 "raid_level": "raid5f", 00:31:40.475 "superblock": true, 00:31:40.475 "num_base_bdevs": 4, 00:31:40.475 "num_base_bdevs_discovered": 3, 00:31:40.475 "num_base_bdevs_operational": 3, 00:31:40.475 "base_bdevs_list": [ 00:31:40.475 { 00:31:40.475 "name": null, 00:31:40.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.475 "is_configured": false, 00:31:40.475 "data_offset": 2048, 00:31:40.475 "data_size": 63488 00:31:40.475 }, 00:31:40.475 { 00:31:40.475 "name": "BaseBdev2", 00:31:40.475 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:40.475 "is_configured": true, 
00:31:40.475 "data_offset": 2048, 00:31:40.475 "data_size": 63488 00:31:40.475 }, 00:31:40.475 { 00:31:40.475 "name": "BaseBdev3", 00:31:40.475 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:40.475 "is_configured": true, 00:31:40.475 "data_offset": 2048, 00:31:40.475 "data_size": 63488 00:31:40.475 }, 00:31:40.475 { 00:31:40.475 "name": "BaseBdev4", 00:31:40.475 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:40.475 "is_configured": true, 00:31:40.475 "data_offset": 2048, 00:31:40.475 "data_size": 63488 00:31:40.475 } 00:31:40.475 ] 00:31:40.475 }' 00:31:40.475 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:40.475 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:40.475 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:40.475 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:40.475 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:40.734 [2024-07-25 00:15:36.451540] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:40.734 [2024-07-25 00:15:36.460964] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a640 00:31:40.734 [2024-07-25 00:15:36.467586] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:40.734 00:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:31:41.671 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:41.671 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:41.671 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:41.671 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:41.671 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:41.671 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.671 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.930 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:41.930 "name": "raid_bdev1", 00:31:41.930 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:41.930 "strip_size_kb": 64, 00:31:41.930 "state": "online", 00:31:41.930 "raid_level": "raid5f", 00:31:41.930 "superblock": true, 00:31:41.930 "num_base_bdevs": 4, 00:31:41.930 "num_base_bdevs_discovered": 4, 00:31:41.931 "num_base_bdevs_operational": 4, 00:31:41.931 "process": { 00:31:41.931 "type": "rebuild", 00:31:41.931 "target": "spare", 00:31:41.931 "progress": { 00:31:41.931 "blocks": 23040, 00:31:41.931 "percent": 12 00:31:41.931 } 00:31:41.931 }, 00:31:41.931 "base_bdevs_list": [ 00:31:41.931 { 00:31:41.931 "name": "spare", 00:31:41.931 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:41.931 "is_configured": true, 00:31:41.931 "data_offset": 2048, 00:31:41.931 "data_size": 63488 00:31:41.931 }, 00:31:41.931 { 00:31:41.931 "name": "BaseBdev2", 
00:31:41.931 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:41.931 "is_configured": true, 00:31:41.931 "data_offset": 2048, 00:31:41.931 "data_size": 63488 00:31:41.931 }, 00:31:41.931 { 00:31:41.931 "name": "BaseBdev3", 00:31:41.931 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:41.931 "is_configured": true, 00:31:41.931 "data_offset": 2048, 00:31:41.931 "data_size": 63488 00:31:41.931 }, 00:31:41.931 { 00:31:41.931 "name": "BaseBdev4", 00:31:41.931 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:41.931 "is_configured": true, 00:31:41.931 "data_offset": 2048, 00:31:41.931 "data_size": 63488 00:31:41.931 } 00:31:41.931 ] 00:31:41.931 }' 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:31:41.931 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=1127 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.931 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.190 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:42.190 "name": "raid_bdev1", 00:31:42.190 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:42.190 "strip_size_kb": 64, 00:31:42.190 "state": "online", 00:31:42.190 "raid_level": "raid5f", 00:31:42.190 "superblock": true, 00:31:42.190 "num_base_bdevs": 4, 00:31:42.190 "num_base_bdevs_discovered": 4, 00:31:42.190 "num_base_bdevs_operational": 4, 00:31:42.190 "process": { 00:31:42.190 "type": "rebuild", 00:31:42.190 "target": "spare", 00:31:42.190 "progress": { 00:31:42.190 "blocks": 26880, 00:31:42.190 "percent": 14 00:31:42.190 } 00:31:42.190 }, 00:31:42.190 "base_bdevs_list": [ 00:31:42.190 { 00:31:42.190 "name": "spare", 00:31:42.190 "uuid": 
"f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:42.190 "is_configured": true, 00:31:42.190 "data_offset": 2048, 00:31:42.190 "data_size": 63488 00:31:42.190 }, 00:31:42.190 { 00:31:42.190 "name": "BaseBdev2", 00:31:42.190 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:42.190 "is_configured": true, 00:31:42.190 "data_offset": 2048, 00:31:42.190 "data_size": 63488 00:31:42.190 }, 00:31:42.190 { 00:31:42.190 "name": "BaseBdev3", 00:31:42.190 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:42.190 "is_configured": true, 00:31:42.190 "data_offset": 2048, 00:31:42.190 "data_size": 63488 00:31:42.190 }, 00:31:42.190 { 00:31:42.190 "name": "BaseBdev4", 00:31:42.190 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:42.190 "is_configured": true, 00:31:42.190 "data_offset": 2048, 00:31:42.190 "data_size": 63488 00:31:42.190 } 00:31:42.190 ] 00:31:42.190 }' 00:31:42.190 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:42.190 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:42.190 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:42.190 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:42.190 00:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:43.567 00:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:43.567 00:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:43.567 00:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:43.567 00:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:43.567 00:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:43.567 00:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:43.567 00:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.567 00:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.567 00:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:43.567 "name": "raid_bdev1", 00:31:43.567 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:43.567 "strip_size_kb": 64, 00:31:43.567 "state": "online", 00:31:43.567 "raid_level": "raid5f", 00:31:43.567 "superblock": true, 00:31:43.567 "num_base_bdevs": 4, 00:31:43.567 "num_base_bdevs_discovered": 4, 00:31:43.567 "num_base_bdevs_operational": 4, 00:31:43.567 "process": { 00:31:43.567 "type": "rebuild", 00:31:43.567 "target": "spare", 00:31:43.567 "progress": { 00:31:43.567 "blocks": 51840, 00:31:43.567 "percent": 27 00:31:43.567 } 00:31:43.567 }, 00:31:43.567 "base_bdevs_list": [ 00:31:43.567 { 00:31:43.567 "name": "spare", 00:31:43.567 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:43.567 "is_configured": true, 00:31:43.567 "data_offset": 2048, 00:31:43.567 "data_size": 63488 00:31:43.567 }, 00:31:43.567 { 00:31:43.567 "name": "BaseBdev2", 00:31:43.567 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:43.567 "is_configured": true, 00:31:43.567 "data_offset": 2048, 00:31:43.567 "data_size": 
63488 00:31:43.567 }, 00:31:43.567 { 00:31:43.567 "name": "BaseBdev3", 00:31:43.567 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:43.567 "is_configured": true, 00:31:43.567 "data_offset": 2048, 00:31:43.567 "data_size": 63488 00:31:43.567 }, 00:31:43.567 { 00:31:43.567 "name": "BaseBdev4", 00:31:43.567 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:43.567 "is_configured": true, 00:31:43.567 "data_offset": 2048, 00:31:43.567 "data_size": 63488 00:31:43.567 } 00:31:43.567 ] 00:31:43.567 }' 00:31:43.567 00:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:43.567 00:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:43.567 00:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:43.567 00:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:43.567 00:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:44.505 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:44.505 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:44.505 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:44.505 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:44.505 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:44.505 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:44.505 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.505 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.764 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:44.764 "name": "raid_bdev1", 00:31:44.764 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:44.764 "strip_size_kb": 64, 00:31:44.764 "state": "online", 00:31:44.764 "raid_level": "raid5f", 00:31:44.764 "superblock": true, 00:31:44.764 "num_base_bdevs": 4, 00:31:44.764 "num_base_bdevs_discovered": 4, 00:31:44.764 "num_base_bdevs_operational": 4, 00:31:44.764 "process": { 00:31:44.764 "type": "rebuild", 00:31:44.764 "target": "spare", 00:31:44.764 "progress": { 00:31:44.764 "blocks": 74880, 00:31:44.764 "percent": 39 00:31:44.764 } 00:31:44.764 }, 00:31:44.764 "base_bdevs_list": [ 00:31:44.764 { 00:31:44.764 "name": "spare", 00:31:44.764 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:44.764 "is_configured": true, 00:31:44.764 "data_offset": 2048, 00:31:44.764 "data_size": 63488 00:31:44.764 }, 00:31:44.764 { 00:31:44.764 "name": "BaseBdev2", 00:31:44.764 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:44.764 "is_configured": true, 00:31:44.764 "data_offset": 2048, 00:31:44.764 "data_size": 63488 00:31:44.764 }, 00:31:44.764 { 00:31:44.764 "name": "BaseBdev3", 00:31:44.764 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:44.764 "is_configured": true, 00:31:44.764 "data_offset": 2048, 00:31:44.764 "data_size": 63488 00:31:44.764 }, 00:31:44.764 { 00:31:44.764 "name": "BaseBdev4", 00:31:44.764 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 
00:31:44.764 "is_configured": true, 00:31:44.764 "data_offset": 2048, 00:31:44.764 "data_size": 63488 00:31:44.764 } 00:31:44.764 ] 00:31:44.764 }' 00:31:44.764 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:44.764 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:44.764 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:44.764 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:44.764 00:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:45.701 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:45.701 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:45.701 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:45.701 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:45.701 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:45.701 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:45.701 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.701 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.960 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:45.960 "name": "raid_bdev1", 00:31:45.960 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:45.960 "strip_size_kb": 64, 00:31:45.960 "state": "online", 00:31:45.960 "raid_level": "raid5f", 00:31:45.960 "superblock": true, 00:31:45.960 "num_base_bdevs": 4, 00:31:45.960 "num_base_bdevs_discovered": 4, 00:31:45.960 "num_base_bdevs_operational": 4, 00:31:45.960 "process": { 00:31:45.960 "type": "rebuild", 00:31:45.960 "target": "spare", 00:31:45.960 "progress": { 00:31:45.960 "blocks": 99840, 00:31:45.960 "percent": 52 00:31:45.960 } 00:31:45.960 }, 00:31:45.960 "base_bdevs_list": [ 00:31:45.960 { 00:31:45.960 "name": "spare", 00:31:45.960 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:45.960 "is_configured": true, 00:31:45.960 "data_offset": 2048, 00:31:45.960 "data_size": 63488 00:31:45.960 }, 00:31:45.960 { 00:31:45.960 "name": "BaseBdev2", 00:31:45.960 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:45.960 "is_configured": true, 00:31:45.960 "data_offset": 2048, 00:31:45.960 "data_size": 63488 00:31:45.960 }, 00:31:45.960 { 00:31:45.960 "name": "BaseBdev3", 00:31:45.960 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:45.960 "is_configured": true, 00:31:45.960 "data_offset": 2048, 00:31:45.960 "data_size": 63488 00:31:45.960 }, 00:31:45.960 { 00:31:45.960 "name": "BaseBdev4", 00:31:45.960 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:45.960 "is_configured": true, 00:31:45.960 "data_offset": 2048, 00:31:45.960 "data_size": 63488 00:31:45.960 } 00:31:45.960 ] 00:31:45.960 }' 00:31:45.960 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:45.960 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:31:45.960 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:45.960 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:45.960 00:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:46.897 00:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:46.897 00:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:46.897 00:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:46.897 00:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:46.897 00:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:46.897 00:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:46.897 00:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:46.897 00:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.157 00:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:47.157 "name": "raid_bdev1", 00:31:47.157 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:47.157 "strip_size_kb": 64, 00:31:47.157 "state": "online", 00:31:47.157 "raid_level": "raid5f", 00:31:47.157 "superblock": true, 00:31:47.157 "num_base_bdevs": 4, 00:31:47.157 "num_base_bdevs_discovered": 4, 00:31:47.157 "num_base_bdevs_operational": 4, 00:31:47.157 "process": { 00:31:47.157 "type": "rebuild", 00:31:47.157 "target": "spare", 00:31:47.157 "progress": { 00:31:47.157 "blocks": 122880, 00:31:47.157 "percent": 64 00:31:47.157 } 00:31:47.157 }, 00:31:47.157 "base_bdevs_list": [ 00:31:47.157 { 00:31:47.157 "name": "spare", 00:31:47.157 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:47.157 "is_configured": true, 00:31:47.157 "data_offset": 2048, 00:31:47.157 "data_size": 63488 00:31:47.157 }, 00:31:47.157 { 00:31:47.157 "name": "BaseBdev2", 00:31:47.157 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:47.157 "is_configured": true, 00:31:47.157 "data_offset": 2048, 00:31:47.157 "data_size": 63488 00:31:47.157 }, 00:31:47.157 { 00:31:47.157 "name": "BaseBdev3", 00:31:47.157 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:47.157 "is_configured": true, 00:31:47.157 "data_offset": 2048, 00:31:47.157 "data_size": 63488 00:31:47.157 }, 00:31:47.157 { 00:31:47.157 "name": "BaseBdev4", 00:31:47.157 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:47.157 "is_configured": true, 00:31:47.157 "data_offset": 2048, 00:31:47.157 "data_size": 63488 00:31:47.157 } 00:31:47.157 ] 00:31:47.157 }' 00:31:47.157 00:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:47.157 00:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:47.157 00:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:47.157 00:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:47.157 00:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:48.533 00:15:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:48.533 "name": "raid_bdev1", 00:31:48.533 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:48.533 "strip_size_kb": 64, 00:31:48.533 "state": "online", 00:31:48.533 "raid_level": "raid5f", 00:31:48.533 "superblock": true, 00:31:48.533 "num_base_bdevs": 4, 00:31:48.533 "num_base_bdevs_discovered": 4, 00:31:48.533 "num_base_bdevs_operational": 4, 00:31:48.533 "process": { 00:31:48.533 "type": "rebuild", 00:31:48.533 "target": "spare", 00:31:48.533 "progress": { 00:31:48.533 "blocks": 147840, 00:31:48.533 "percent": 77 00:31:48.533 } 00:31:48.533 }, 00:31:48.533 "base_bdevs_list": [ 00:31:48.533 { 00:31:48.533 "name": "spare", 00:31:48.533 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:48.533 "is_configured": true, 00:31:48.533 "data_offset": 2048, 00:31:48.533 "data_size": 63488 00:31:48.533 }, 00:31:48.533 { 00:31:48.533 "name": "BaseBdev2", 00:31:48.533 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:48.533 "is_configured": true, 00:31:48.533 "data_offset": 2048, 00:31:48.533 "data_size": 63488 00:31:48.533 }, 00:31:48.533 { 00:31:48.533 "name": "BaseBdev3", 00:31:48.533 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:48.533 "is_configured": true, 00:31:48.533 "data_offset": 2048, 00:31:48.533 "data_size": 63488 00:31:48.533 }, 00:31:48.533 { 00:31:48.533 "name": "BaseBdev4", 00:31:48.533 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:48.533 "is_configured": true, 00:31:48.533 "data_offset": 2048, 00:31:48.533 "data_size": 63488 00:31:48.533 } 00:31:48.533 ] 00:31:48.533 }' 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:48.533 00:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:49.471 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:49.471 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:49.471 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:49.471 00:15:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:49.471 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:49.471 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:49.471 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.471 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:49.730 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:49.730 "name": "raid_bdev1", 00:31:49.730 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:49.730 "strip_size_kb": 64, 00:31:49.730 "state": "online", 00:31:49.730 "raid_level": "raid5f", 00:31:49.730 "superblock": true, 00:31:49.730 "num_base_bdevs": 4, 00:31:49.730 "num_base_bdevs_discovered": 4, 00:31:49.730 "num_base_bdevs_operational": 4, 00:31:49.730 "process": { 00:31:49.730 "type": "rebuild", 00:31:49.730 "target": "spare", 00:31:49.730 "progress": { 00:31:49.730 "blocks": 170880, 00:31:49.730 "percent": 89 00:31:49.730 } 00:31:49.730 }, 00:31:49.730 "base_bdevs_list": [ 00:31:49.730 { 00:31:49.730 "name": "spare", 00:31:49.730 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:49.730 "is_configured": true, 00:31:49.730 "data_offset": 2048, 00:31:49.730 "data_size": 63488 00:31:49.730 }, 00:31:49.730 { 00:31:49.730 "name": "BaseBdev2", 00:31:49.730 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:49.730 "is_configured": true, 00:31:49.730 "data_offset": 2048, 00:31:49.730 "data_size": 63488 00:31:49.730 }, 00:31:49.730 { 00:31:49.730 "name": "BaseBdev3", 00:31:49.730 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:49.730 "is_configured": true, 00:31:49.730 "data_offset": 2048, 00:31:49.730 "data_size": 63488 00:31:49.730 }, 00:31:49.730 { 00:31:49.730 "name": "BaseBdev4", 00:31:49.730 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:49.730 "is_configured": true, 00:31:49.730 "data_offset": 2048, 00:31:49.730 "data_size": 63488 00:31:49.730 } 00:31:49.730 ] 00:31:49.730 }' 00:31:49.730 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:49.730 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:49.730 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:49.730 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:49.730 00:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:50.667 [2024-07-25 00:15:46.533113] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:50.667 [2024-07-25 00:15:46.533223] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:50.667 [2024-07-25 00:15:46.533375] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:50.926 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:50.926 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:50.926 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
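[editor note] One entry worth flagging from earlier in this trace: bdev_raid.sh line 681 executed '[' = false ']' and bash reported "[: =: unary operator expected". That is the standard symptom of an unquoted variable expanding to nothing inside a single-bracket test. A minimal illustration of the failure mode and the usual fix; the variable name below is hypothetical, since the real one is not visible in the log:

    background_target=""                        # hypothetical stand-in; empty, as the trace implies
    # [ $background_target = false ]            # expands to '[ = false ]' -> unary operator expected
    if [ "$background_target" = false ]; then   # quoting keeps the empty operand in place
        echo "background I/O disabled"
    fi
    # [[ $background_target = false ]] would also be safe: [[ ]] does not word-split.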
00:31:50.926 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:50.926 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:50.926 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:50.926 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:50.926 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:51.191 "name": "raid_bdev1", 00:31:51.191 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:51.191 "strip_size_kb": 64, 00:31:51.191 "state": "online", 00:31:51.191 "raid_level": "raid5f", 00:31:51.191 "superblock": true, 00:31:51.191 "num_base_bdevs": 4, 00:31:51.191 "num_base_bdevs_discovered": 4, 00:31:51.191 "num_base_bdevs_operational": 4, 00:31:51.191 "base_bdevs_list": [ 00:31:51.191 { 00:31:51.191 "name": "spare", 00:31:51.191 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:51.191 "is_configured": true, 00:31:51.191 "data_offset": 2048, 00:31:51.191 "data_size": 63488 00:31:51.191 }, 00:31:51.191 { 00:31:51.191 "name": "BaseBdev2", 00:31:51.191 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:51.191 "is_configured": true, 00:31:51.191 "data_offset": 2048, 00:31:51.191 "data_size": 63488 00:31:51.191 }, 00:31:51.191 { 00:31:51.191 "name": "BaseBdev3", 00:31:51.191 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:51.191 "is_configured": true, 00:31:51.191 "data_offset": 2048, 00:31:51.191 "data_size": 63488 00:31:51.191 }, 00:31:51.191 { 00:31:51.191 "name": "BaseBdev4", 00:31:51.191 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:51.191 "is_configured": true, 00:31:51.191 "data_offset": 2048, 00:31:51.191 "data_size": 63488 00:31:51.191 } 00:31:51.191 ] 00:31:51.191 }' 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.191 00:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:51.191 "name": "raid_bdev1", 00:31:51.191 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:51.191 "strip_size_kb": 64, 00:31:51.191 "state": "online", 00:31:51.191 "raid_level": "raid5f", 00:31:51.191 "superblock": true, 00:31:51.191 "num_base_bdevs": 4, 00:31:51.191 "num_base_bdevs_discovered": 4, 00:31:51.191 "num_base_bdevs_operational": 4, 00:31:51.191 "base_bdevs_list": [ 00:31:51.191 { 00:31:51.191 "name": "spare", 00:31:51.191 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:51.191 "is_configured": true, 00:31:51.191 "data_offset": 2048, 00:31:51.191 "data_size": 63488 00:31:51.191 }, 00:31:51.191 { 00:31:51.191 "name": "BaseBdev2", 00:31:51.191 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:51.191 "is_configured": true, 00:31:51.191 "data_offset": 2048, 00:31:51.191 "data_size": 63488 00:31:51.191 }, 00:31:51.191 { 00:31:51.191 "name": "BaseBdev3", 00:31:51.191 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:51.191 "is_configured": true, 00:31:51.191 "data_offset": 2048, 00:31:51.191 "data_size": 63488 00:31:51.191 }, 00:31:51.191 { 00:31:51.191 "name": "BaseBdev4", 00:31:51.191 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:51.191 "is_configured": true, 00:31:51.191 "data_offset": 2048, 00:31:51.191 "data_size": 63488 00:31:51.191 } 00:31:51.191 ] 00:31:51.191 }' 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.191 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.473 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:51.473 "name": "raid_bdev1", 00:31:51.473 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:51.473 "strip_size_kb": 64, 00:31:51.473 "state": "online", 00:31:51.473 "raid_level": 
"raid5f", 00:31:51.473 "superblock": true, 00:31:51.473 "num_base_bdevs": 4, 00:31:51.473 "num_base_bdevs_discovered": 4, 00:31:51.473 "num_base_bdevs_operational": 4, 00:31:51.473 "base_bdevs_list": [ 00:31:51.473 { 00:31:51.473 "name": "spare", 00:31:51.473 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:51.473 "is_configured": true, 00:31:51.473 "data_offset": 2048, 00:31:51.473 "data_size": 63488 00:31:51.473 }, 00:31:51.473 { 00:31:51.473 "name": "BaseBdev2", 00:31:51.473 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:51.473 "is_configured": true, 00:31:51.473 "data_offset": 2048, 00:31:51.473 "data_size": 63488 00:31:51.473 }, 00:31:51.473 { 00:31:51.473 "name": "BaseBdev3", 00:31:51.473 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:51.473 "is_configured": true, 00:31:51.473 "data_offset": 2048, 00:31:51.473 "data_size": 63488 00:31:51.473 }, 00:31:51.473 { 00:31:51.473 "name": "BaseBdev4", 00:31:51.473 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:51.473 "is_configured": true, 00:31:51.473 "data_offset": 2048, 00:31:51.473 "data_size": 63488 00:31:51.473 } 00:31:51.473 ] 00:31:51.473 }' 00:31:51.473 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:51.473 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.750 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:52.009 [2024-07-25 00:15:47.750027] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:52.009 [2024-07-25 00:15:47.750084] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:52.009 [2024-07-25 00:15:47.750161] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:52.009 [2024-07-25 00:15:47.750278] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:52.009 [2024-07-25 00:15:47.750309] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:31:52.009 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:31:52.009 00:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:52.268 00:15:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:52.268 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:52.526 /dev/nbd0 00:31:52.526 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:52.527 1+0 records in 00:31:52.527 1+0 records out 00:31:52.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467125 s, 8.8 MB/s 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:52.527 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:52.784 /dev/nbd1 00:31:52.784 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:52.785 00:15:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:52.785 1+0 records in 00:31:52.785 1+0 records out 00:31:52.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285657 s, 14.3 MB/s 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:52.785 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:53.042 00:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:53.300 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:53.300 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:53.300 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:53.300 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:53.300 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:53.300 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:53.300 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:53.300 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:53.300 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:31:53.300 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:53.557 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:53.816 [2024-07-25 00:15:49.565707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:53.816 [2024-07-25 00:15:49.565782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:53.816 [2024-07-25 00:15:49.565822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:31:53.816 [2024-07-25 00:15:49.565837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:53.816 [2024-07-25 00:15:49.568199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:53.816 [2024-07-25 00:15:49.568239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:53.816 [2024-07-25 00:15:49.568339] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:53.816 [2024-07-25 00:15:49.568388] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:53.816 [2024-07-25 00:15:49.568553] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:53.816 [2024-07-25 00:15:49.568661] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:53.816 [2024-07-25 00:15:49.568755] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:53.816 spare 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.816 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:53.816 [2024-07-25 00:15:49.668869] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:31:53.816 [2024-07-25 00:15:49.668918] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:53.816 [2024-07-25 00:15:49.669028] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000048cf0 00:31:53.816 [2024-07-25 00:15:49.674276] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:31:53.816 [2024-07-25 00:15:49.674302] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:31:53.816 [2024-07-25 00:15:49.674474] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:54.074 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:54.074 "name": "raid_bdev1", 00:31:54.074 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:54.074 "strip_size_kb": 64, 00:31:54.074 "state": "online", 00:31:54.074 "raid_level": "raid5f", 00:31:54.074 "superblock": true, 00:31:54.074 "num_base_bdevs": 4, 00:31:54.074 "num_base_bdevs_discovered": 4, 00:31:54.074 "num_base_bdevs_operational": 4, 00:31:54.074 "base_bdevs_list": [ 00:31:54.074 { 00:31:54.074 "name": "spare", 00:31:54.074 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:54.075 "is_configured": true, 00:31:54.075 "data_offset": 2048, 00:31:54.075 "data_size": 63488 00:31:54.075 }, 00:31:54.075 { 00:31:54.075 "name": "BaseBdev2", 00:31:54.075 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:54.075 "is_configured": true, 00:31:54.075 "data_offset": 2048, 00:31:54.075 "data_size": 63488 00:31:54.075 }, 00:31:54.075 { 00:31:54.075 "name": "BaseBdev3", 00:31:54.075 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:54.075 "is_configured": true, 00:31:54.075 "data_offset": 2048, 00:31:54.075 "data_size": 63488 00:31:54.075 }, 00:31:54.075 { 00:31:54.075 "name": "BaseBdev4", 00:31:54.075 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:54.075 "is_configured": true, 00:31:54.075 "data_offset": 2048, 00:31:54.075 "data_size": 63488 00:31:54.075 } 00:31:54.075 ] 00:31:54.075 }' 00:31:54.075 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:54.075 00:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.333 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:54.333 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:54.333 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:54.333 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:54.333 00:15:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:54.333 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.333 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.591 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:54.591 "name": "raid_bdev1", 00:31:54.591 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:54.591 "strip_size_kb": 64, 00:31:54.592 "state": "online", 00:31:54.592 "raid_level": "raid5f", 00:31:54.592 "superblock": true, 00:31:54.592 "num_base_bdevs": 4, 00:31:54.592 "num_base_bdevs_discovered": 4, 00:31:54.592 "num_base_bdevs_operational": 4, 00:31:54.592 "base_bdevs_list": [ 00:31:54.592 { 00:31:54.592 "name": "spare", 00:31:54.592 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:54.592 "is_configured": true, 00:31:54.592 "data_offset": 2048, 00:31:54.592 "data_size": 63488 00:31:54.592 }, 00:31:54.592 { 00:31:54.592 "name": "BaseBdev2", 00:31:54.592 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:54.592 "is_configured": true, 00:31:54.592 "data_offset": 2048, 00:31:54.592 "data_size": 63488 00:31:54.592 }, 00:31:54.592 { 00:31:54.592 "name": "BaseBdev3", 00:31:54.592 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:54.592 "is_configured": true, 00:31:54.592 "data_offset": 2048, 00:31:54.592 "data_size": 63488 00:31:54.592 }, 00:31:54.592 { 00:31:54.592 "name": "BaseBdev4", 00:31:54.592 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:54.592 "is_configured": true, 00:31:54.592 "data_offset": 2048, 00:31:54.592 "data_size": 63488 00:31:54.592 } 00:31:54.592 ] 00:31:54.592 }' 00:31:54.592 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:54.592 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:54.592 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:54.592 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:54.592 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.592 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:54.851 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:31:54.851 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:55.109 [2024-07-25 00:15:50.924367] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.109 00:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.367 00:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:55.367 "name": "raid_bdev1", 00:31:55.367 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:55.367 "strip_size_kb": 64, 00:31:55.367 "state": "online", 00:31:55.367 "raid_level": "raid5f", 00:31:55.367 "superblock": true, 00:31:55.367 "num_base_bdevs": 4, 00:31:55.367 "num_base_bdevs_discovered": 3, 00:31:55.367 "num_base_bdevs_operational": 3, 00:31:55.367 "base_bdevs_list": [ 00:31:55.367 { 00:31:55.367 "name": null, 00:31:55.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.367 "is_configured": false, 00:31:55.367 "data_offset": 2048, 00:31:55.367 "data_size": 63488 00:31:55.367 }, 00:31:55.367 { 00:31:55.367 "name": "BaseBdev2", 00:31:55.367 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:55.367 "is_configured": true, 00:31:55.367 "data_offset": 2048, 00:31:55.367 "data_size": 63488 00:31:55.367 }, 00:31:55.367 { 00:31:55.367 "name": "BaseBdev3", 00:31:55.367 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:55.367 "is_configured": true, 00:31:55.367 "data_offset": 2048, 00:31:55.367 "data_size": 63488 00:31:55.367 }, 00:31:55.367 { 00:31:55.367 "name": "BaseBdev4", 00:31:55.367 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:55.367 "is_configured": true, 00:31:55.367 "data_offset": 2048, 00:31:55.367 "data_size": 63488 00:31:55.367 } 00:31:55.367 ] 00:31:55.367 }' 00:31:55.367 00:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:55.367 00:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.625 00:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:55.883 [2024-07-25 00:15:51.696651] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:55.883 [2024-07-25 00:15:51.696915] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:55.883 [2024-07-25 00:15:51.696962] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
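The remove-and-re-add cycle just traced (bdev_raid.sh@768 and @770) comes down to two RPC calls, both of which appear verbatim above. Per the NOTICE entries, the superblock sequence number on spare (4) is older than the array's (5), which is why the re-add path schedules a rebuild of that slot rather than a plain attach. A minimal sketch under those assumptions:

    # Both RPC names appear verbatim in the trace; the comments summarize the logged effects.
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc bdev_raid_remove_base_bdev spare           # array degrades to 3 of 4 base bdevs, slot goes null
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare   # stale sb seq_number, so the slot is rebuilt
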
00:31:55.883 [2024-07-25 00:15:51.697003] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:55.883 [2024-07-25 00:15:51.707586] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000048dc0 00:31:55.883 [2024-07-25 00:15:51.714637] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:55.883 00:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:57.258 "name": "raid_bdev1", 00:31:57.258 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:57.258 "strip_size_kb": 64, 00:31:57.258 "state": "online", 00:31:57.258 "raid_level": "raid5f", 00:31:57.258 "superblock": true, 00:31:57.258 "num_base_bdevs": 4, 00:31:57.258 "num_base_bdevs_discovered": 4, 00:31:57.258 "num_base_bdevs_operational": 4, 00:31:57.258 "process": { 00:31:57.258 "type": "rebuild", 00:31:57.258 "target": "spare", 00:31:57.258 "progress": { 00:31:57.258 "blocks": 21120, 00:31:57.258 "percent": 11 00:31:57.258 } 00:31:57.258 }, 00:31:57.258 "base_bdevs_list": [ 00:31:57.258 { 00:31:57.258 "name": "spare", 00:31:57.258 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:57.258 "is_configured": true, 00:31:57.258 "data_offset": 2048, 00:31:57.258 "data_size": 63488 00:31:57.258 }, 00:31:57.258 { 00:31:57.258 "name": "BaseBdev2", 00:31:57.258 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:57.258 "is_configured": true, 00:31:57.258 "data_offset": 2048, 00:31:57.258 "data_size": 63488 00:31:57.258 }, 00:31:57.258 { 00:31:57.258 "name": "BaseBdev3", 00:31:57.258 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:57.258 "is_configured": true, 00:31:57.258 "data_offset": 2048, 00:31:57.258 "data_size": 63488 00:31:57.258 }, 00:31:57.258 { 00:31:57.258 "name": "BaseBdev4", 00:31:57.258 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:57.258 "is_configured": true, 00:31:57.258 "data_offset": 2048, 00:31:57.258 "data_size": 63488 00:31:57.258 } 00:31:57.258 ] 00:31:57.258 }' 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:57.258 00:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:57.517 [2024-07-25 00:15:53.176466] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:57.517 [2024-07-25 00:15:53.225275] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:57.517 [2024-07-25 00:15:53.225372] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:57.517 [2024-07-25 00:15:53.225394] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:57.518 [2024-07-25 00:15:53.225405] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.518 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.776 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:57.776 "name": "raid_bdev1", 00:31:57.776 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:57.776 "strip_size_kb": 64, 00:31:57.776 "state": "online", 00:31:57.776 "raid_level": "raid5f", 00:31:57.776 "superblock": true, 00:31:57.776 "num_base_bdevs": 4, 00:31:57.776 "num_base_bdevs_discovered": 3, 00:31:57.776 "num_base_bdevs_operational": 3, 00:31:57.776 "base_bdevs_list": [ 00:31:57.776 { 00:31:57.776 "name": null, 00:31:57.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.776 "is_configured": false, 00:31:57.776 "data_offset": 2048, 00:31:57.776 "data_size": 63488 00:31:57.776 }, 00:31:57.776 { 00:31:57.776 "name": "BaseBdev2", 00:31:57.776 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:57.776 "is_configured": true, 00:31:57.776 "data_offset": 2048, 00:31:57.776 "data_size": 63488 00:31:57.776 }, 00:31:57.776 { 00:31:57.776 "name": "BaseBdev3", 00:31:57.776 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:57.776 "is_configured": true, 00:31:57.776 "data_offset": 2048, 00:31:57.776 "data_size": 63488 00:31:57.776 }, 00:31:57.776 { 00:31:57.776 "name": "BaseBdev4", 00:31:57.776 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:57.776 "is_configured": true, 00:31:57.776 "data_offset": 2048, 00:31:57.776 "data_size": 63488 
00:31:57.776 } 00:31:57.776 ] 00:31:57.776 }' 00:31:57.776 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:57.776 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.034 00:15:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:58.292 [2024-07-25 00:15:54.005545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:58.292 [2024-07-25 00:15:54.005625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:58.292 [2024-07-25 00:15:54.005656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:31:58.292 [2024-07-25 00:15:54.005671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:58.292 [2024-07-25 00:15:54.006178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:58.292 [2024-07-25 00:15:54.006215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:58.292 [2024-07-25 00:15:54.006314] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:58.292 [2024-07-25 00:15:54.006334] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:58.292 [2024-07-25 00:15:54.006345] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:58.292 [2024-07-25 00:15:54.006378] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:58.292 [2024-07-25 00:15:54.015743] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000048e90 00:31:58.292 spare 00:31:58.292 [2024-07-25 00:15:54.022341] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:58.292 00:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:31:59.227 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:59.227 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:59.227 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:59.227 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:59.227 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:59.227 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.227 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.486 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:59.486 "name": "raid_bdev1", 00:31:59.486 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:31:59.486 "strip_size_kb": 64, 00:31:59.486 "state": "online", 00:31:59.486 "raid_level": "raid5f", 00:31:59.486 "superblock": true, 00:31:59.486 "num_base_bdevs": 4, 00:31:59.486 "num_base_bdevs_discovered": 4, 00:31:59.486 "num_base_bdevs_operational": 4, 00:31:59.486 "process": { 00:31:59.486 "type": "rebuild", 00:31:59.486 "target": "spare", 
00:31:59.486 "progress": { 00:31:59.486 "blocks": 23040, 00:31:59.486 "percent": 12 00:31:59.486 } 00:31:59.486 }, 00:31:59.486 "base_bdevs_list": [ 00:31:59.486 { 00:31:59.486 "name": "spare", 00:31:59.486 "uuid": "f69385fc-8311-57aa-a5e6-5f41f1f14f1b", 00:31:59.486 "is_configured": true, 00:31:59.486 "data_offset": 2048, 00:31:59.486 "data_size": 63488 00:31:59.486 }, 00:31:59.486 { 00:31:59.486 "name": "BaseBdev2", 00:31:59.486 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:31:59.486 "is_configured": true, 00:31:59.486 "data_offset": 2048, 00:31:59.486 "data_size": 63488 00:31:59.486 }, 00:31:59.486 { 00:31:59.486 "name": "BaseBdev3", 00:31:59.486 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:31:59.486 "is_configured": true, 00:31:59.486 "data_offset": 2048, 00:31:59.486 "data_size": 63488 00:31:59.486 }, 00:31:59.486 { 00:31:59.486 "name": "BaseBdev4", 00:31:59.486 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:31:59.486 "is_configured": true, 00:31:59.486 "data_offset": 2048, 00:31:59.486 "data_size": 63488 00:31:59.486 } 00:31:59.486 ] 00:31:59.486 }' 00:31:59.486 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:59.486 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:59.486 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:59.486 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:59.486 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:59.745 [2024-07-25 00:15:55.527605] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:59.745 [2024-07-25 00:15:55.532370] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:59.745 [2024-07-25 00:15:55.532557] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:59.745 [2024-07-25 00:15:55.532589] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:59.745 [2024-07-25 00:15:55.532600] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.745 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.004 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:00.004 "name": "raid_bdev1", 00:32:00.004 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:32:00.004 "strip_size_kb": 64, 00:32:00.004 "state": "online", 00:32:00.004 "raid_level": "raid5f", 00:32:00.004 "superblock": true, 00:32:00.004 "num_base_bdevs": 4, 00:32:00.004 "num_base_bdevs_discovered": 3, 00:32:00.004 "num_base_bdevs_operational": 3, 00:32:00.004 "base_bdevs_list": [ 00:32:00.004 { 00:32:00.004 "name": null, 00:32:00.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.004 "is_configured": false, 00:32:00.004 "data_offset": 2048, 00:32:00.004 "data_size": 63488 00:32:00.004 }, 00:32:00.004 { 00:32:00.004 "name": "BaseBdev2", 00:32:00.004 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:32:00.004 "is_configured": true, 00:32:00.004 "data_offset": 2048, 00:32:00.004 "data_size": 63488 00:32:00.004 }, 00:32:00.004 { 00:32:00.004 "name": "BaseBdev3", 00:32:00.004 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:32:00.004 "is_configured": true, 00:32:00.004 "data_offset": 2048, 00:32:00.005 "data_size": 63488 00:32:00.005 }, 00:32:00.005 { 00:32:00.005 "name": "BaseBdev4", 00:32:00.005 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:32:00.005 "is_configured": true, 00:32:00.005 "data_offset": 2048, 00:32:00.005 "data_size": 63488 00:32:00.005 } 00:32:00.005 ] 00:32:00.005 }' 00:32:00.005 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:00.005 00:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.263 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:00.263 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:00.263 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:00.263 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:00.263 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:00.263 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.263 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.522 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:00.522 "name": "raid_bdev1", 00:32:00.522 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:32:00.522 "strip_size_kb": 64, 00:32:00.522 "state": "online", 00:32:00.522 "raid_level": "raid5f", 00:32:00.522 "superblock": true, 00:32:00.522 "num_base_bdevs": 4, 00:32:00.522 "num_base_bdevs_discovered": 3, 00:32:00.522 "num_base_bdevs_operational": 3, 00:32:00.522 "base_bdevs_list": [ 00:32:00.522 { 00:32:00.522 "name": null, 00:32:00.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.523 "is_configured": false, 00:32:00.523 "data_offset": 2048, 00:32:00.523 "data_size": 63488 00:32:00.523 }, 00:32:00.523 { 00:32:00.523 "name": "BaseBdev2", 00:32:00.523 "uuid": 
"86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:32:00.523 "is_configured": true, 00:32:00.523 "data_offset": 2048, 00:32:00.523 "data_size": 63488 00:32:00.523 }, 00:32:00.523 { 00:32:00.523 "name": "BaseBdev3", 00:32:00.523 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:32:00.523 "is_configured": true, 00:32:00.523 "data_offset": 2048, 00:32:00.523 "data_size": 63488 00:32:00.523 }, 00:32:00.523 { 00:32:00.523 "name": "BaseBdev4", 00:32:00.523 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:32:00.523 "is_configured": true, 00:32:00.523 "data_offset": 2048, 00:32:00.523 "data_size": 63488 00:32:00.523 } 00:32:00.523 ] 00:32:00.523 }' 00:32:00.523 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:00.523 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:00.523 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:00.523 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:00.523 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:00.782 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:01.041 [2024-07-25 00:15:56.831772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:01.041 [2024-07-25 00:15:56.831856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:01.041 [2024-07-25 00:15:56.831890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:32:01.041 [2024-07-25 00:15:56.831915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:01.041 [2024-07-25 00:15:56.832481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:01.041 [2024-07-25 00:15:56.832509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:01.041 [2024-07-25 00:15:56.832598] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:01.041 [2024-07-25 00:15:56.832614] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:01.041 [2024-07-25 00:15:56.832626] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:01.041 BaseBdev1 00:32:01.041 00:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:32:02.419 00:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:02.419 00:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:02.419 00:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:02.419 00:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:02.419 00:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:02.419 00:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:02.419 00:15:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:02.419 00:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:02.419 00:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:02.419 00:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:02.419 00:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.419 00:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.419 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:02.419 "name": "raid_bdev1", 00:32:02.419 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:32:02.419 "strip_size_kb": 64, 00:32:02.419 "state": "online", 00:32:02.419 "raid_level": "raid5f", 00:32:02.419 "superblock": true, 00:32:02.419 "num_base_bdevs": 4, 00:32:02.419 "num_base_bdevs_discovered": 3, 00:32:02.419 "num_base_bdevs_operational": 3, 00:32:02.419 "base_bdevs_list": [ 00:32:02.419 { 00:32:02.419 "name": null, 00:32:02.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.419 "is_configured": false, 00:32:02.419 "data_offset": 2048, 00:32:02.419 "data_size": 63488 00:32:02.419 }, 00:32:02.419 { 00:32:02.419 "name": "BaseBdev2", 00:32:02.419 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:32:02.419 "is_configured": true, 00:32:02.419 "data_offset": 2048, 00:32:02.419 "data_size": 63488 00:32:02.419 }, 00:32:02.419 { 00:32:02.419 "name": "BaseBdev3", 00:32:02.419 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:32:02.419 "is_configured": true, 00:32:02.419 "data_offset": 2048, 00:32:02.419 "data_size": 63488 00:32:02.419 }, 00:32:02.419 { 00:32:02.419 "name": "BaseBdev4", 00:32:02.419 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:32:02.419 "is_configured": true, 00:32:02.419 "data_offset": 2048, 00:32:02.419 "data_size": 63488 00:32:02.419 } 00:32:02.419 ] 00:32:02.419 }' 00:32:02.419 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:02.419 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.678 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:02.678 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:02.678 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:02.678 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:02.678 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:02.678 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.678 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:02.937 "name": "raid_bdev1", 00:32:02.937 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:32:02.937 "strip_size_kb": 64, 00:32:02.937 "state": "online", 00:32:02.937 "raid_level": "raid5f", 00:32:02.937 "superblock": true, 
00:32:02.937 "num_base_bdevs": 4, 00:32:02.937 "num_base_bdevs_discovered": 3, 00:32:02.937 "num_base_bdevs_operational": 3, 00:32:02.937 "base_bdevs_list": [ 00:32:02.937 { 00:32:02.937 "name": null, 00:32:02.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.937 "is_configured": false, 00:32:02.937 "data_offset": 2048, 00:32:02.937 "data_size": 63488 00:32:02.937 }, 00:32:02.937 { 00:32:02.937 "name": "BaseBdev2", 00:32:02.937 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:32:02.937 "is_configured": true, 00:32:02.937 "data_offset": 2048, 00:32:02.937 "data_size": 63488 00:32:02.937 }, 00:32:02.937 { 00:32:02.937 "name": "BaseBdev3", 00:32:02.937 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:32:02.937 "is_configured": true, 00:32:02.937 "data_offset": 2048, 00:32:02.937 "data_size": 63488 00:32:02.937 }, 00:32:02.937 { 00:32:02.937 "name": "BaseBdev4", 00:32:02.937 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:32:02.937 "is_configured": true, 00:32:02.937 "data_offset": 2048, 00:32:02.937 "data_size": 63488 00:32:02.937 } 00:32:02.937 ] 00:32:02.937 }' 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:02.937 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:02.938 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:02.938 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:03.196 [2024-07-25 00:15:58.868301] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:03.196 [2024-07-25 00:15:58.868604] 
bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:03.197 [2024-07-25 00:15:58.868628] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:03.197 request: 00:32:03.197 { 00:32:03.197 "base_bdev": "BaseBdev1", 00:32:03.197 "raid_bdev": "raid_bdev1", 00:32:03.197 "method": "bdev_raid_add_base_bdev", 00:32:03.197 "req_id": 1 00:32:03.197 } 00:32:03.197 Got JSON-RPC error response 00:32:03.197 response: 00:32:03.197 { 00:32:03.197 "code": -22, 00:32:03.197 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:03.197 } 00:32:03.197 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:32:03.197 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:03.197 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:03.197 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:03.197 00:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.134 00:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.393 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:04.393 "name": "raid_bdev1", 00:32:04.393 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:32:04.393 "strip_size_kb": 64, 00:32:04.393 "state": "online", 00:32:04.393 "raid_level": "raid5f", 00:32:04.393 "superblock": true, 00:32:04.393 "num_base_bdevs": 4, 00:32:04.393 "num_base_bdevs_discovered": 3, 00:32:04.393 "num_base_bdevs_operational": 3, 00:32:04.393 "base_bdevs_list": [ 00:32:04.393 { 00:32:04.393 "name": null, 00:32:04.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.393 "is_configured": false, 00:32:04.393 "data_offset": 2048, 00:32:04.394 "data_size": 63488 00:32:04.394 }, 00:32:04.394 { 00:32:04.394 "name": "BaseBdev2", 00:32:04.394 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:32:04.394 "is_configured": true, 00:32:04.394 "data_offset": 2048, 00:32:04.394 
"data_size": 63488 00:32:04.394 }, 00:32:04.394 { 00:32:04.394 "name": "BaseBdev3", 00:32:04.394 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:32:04.394 "is_configured": true, 00:32:04.394 "data_offset": 2048, 00:32:04.394 "data_size": 63488 00:32:04.394 }, 00:32:04.394 { 00:32:04.394 "name": "BaseBdev4", 00:32:04.394 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:32:04.394 "is_configured": true, 00:32:04.394 "data_offset": 2048, 00:32:04.394 "data_size": 63488 00:32:04.394 } 00:32:04.394 ] 00:32:04.394 }' 00:32:04.394 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:04.394 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:04.653 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:04.653 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:04.653 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:04.653 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:04.653 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:04.653 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.653 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:04.912 "name": "raid_bdev1", 00:32:04.912 "uuid": "2675cfc3-19bf-48d4-8186-63c08a58406e", 00:32:04.912 "strip_size_kb": 64, 00:32:04.912 "state": "online", 00:32:04.912 "raid_level": "raid5f", 00:32:04.912 "superblock": true, 00:32:04.912 "num_base_bdevs": 4, 00:32:04.912 "num_base_bdevs_discovered": 3, 00:32:04.912 "num_base_bdevs_operational": 3, 00:32:04.912 "base_bdevs_list": [ 00:32:04.912 { 00:32:04.912 "name": null, 00:32:04.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.912 "is_configured": false, 00:32:04.912 "data_offset": 2048, 00:32:04.912 "data_size": 63488 00:32:04.912 }, 00:32:04.912 { 00:32:04.912 "name": "BaseBdev2", 00:32:04.912 "uuid": "86e25388-4be4-57f7-a7c3-106d1e3fd662", 00:32:04.912 "is_configured": true, 00:32:04.912 "data_offset": 2048, 00:32:04.912 "data_size": 63488 00:32:04.912 }, 00:32:04.912 { 00:32:04.912 "name": "BaseBdev3", 00:32:04.912 "uuid": "1e8c58ae-0e73-5fe7-8211-635568f8fe4e", 00:32:04.912 "is_configured": true, 00:32:04.912 "data_offset": 2048, 00:32:04.912 "data_size": 63488 00:32:04.912 }, 00:32:04.912 { 00:32:04.912 "name": "BaseBdev4", 00:32:04.912 "uuid": "3d43dd0f-27a0-586e-a8e5-d3ad457734db", 00:32:04.912 "is_configured": true, 00:32:04.912 "data_offset": 2048, 00:32:04.912 "data_size": 63488 00:32:04.912 } 00:32:04.912 ] 00:32:04.912 }' 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@798 -- # killprocess 109208 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 109208 ']' 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 109208 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109208 00:32:04.912 killing process with pid 109208 00:32:04.912 Received shutdown signal, test time was about 60.000000 seconds 00:32:04.912 00:32:04.912 Latency(us) 00:32:04.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.912 =================================================================================================================== 00:32:04.912 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109208' 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 109208 00:32:04.912 [2024-07-25 00:16:00.665321] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:04.912 00:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 109208 00:32:04.912 [2024-07-25 00:16:00.665427] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:04.912 [2024-07-25 00:16:00.665511] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:04.912 [2024-07-25 00:16:00.665525] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:32:05.171 [2024-07-25 00:16:00.974202] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:06.108 00:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:32:06.108 00:32:06.108 real 0m34.202s 00:32:06.108 user 0m48.948s 00:32:06.108 sys 0m3.945s 00:32:06.108 00:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:06.108 00:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.108 ************************************ 00:32:06.108 END TEST raid5f_rebuild_test_sb 00:32:06.108 ************************************ 00:32:06.108 00:16:01 bdev_raid -- bdev/bdev_raid.sh@976 -- # base_blocklen=4096 00:32:06.108 00:16:01 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:32:06.108 00:16:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:06.108 00:16:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:06.108 00:16:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:06.108 ************************************ 00:32:06.108 START TEST raid_state_function_test_sb_4k 00:32:06.108 ************************************ 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test 
raid1 2 true 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:06.108 Process raid pid: 110106 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=110106 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 110106' 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 110106 /var/tmp/spdk-raid.sock 00:32:06.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
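The preamble above declares the test's bookkeeping locals and launches bdev_svc on a dedicated RPC socket. As a hedged, standalone illustration of the state check this test performs repeatedly via verify_raid_bdev_state (the rpc.py path, socket, RPC name, and jq filter are taken verbatim from the trace; the check_raid_state helper name and its argument layout are hypothetical):

    #!/usr/bin/env bash
    # Minimal sketch: fetch every raid bdev over the test's RPC socket,
    # keep the one we care about, and compare its reported state.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    check_raid_state() {
        local name=$1 expected=$2 info state
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$name\")")
        state=$(jq -r '.state' <<< "$info")
        [[ $state == "$expected" ]] || {
            echo "raid bdev $name: state $state != $expected" >&2
            return 1
        }
    }

    check_raid_state Existed_Raid configuring

The trace's own helper additionally compares raid_level, strip_size_kb, and the number of operational base bdevs from the same JSON document; the sketch shows only the state comparison.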
00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 110106 ']' 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:06.108 00:16:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:06.367 [2024-07-25 00:16:02.017506] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:32:06.367 [2024-07-25 00:16:02.017684] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.367 [2024-07-25 00:16:02.190395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.626 [2024-07-25 00:16:02.344922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.626 [2024-07-25 00:16:02.486453] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:07.194 00:16:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:07.194 00:16:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:32:07.194 00:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:07.453 [2024-07-25 00:16:03.149064] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:07.453 [2024-07-25 00:16:03.149132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:07.453 [2024-07-25 00:16:03.149146] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:07.453 [2024-07-25 00:16:03.149159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:07.453 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.711 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:07.711 "name": "Existed_Raid", 00:32:07.711 "uuid": "573e8d68-aac2-4dd6-a4ca-318efff8d685", 00:32:07.711 "strip_size_kb": 0, 00:32:07.711 "state": "configuring", 00:32:07.711 "raid_level": "raid1", 00:32:07.711 "superblock": true, 00:32:07.711 "num_base_bdevs": 2, 00:32:07.711 "num_base_bdevs_discovered": 0, 00:32:07.711 "num_base_bdevs_operational": 2, 00:32:07.711 "base_bdevs_list": [ 00:32:07.711 { 00:32:07.711 "name": "BaseBdev1", 00:32:07.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.711 "is_configured": false, 00:32:07.711 "data_offset": 0, 00:32:07.711 "data_size": 0 00:32:07.711 }, 00:32:07.711 { 00:32:07.711 "name": "BaseBdev2", 00:32:07.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.711 "is_configured": false, 00:32:07.711 "data_offset": 0, 00:32:07.711 "data_size": 0 00:32:07.711 } 00:32:07.711 ] 00:32:07.711 }' 00:32:07.711 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:07.711 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:07.969 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:08.227 [2024-07-25 00:16:03.933165] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:08.227 [2024-07-25 00:16:03.933205] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:32:08.227 00:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:08.501 [2024-07-25 00:16:04.189252] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:08.501 [2024-07-25 00:16:04.189319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:08.501 [2024-07-25 00:16:04.189332] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:08.501 [2024-07-25 00:16:04.189345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:08.501 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:32:08.773 [2024-07-25 00:16:04.457571] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:08.773 BaseBdev1 00:32:08.773 00:16:04 bdev_raid.raid_state_function_test_sb_4k 
-- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:08.773 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:08.773 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:08.773 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:32:08.773 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:08.773 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:08.773 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:09.032 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:09.032 [ 00:32:09.032 { 00:32:09.032 "name": "BaseBdev1", 00:32:09.032 "aliases": [ 00:32:09.032 "8f6b4e74-e183-4fb1-be0c-b6615303573d" 00:32:09.032 ], 00:32:09.032 "product_name": "Malloc disk", 00:32:09.032 "block_size": 4096, 00:32:09.032 "num_blocks": 8192, 00:32:09.032 "uuid": "8f6b4e74-e183-4fb1-be0c-b6615303573d", 00:32:09.032 "assigned_rate_limits": { 00:32:09.032 "rw_ios_per_sec": 0, 00:32:09.032 "rw_mbytes_per_sec": 0, 00:32:09.032 "r_mbytes_per_sec": 0, 00:32:09.032 "w_mbytes_per_sec": 0 00:32:09.032 }, 00:32:09.032 "claimed": true, 00:32:09.032 "claim_type": "exclusive_write", 00:32:09.032 "zoned": false, 00:32:09.032 "supported_io_types": { 00:32:09.032 "read": true, 00:32:09.032 "write": true, 00:32:09.032 "unmap": true, 00:32:09.032 "flush": true, 00:32:09.032 "reset": true, 00:32:09.032 "nvme_admin": false, 00:32:09.032 "nvme_io": false, 00:32:09.032 "nvme_io_md": false, 00:32:09.032 "write_zeroes": true, 00:32:09.032 "zcopy": true, 00:32:09.032 "get_zone_info": false, 00:32:09.032 "zone_management": false, 00:32:09.032 "zone_append": false, 00:32:09.032 "compare": false, 00:32:09.032 "compare_and_write": false, 00:32:09.032 "abort": true, 00:32:09.032 "seek_hole": false, 00:32:09.032 "seek_data": false, 00:32:09.032 "copy": true, 00:32:09.032 "nvme_iov_md": false 00:32:09.032 }, 00:32:09.032 "memory_domains": [ 00:32:09.032 { 00:32:09.032 "dma_device_id": "system", 00:32:09.032 "dma_device_type": 1 00:32:09.032 }, 00:32:09.032 { 00:32:09.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.032 "dma_device_type": 2 00:32:09.032 } 00:32:09.032 ], 00:32:09.032 "driver_specific": {} 00:32:09.032 } 00:32:09.032 ] 00:32:09.032 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:32:09.032 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:09.032 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:09.032 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:09.032 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:09.033 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:09.033 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:32:09.033 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:09.033 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:09.033 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:09.033 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:09.033 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:09.033 00:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:09.291 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:09.291 "name": "Existed_Raid", 00:32:09.291 "uuid": "f5242f92-4c0e-4595-bdc8-997561b02ce0", 00:32:09.291 "strip_size_kb": 0, 00:32:09.291 "state": "configuring", 00:32:09.291 "raid_level": "raid1", 00:32:09.292 "superblock": true, 00:32:09.292 "num_base_bdevs": 2, 00:32:09.292 "num_base_bdevs_discovered": 1, 00:32:09.292 "num_base_bdevs_operational": 2, 00:32:09.292 "base_bdevs_list": [ 00:32:09.292 { 00:32:09.292 "name": "BaseBdev1", 00:32:09.292 "uuid": "8f6b4e74-e183-4fb1-be0c-b6615303573d", 00:32:09.292 "is_configured": true, 00:32:09.292 "data_offset": 256, 00:32:09.292 "data_size": 7936 00:32:09.292 }, 00:32:09.292 { 00:32:09.292 "name": "BaseBdev2", 00:32:09.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:09.292 "is_configured": false, 00:32:09.292 "data_offset": 0, 00:32:09.292 "data_size": 0 00:32:09.292 } 00:32:09.292 ] 00:32:09.292 }' 00:32:09.292 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:09.292 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:09.551 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:09.810 [2024-07-25 00:16:05.589925] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:09.810 [2024-07-25 00:16:05.589976] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:32:09.810 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:10.069 [2024-07-25 00:16:05.785999] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:10.069 [2024-07-25 00:16:05.787672] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:10.069 [2024-07-25 00:16:05.787721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:10.069 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:10.328 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:10.328 "name": "Existed_Raid", 00:32:10.328 "uuid": "9954c323-7c39-4277-984b-de53687b5ff0", 00:32:10.328 "strip_size_kb": 0, 00:32:10.328 "state": "configuring", 00:32:10.328 "raid_level": "raid1", 00:32:10.328 "superblock": true, 00:32:10.328 "num_base_bdevs": 2, 00:32:10.328 "num_base_bdevs_discovered": 1, 00:32:10.328 "num_base_bdevs_operational": 2, 00:32:10.328 "base_bdevs_list": [ 00:32:10.328 { 00:32:10.328 "name": "BaseBdev1", 00:32:10.328 "uuid": "8f6b4e74-e183-4fb1-be0c-b6615303573d", 00:32:10.328 "is_configured": true, 00:32:10.328 "data_offset": 256, 00:32:10.328 "data_size": 7936 00:32:10.328 }, 00:32:10.328 { 00:32:10.328 "name": "BaseBdev2", 00:32:10.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.328 "is_configured": false, 00:32:10.328 "data_offset": 0, 00:32:10.328 "data_size": 0 00:32:10.328 } 00:32:10.328 ] 00:32:10.328 }' 00:32:10.329 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:10.329 00:16:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:10.588 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:32:10.847 BaseBdev2 00:32:10.847 [2024-07-25 00:16:06.523877] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:10.847 [2024-07-25 00:16:06.524134] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:32:10.847 [2024-07-25 00:16:06.524151] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:10.847 [2024-07-25 00:16:06.524247] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:32:10.847 [2024-07-25 00:16:06.524572] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:32:10.847 [2024-07-25 00:16:06.524590] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 
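At this point the trace has built the second base bdev and brought Existed_Raid online. A compact sketch of the RPC sequence being exercised, with every command and argument appearing verbatim in the log (two 32 MiB malloc bdevs with a 4 KiB block size, assembled into a superblock-enabled raid1 volume):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    "$rpc" -s "$sock" bdev_malloc_create 32 4096 -b BaseBdev1
    "$rpc" -s "$sock" bdev_malloc_create 32 4096 -b BaseBdev2
    # -s writes the on-disk superblock; -r selects the RAID level.
    "$rpc" -s "$sock" bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

Note the asymmetry visible in the trace: when bdev_raid_create runs before the base bdevs exist, the raid bdev sits in the "configuring" state and only transitions to "online" once both members are claimed.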
00:32:10.847 [2024-07-25 00:16:06.524721] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:10.847 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:10.847 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:10.847 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:10.847 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:32:10.847 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:10.847 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:10.847 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:11.106 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:11.365 [ 00:32:11.365 { 00:32:11.365 "name": "BaseBdev2", 00:32:11.365 "aliases": [ 00:32:11.365 "752bdb8e-d9ff-4ba7-ab7c-d5473a270145" 00:32:11.365 ], 00:32:11.365 "product_name": "Malloc disk", 00:32:11.365 "block_size": 4096, 00:32:11.365 "num_blocks": 8192, 00:32:11.365 "uuid": "752bdb8e-d9ff-4ba7-ab7c-d5473a270145", 00:32:11.365 "assigned_rate_limits": { 00:32:11.365 "rw_ios_per_sec": 0, 00:32:11.365 "rw_mbytes_per_sec": 0, 00:32:11.365 "r_mbytes_per_sec": 0, 00:32:11.365 "w_mbytes_per_sec": 0 00:32:11.365 }, 00:32:11.365 "claimed": true, 00:32:11.365 "claim_type": "exclusive_write", 00:32:11.365 "zoned": false, 00:32:11.365 "supported_io_types": { 00:32:11.365 "read": true, 00:32:11.365 "write": true, 00:32:11.365 "unmap": true, 00:32:11.365 "flush": true, 00:32:11.365 "reset": true, 00:32:11.365 "nvme_admin": false, 00:32:11.365 "nvme_io": false, 00:32:11.365 "nvme_io_md": false, 00:32:11.365 "write_zeroes": true, 00:32:11.365 "zcopy": true, 00:32:11.365 "get_zone_info": false, 00:32:11.365 "zone_management": false, 00:32:11.365 "zone_append": false, 00:32:11.365 "compare": false, 00:32:11.365 "compare_and_write": false, 00:32:11.365 "abort": true, 00:32:11.365 "seek_hole": false, 00:32:11.365 "seek_data": false, 00:32:11.365 "copy": true, 00:32:11.365 "nvme_iov_md": false 00:32:11.365 }, 00:32:11.365 "memory_domains": [ 00:32:11.365 { 00:32:11.365 "dma_device_id": "system", 00:32:11.365 "dma_device_type": 1 00:32:11.365 }, 00:32:11.365 { 00:32:11.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:11.365 "dma_device_type": 2 00:32:11.365 } 00:32:11.365 ], 00:32:11.365 "driver_specific": {} 00:32:11.365 } 00:32:11.365 ] 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:11.365 00:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:11.365 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:11.365 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:11.366 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:11.366 "name": "Existed_Raid", 00:32:11.366 "uuid": "9954c323-7c39-4277-984b-de53687b5ff0", 00:32:11.366 "strip_size_kb": 0, 00:32:11.366 "state": "online", 00:32:11.366 "raid_level": "raid1", 00:32:11.366 "superblock": true, 00:32:11.366 "num_base_bdevs": 2, 00:32:11.366 "num_base_bdevs_discovered": 2, 00:32:11.366 "num_base_bdevs_operational": 2, 00:32:11.366 "base_bdevs_list": [ 00:32:11.366 { 00:32:11.366 "name": "BaseBdev1", 00:32:11.366 "uuid": "8f6b4e74-e183-4fb1-be0c-b6615303573d", 00:32:11.366 "is_configured": true, 00:32:11.366 "data_offset": 256, 00:32:11.366 "data_size": 7936 00:32:11.366 }, 00:32:11.366 { 00:32:11.366 "name": "BaseBdev2", 00:32:11.366 "uuid": "752bdb8e-d9ff-4ba7-ab7c-d5473a270145", 00:32:11.366 "is_configured": true, 00:32:11.366 "data_offset": 256, 00:32:11.366 "data_size": 7936 00:32:11.366 } 00:32:11.366 ] 00:32:11.366 }' 00:32:11.366 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:11.366 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:11.624 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:11.624 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:11.624 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:11.625 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:11.625 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:11.625 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:32:11.625 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:11.625 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:11.883 [2024-07-25 00:16:07.664492] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:11.883 00:16:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:11.883 "name": "Existed_Raid", 00:32:11.883 "aliases": [ 00:32:11.883 "9954c323-7c39-4277-984b-de53687b5ff0" 00:32:11.883 ], 00:32:11.883 "product_name": "Raid Volume", 00:32:11.883 "block_size": 4096, 00:32:11.883 "num_blocks": 7936, 00:32:11.883 "uuid": "9954c323-7c39-4277-984b-de53687b5ff0", 00:32:11.883 "assigned_rate_limits": { 00:32:11.883 "rw_ios_per_sec": 0, 00:32:11.883 "rw_mbytes_per_sec": 0, 00:32:11.883 "r_mbytes_per_sec": 0, 00:32:11.883 "w_mbytes_per_sec": 0 00:32:11.883 }, 00:32:11.883 "claimed": false, 00:32:11.883 "zoned": false, 00:32:11.883 "supported_io_types": { 00:32:11.883 "read": true, 00:32:11.883 "write": true, 00:32:11.883 "unmap": false, 00:32:11.883 "flush": false, 00:32:11.883 "reset": true, 00:32:11.883 "nvme_admin": false, 00:32:11.883 "nvme_io": false, 00:32:11.883 "nvme_io_md": false, 00:32:11.883 "write_zeroes": true, 00:32:11.883 "zcopy": false, 00:32:11.883 "get_zone_info": false, 00:32:11.883 "zone_management": false, 00:32:11.883 "zone_append": false, 00:32:11.883 "compare": false, 00:32:11.883 "compare_and_write": false, 00:32:11.884 "abort": false, 00:32:11.884 "seek_hole": false, 00:32:11.884 "seek_data": false, 00:32:11.884 "copy": false, 00:32:11.884 "nvme_iov_md": false 00:32:11.884 }, 00:32:11.884 "memory_domains": [ 00:32:11.884 { 00:32:11.884 "dma_device_id": "system", 00:32:11.884 "dma_device_type": 1 00:32:11.884 }, 00:32:11.884 { 00:32:11.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:11.884 "dma_device_type": 2 00:32:11.884 }, 00:32:11.884 { 00:32:11.884 "dma_device_id": "system", 00:32:11.884 "dma_device_type": 1 00:32:11.884 }, 00:32:11.884 { 00:32:11.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:11.884 "dma_device_type": 2 00:32:11.884 } 00:32:11.884 ], 00:32:11.884 "driver_specific": { 00:32:11.884 "raid": { 00:32:11.884 "uuid": "9954c323-7c39-4277-984b-de53687b5ff0", 00:32:11.884 "strip_size_kb": 0, 00:32:11.884 "state": "online", 00:32:11.884 "raid_level": "raid1", 00:32:11.884 "superblock": true, 00:32:11.884 "num_base_bdevs": 2, 00:32:11.884 "num_base_bdevs_discovered": 2, 00:32:11.884 "num_base_bdevs_operational": 2, 00:32:11.884 "base_bdevs_list": [ 00:32:11.884 { 00:32:11.884 "name": "BaseBdev1", 00:32:11.884 "uuid": "8f6b4e74-e183-4fb1-be0c-b6615303573d", 00:32:11.884 "is_configured": true, 00:32:11.884 "data_offset": 256, 00:32:11.884 "data_size": 7936 00:32:11.884 }, 00:32:11.884 { 00:32:11.884 "name": "BaseBdev2", 00:32:11.884 "uuid": "752bdb8e-d9ff-4ba7-ab7c-d5473a270145", 00:32:11.884 "is_configured": true, 00:32:11.884 "data_offset": 256, 00:32:11.884 "data_size": 7936 00:32:11.884 } 00:32:11.884 ] 00:32:11.884 } 00:32:11.884 } 00:32:11.884 }' 00:32:11.884 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:11.884 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:11.884 BaseBdev2' 00:32:11.884 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:11.884 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:11.884 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:12.142 00:16:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:12.142 "name": "BaseBdev1", 00:32:12.142 "aliases": [ 00:32:12.142 "8f6b4e74-e183-4fb1-be0c-b6615303573d" 00:32:12.142 ], 00:32:12.142 "product_name": "Malloc disk", 00:32:12.142 "block_size": 4096, 00:32:12.142 "num_blocks": 8192, 00:32:12.142 "uuid": "8f6b4e74-e183-4fb1-be0c-b6615303573d", 00:32:12.142 "assigned_rate_limits": { 00:32:12.142 "rw_ios_per_sec": 0, 00:32:12.142 "rw_mbytes_per_sec": 0, 00:32:12.142 "r_mbytes_per_sec": 0, 00:32:12.142 "w_mbytes_per_sec": 0 00:32:12.142 }, 00:32:12.142 "claimed": true, 00:32:12.142 "claim_type": "exclusive_write", 00:32:12.142 "zoned": false, 00:32:12.142 "supported_io_types": { 00:32:12.142 "read": true, 00:32:12.142 "write": true, 00:32:12.142 "unmap": true, 00:32:12.142 "flush": true, 00:32:12.142 "reset": true, 00:32:12.142 "nvme_admin": false, 00:32:12.142 "nvme_io": false, 00:32:12.142 "nvme_io_md": false, 00:32:12.142 "write_zeroes": true, 00:32:12.142 "zcopy": true, 00:32:12.142 "get_zone_info": false, 00:32:12.142 "zone_management": false, 00:32:12.142 "zone_append": false, 00:32:12.142 "compare": false, 00:32:12.142 "compare_and_write": false, 00:32:12.142 "abort": true, 00:32:12.142 "seek_hole": false, 00:32:12.142 "seek_data": false, 00:32:12.142 "copy": true, 00:32:12.142 "nvme_iov_md": false 00:32:12.142 }, 00:32:12.142 "memory_domains": [ 00:32:12.142 { 00:32:12.142 "dma_device_id": "system", 00:32:12.142 "dma_device_type": 1 00:32:12.142 }, 00:32:12.142 { 00:32:12.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:12.142 "dma_device_type": 2 00:32:12.142 } 00:32:12.142 ], 00:32:12.142 "driver_specific": {} 00:32:12.142 }' 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:12.142 00:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:12.400 
"name": "BaseBdev2", 00:32:12.400 "aliases": [ 00:32:12.400 "752bdb8e-d9ff-4ba7-ab7c-d5473a270145" 00:32:12.400 ], 00:32:12.400 "product_name": "Malloc disk", 00:32:12.400 "block_size": 4096, 00:32:12.400 "num_blocks": 8192, 00:32:12.400 "uuid": "752bdb8e-d9ff-4ba7-ab7c-d5473a270145", 00:32:12.400 "assigned_rate_limits": { 00:32:12.400 "rw_ios_per_sec": 0, 00:32:12.400 "rw_mbytes_per_sec": 0, 00:32:12.400 "r_mbytes_per_sec": 0, 00:32:12.400 "w_mbytes_per_sec": 0 00:32:12.400 }, 00:32:12.400 "claimed": true, 00:32:12.400 "claim_type": "exclusive_write", 00:32:12.400 "zoned": false, 00:32:12.400 "supported_io_types": { 00:32:12.400 "read": true, 00:32:12.400 "write": true, 00:32:12.400 "unmap": true, 00:32:12.400 "flush": true, 00:32:12.400 "reset": true, 00:32:12.400 "nvme_admin": false, 00:32:12.400 "nvme_io": false, 00:32:12.400 "nvme_io_md": false, 00:32:12.400 "write_zeroes": true, 00:32:12.400 "zcopy": true, 00:32:12.400 "get_zone_info": false, 00:32:12.400 "zone_management": false, 00:32:12.400 "zone_append": false, 00:32:12.400 "compare": false, 00:32:12.400 "compare_and_write": false, 00:32:12.400 "abort": true, 00:32:12.400 "seek_hole": false, 00:32:12.400 "seek_data": false, 00:32:12.400 "copy": true, 00:32:12.400 "nvme_iov_md": false 00:32:12.400 }, 00:32:12.400 "memory_domains": [ 00:32:12.400 { 00:32:12.400 "dma_device_id": "system", 00:32:12.400 "dma_device_type": 1 00:32:12.400 }, 00:32:12.400 { 00:32:12.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:12.400 "dma_device_type": 2 00:32:12.400 } 00:32:12.400 ], 00:32:12.400 "driver_specific": {} 00:32:12.400 }' 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:12.400 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:12.658 [2024-07-25 00:16:08.396475] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:12.658 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:12.658 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:32:12.658 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 
00:32:12.658 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:32:12.658 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:12.658 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:12.659 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:12.659 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:12.659 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:12.659 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:12.659 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:12.659 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:12.659 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:12.659 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:12.659 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:12.659 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.659 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:12.916 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:12.916 "name": "Existed_Raid", 00:32:12.916 "uuid": "9954c323-7c39-4277-984b-de53687b5ff0", 00:32:12.916 "strip_size_kb": 0, 00:32:12.916 "state": "online", 00:32:12.916 "raid_level": "raid1", 00:32:12.916 "superblock": true, 00:32:12.916 "num_base_bdevs": 2, 00:32:12.916 "num_base_bdevs_discovered": 1, 00:32:12.916 "num_base_bdevs_operational": 1, 00:32:12.916 "base_bdevs_list": [ 00:32:12.916 { 00:32:12.916 "name": null, 00:32:12.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.916 "is_configured": false, 00:32:12.916 "data_offset": 256, 00:32:12.916 "data_size": 7936 00:32:12.916 }, 00:32:12.916 { 00:32:12.916 "name": "BaseBdev2", 00:32:12.916 "uuid": "752bdb8e-d9ff-4ba7-ab7c-d5473a270145", 00:32:12.916 "is_configured": true, 00:32:12.916 "data_offset": 256, 00:32:12.916 "data_size": 7936 00:32:12.916 } 00:32:12.916 ] 00:32:12.916 }' 00:32:12.916 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:12.916 00:16:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:13.483 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:13.483 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:13.483 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.483 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:13.483 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:13.483 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:13.483 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:13.741 [2024-07-25 00:16:09.443706] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:13.741 [2024-07-25 00:16:09.443841] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:13.741 [2024-07-25 00:16:09.513517] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:13.741 [2024-07-25 00:16:09.513567] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:13.741 [2024-07-25 00:16:09.513584] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:32:13.741 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:13.741 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:13.741 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.741 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 110106 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 110106 ']' 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 110106 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110106 00:32:14.000 killing process with pid 110106 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110106' 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 110106 00:32:14.000 [2024-07-25 00:16:09.813827] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:14.000 [2024-07-25 00:16:09.813961] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:14.000 00:16:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 110106 00:32:14.935 00:16:10 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@343 -- # return 0 00:32:14.935 00:32:14.935 real 0m8.778s 00:32:14.935 user 0m14.438s 00:32:14.935 sys 0m1.394s 00:32:14.935 ************************************ 00:32:14.935 END TEST raid_state_function_test_sb_4k 00:32:14.935 ************************************ 00:32:14.935 00:16:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:14.935 00:16:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:14.935 00:16:10 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:32:14.935 00:16:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:14.935 00:16:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:14.935 00:16:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:14.935 ************************************ 00:32:14.935 START TEST raid_superblock_test_4k 00:32:14.935 ************************************ 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@414 -- # local strip_size 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@427 -- # raid_pid=110431 00:32:14.935 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@428 -- # waitforlisten 110431 /var/tmp/spdk-raid.sock 00:32:14.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
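The superblock test that starts below builds its RAID members differently from the state-function test: each malloc bdev is wrapped in a passthru bdev with a fixed UUID, so the on-disk superblock can be validated against a stable base-bdev identity. A sketch of that preparation, using only the commands and the UUID visible in the trace that follows:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    "$rpc" -s "$sock" bdev_malloc_create 32 4096 -b malloc1
    # Pin pt1's UUID so the raid superblock records a known identity.
    "$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001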
00:32:14.936 00:16:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 110431 ']' 00:32:14.936 00:16:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:14.936 00:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:14.936 00:16:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:14.936 00:16:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:14.936 00:16:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:14.936 00:16:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:15.194 [2024-07-25 00:16:10.850623] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:32:15.195 [2024-07-25 00:16:10.850826] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110431 ] 00:32:15.195 [2024-07-25 00:16:11.022098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.454 [2024-07-25 00:16:11.175072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.454 [2024-07-25 00:16:11.316955] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:16.022 00:16:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:16.022 00:16:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:32:16.022 00:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:32:16.022 00:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:16.022 00:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:32:16.022 00:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:32:16.022 00:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:16.022 00:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:16.022 00:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:32:16.022 00:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:16.022 00:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:32:16.281 malloc1 00:32:16.281 00:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:16.540 [2024-07-25 00:16:12.165577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:16.540 [2024-07-25 00:16:12.165641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.540 [2024-07-25 
00:16:12.165669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:32:16.540 [2024-07-25 00:16:12.165681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.540 [2024-07-25 00:16:12.167715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.540 [2024-07-25 00:16:12.167753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:16.540 pt1 00:32:16.540 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:32:16.540 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:16.540 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:32:16.540 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:32:16.540 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:16.540 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:16.540 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:32:16.540 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:16.540 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:32:16.799 malloc2 00:32:16.799 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:16.799 [2024-07-25 00:16:12.621354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:16.799 [2024-07-25 00:16:12.621418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.799 [2024-07-25 00:16:12.621444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:32:16.799 [2024-07-25 00:16:12.621456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.799 [2024-07-25 00:16:12.623517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.799 [2024-07-25 00:16:12.623554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:16.799 pt2 00:32:16.799 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:32:16.799 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:16.799 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:32:17.057 [2024-07-25 00:16:12.865436] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:17.057 [2024-07-25 00:16:12.867182] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:17.057 [2024-07-25 00:16:12.867378] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:32:17.057 [2024-07-25 00:16:12.867393] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:17.057 [2024-07-25 00:16:12.867501] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:32:17.057 [2024-07-25 00:16:12.867822] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:32:17.057 [2024-07-25 00:16:12.867840] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:32:17.057 [2024-07-25 00:16:12.867983] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.057 00:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.316 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:17.316 "name": "raid_bdev1", 00:32:17.316 "uuid": "f38f61de-c668-4a34-a571-b0c313425fc6", 00:32:17.316 "strip_size_kb": 0, 00:32:17.316 "state": "online", 00:32:17.316 "raid_level": "raid1", 00:32:17.316 "superblock": true, 00:32:17.316 "num_base_bdevs": 2, 00:32:17.316 "num_base_bdevs_discovered": 2, 00:32:17.316 "num_base_bdevs_operational": 2, 00:32:17.316 "base_bdevs_list": [ 00:32:17.316 { 00:32:17.316 "name": "pt1", 00:32:17.316 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:17.316 "is_configured": true, 00:32:17.316 "data_offset": 256, 00:32:17.316 "data_size": 7936 00:32:17.316 }, 00:32:17.316 { 00:32:17.316 "name": "pt2", 00:32:17.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:17.316 "is_configured": true, 00:32:17.316 "data_offset": 256, 00:32:17.316 "data_size": 7936 00:32:17.316 } 00:32:17.316 ] 00:32:17.316 }' 00:32:17.316 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:17.316 00:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:17.575 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:32:17.575 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:17.575 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:17.575 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
00:32:17.575 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:17.575 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:32:17.575 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:17.575 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:17.833 [2024-07-25 00:16:13.569797] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:17.833 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:17.833 "name": "raid_bdev1", 00:32:17.833 "aliases": [ 00:32:17.833 "f38f61de-c668-4a34-a571-b0c313425fc6" 00:32:17.833 ], 00:32:17.833 "product_name": "Raid Volume", 00:32:17.833 "block_size": 4096, 00:32:17.833 "num_blocks": 7936, 00:32:17.833 "uuid": "f38f61de-c668-4a34-a571-b0c313425fc6", 00:32:17.833 "assigned_rate_limits": { 00:32:17.833 "rw_ios_per_sec": 0, 00:32:17.833 "rw_mbytes_per_sec": 0, 00:32:17.833 "r_mbytes_per_sec": 0, 00:32:17.833 "w_mbytes_per_sec": 0 00:32:17.833 }, 00:32:17.833 "claimed": false, 00:32:17.833 "zoned": false, 00:32:17.833 "supported_io_types": { 00:32:17.833 "read": true, 00:32:17.833 "write": true, 00:32:17.833 "unmap": false, 00:32:17.833 "flush": false, 00:32:17.833 "reset": true, 00:32:17.833 "nvme_admin": false, 00:32:17.833 "nvme_io": false, 00:32:17.833 "nvme_io_md": false, 00:32:17.833 "write_zeroes": true, 00:32:17.833 "zcopy": false, 00:32:17.833 "get_zone_info": false, 00:32:17.833 "zone_management": false, 00:32:17.833 "zone_append": false, 00:32:17.833 "compare": false, 00:32:17.833 "compare_and_write": false, 00:32:17.833 "abort": false, 00:32:17.833 "seek_hole": false, 00:32:17.833 "seek_data": false, 00:32:17.833 "copy": false, 00:32:17.833 "nvme_iov_md": false 00:32:17.833 }, 00:32:17.833 "memory_domains": [ 00:32:17.833 { 00:32:17.833 "dma_device_id": "system", 00:32:17.833 "dma_device_type": 1 00:32:17.833 }, 00:32:17.833 { 00:32:17.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:17.834 "dma_device_type": 2 00:32:17.834 }, 00:32:17.834 { 00:32:17.834 "dma_device_id": "system", 00:32:17.834 "dma_device_type": 1 00:32:17.834 }, 00:32:17.834 { 00:32:17.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:17.834 "dma_device_type": 2 00:32:17.834 } 00:32:17.834 ], 00:32:17.834 "driver_specific": { 00:32:17.834 "raid": { 00:32:17.834 "uuid": "f38f61de-c668-4a34-a571-b0c313425fc6", 00:32:17.834 "strip_size_kb": 0, 00:32:17.834 "state": "online", 00:32:17.834 "raid_level": "raid1", 00:32:17.834 "superblock": true, 00:32:17.834 "num_base_bdevs": 2, 00:32:17.834 "num_base_bdevs_discovered": 2, 00:32:17.834 "num_base_bdevs_operational": 2, 00:32:17.834 "base_bdevs_list": [ 00:32:17.834 { 00:32:17.834 "name": "pt1", 00:32:17.834 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:17.834 "is_configured": true, 00:32:17.834 "data_offset": 256, 00:32:17.834 "data_size": 7936 00:32:17.834 }, 00:32:17.834 { 00:32:17.834 "name": "pt2", 00:32:17.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:17.834 "is_configured": true, 00:32:17.834 "data_offset": 256, 00:32:17.834 "data_size": 7936 00:32:17.834 } 00:32:17.834 ] 00:32:17.834 } 00:32:17.834 } 00:32:17.834 }' 00:32:17.834 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:32:17.834 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:17.834 pt2' 00:32:17.834 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:17.834 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:17.834 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:18.092 "name": "pt1", 00:32:18.092 "aliases": [ 00:32:18.092 "00000000-0000-0000-0000-000000000001" 00:32:18.092 ], 00:32:18.092 "product_name": "passthru", 00:32:18.092 "block_size": 4096, 00:32:18.092 "num_blocks": 8192, 00:32:18.092 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:18.092 "assigned_rate_limits": { 00:32:18.092 "rw_ios_per_sec": 0, 00:32:18.092 "rw_mbytes_per_sec": 0, 00:32:18.092 "r_mbytes_per_sec": 0, 00:32:18.092 "w_mbytes_per_sec": 0 00:32:18.092 }, 00:32:18.092 "claimed": true, 00:32:18.092 "claim_type": "exclusive_write", 00:32:18.092 "zoned": false, 00:32:18.092 "supported_io_types": { 00:32:18.092 "read": true, 00:32:18.092 "write": true, 00:32:18.092 "unmap": true, 00:32:18.092 "flush": true, 00:32:18.092 "reset": true, 00:32:18.092 "nvme_admin": false, 00:32:18.092 "nvme_io": false, 00:32:18.092 "nvme_io_md": false, 00:32:18.092 "write_zeroes": true, 00:32:18.092 "zcopy": true, 00:32:18.092 "get_zone_info": false, 00:32:18.092 "zone_management": false, 00:32:18.092 "zone_append": false, 00:32:18.092 "compare": false, 00:32:18.092 "compare_and_write": false, 00:32:18.092 "abort": true, 00:32:18.092 "seek_hole": false, 00:32:18.092 "seek_data": false, 00:32:18.092 "copy": true, 00:32:18.092 "nvme_iov_md": false 00:32:18.092 }, 00:32:18.092 "memory_domains": [ 00:32:18.092 { 00:32:18.092 "dma_device_id": "system", 00:32:18.092 "dma_device_type": 1 00:32:18.092 }, 00:32:18.092 { 00:32:18.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:18.092 "dma_device_type": 2 00:32:18.092 } 00:32:18.092 ], 00:32:18.092 "driver_specific": { 00:32:18.092 "passthru": { 00:32:18.092 "name": "pt1", 00:32:18.092 "base_bdev_name": "malloc1" 00:32:18.092 } 00:32:18.092 } 00:32:18.092 }' 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:18.092 00:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:18.351 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:18.351 "name": "pt2", 00:32:18.351 "aliases": [ 00:32:18.351 "00000000-0000-0000-0000-000000000002" 00:32:18.351 ], 00:32:18.351 "product_name": "passthru", 00:32:18.351 "block_size": 4096, 00:32:18.351 "num_blocks": 8192, 00:32:18.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:18.351 "assigned_rate_limits": { 00:32:18.351 "rw_ios_per_sec": 0, 00:32:18.351 "rw_mbytes_per_sec": 0, 00:32:18.351 "r_mbytes_per_sec": 0, 00:32:18.351 "w_mbytes_per_sec": 0 00:32:18.351 }, 00:32:18.351 "claimed": true, 00:32:18.351 "claim_type": "exclusive_write", 00:32:18.351 "zoned": false, 00:32:18.351 "supported_io_types": { 00:32:18.351 "read": true, 00:32:18.351 "write": true, 00:32:18.351 "unmap": true, 00:32:18.351 "flush": true, 00:32:18.351 "reset": true, 00:32:18.351 "nvme_admin": false, 00:32:18.351 "nvme_io": false, 00:32:18.351 "nvme_io_md": false, 00:32:18.351 "write_zeroes": true, 00:32:18.351 "zcopy": true, 00:32:18.351 "get_zone_info": false, 00:32:18.351 "zone_management": false, 00:32:18.351 "zone_append": false, 00:32:18.351 "compare": false, 00:32:18.351 "compare_and_write": false, 00:32:18.351 "abort": true, 00:32:18.351 "seek_hole": false, 00:32:18.351 "seek_data": false, 00:32:18.351 "copy": true, 00:32:18.351 "nvme_iov_md": false 00:32:18.351 }, 00:32:18.351 "memory_domains": [ 00:32:18.351 { 00:32:18.351 "dma_device_id": "system", 00:32:18.351 "dma_device_type": 1 00:32:18.351 }, 00:32:18.351 { 00:32:18.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:18.351 "dma_device_type": 2 00:32:18.351 } 00:32:18.351 ], 00:32:18.351 "driver_specific": { 00:32:18.351 "passthru": { 00:32:18.351 "name": "pt2", 00:32:18.351 "base_bdev_name": "malloc2" 00:32:18.351 } 00:32:18.351 } 00:32:18.351 }' 00:32:18.351 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:18.351 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:18.351 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:18.351 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:18.351 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:18.351 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:18.351 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:18.610 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:18.610 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:18.610 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:18.610 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:18.610 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:18.610 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:18.610 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:32:18.610 [2024-07-25 00:16:14.429979] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:18.610 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=f38f61de-c668-4a34-a571-b0c313425fc6 00:32:18.610 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' -z f38f61de-c668-4a34-a571-b0c313425fc6 ']' 00:32:18.610 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:18.869 [2024-07-25 00:16:14.697781] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:18.869 [2024-07-25 00:16:14.697817] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:18.869 [2024-07-25 00:16:14.697896] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:18.869 [2024-07-25 00:16:14.697956] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:18.869 [2024-07-25 00:16:14.697977] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:32:18.869 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:32:18.869 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.128 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:32:19.128 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:32:19.128 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:32:19.128 00:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:19.386 00:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:32:19.386 00:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:19.644 00:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:32:19.644 00:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:19.903 00:16:15 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:19.903 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:20.161 [2024-07-25 00:16:15.846041] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:20.161 [2024-07-25 00:16:15.847830] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:20.161 [2024-07-25 00:16:15.848042] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:20.161 [2024-07-25 00:16:15.848141] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:20.161 [2024-07-25 00:16:15.848164] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:20.161 [2024-07-25 00:16:15.848178] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state configuring 00:32:20.161 request: 00:32:20.161 { 00:32:20.161 "name": "raid_bdev1", 00:32:20.161 "raid_level": "raid1", 00:32:20.161 "base_bdevs": [ 00:32:20.161 "malloc1", 00:32:20.161 "malloc2" 00:32:20.161 ], 00:32:20.161 "superblock": false, 00:32:20.161 "method": "bdev_raid_create", 00:32:20.161 "req_id": 1 00:32:20.161 } 00:32:20.161 Got JSON-RPC error response 00:32:20.161 response: 00:32:20.161 { 00:32:20.161 "code": -17, 00:32:20.161 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:20.161 } 00:32:20.161 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:32:20.161 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:20.161 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:20.161 00:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:20.161 00:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.161 00:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@475 -- # 
'[' -n '' ']' 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:20.420 [2024-07-25 00:16:16.230102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:20.420 [2024-07-25 00:16:16.230185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:20.420 [2024-07-25 00:16:16.230210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:32:20.420 [2024-07-25 00:16:16.230224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:20.420 [2024-07-25 00:16:16.232740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:20.420 [2024-07-25 00:16:16.232798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:20.420 [2024-07-25 00:16:16.232924] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:20.420 [2024-07-25 00:16:16.232999] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:20.420 pt1 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.420 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.678 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:20.678 "name": "raid_bdev1", 00:32:20.678 "uuid": "f38f61de-c668-4a34-a571-b0c313425fc6", 00:32:20.678 "strip_size_kb": 0, 00:32:20.678 "state": "configuring", 00:32:20.678 "raid_level": "raid1", 00:32:20.678 "superblock": true, 00:32:20.678 "num_base_bdevs": 2, 00:32:20.679 "num_base_bdevs_discovered": 1, 00:32:20.679 "num_base_bdevs_operational": 2, 00:32:20.679 "base_bdevs_list": [ 00:32:20.679 { 00:32:20.679 "name": "pt1", 00:32:20.679 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:20.679 "is_configured": true, 00:32:20.679 "data_offset": 256, 00:32:20.679 "data_size": 7936 00:32:20.679 }, 00:32:20.679 { 00:32:20.679 "name": null, 00:32:20.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:20.679 "is_configured": false, 
00:32:20.679 "data_offset": 256, 00:32:20.679 "data_size": 7936 00:32:20.679 } 00:32:20.679 ] 00:32:20.679 }' 00:32:20.679 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:20.679 00:16:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:20.937 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:32:20.937 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:32:20.937 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:32:20.937 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:21.196 [2024-07-25 00:16:16.946203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:21.196 [2024-07-25 00:16:16.946269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:21.196 [2024-07-25 00:16:16.946301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:32:21.196 [2024-07-25 00:16:16.946316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:21.196 [2024-07-25 00:16:16.946734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:21.196 [2024-07-25 00:16:16.946759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:21.196 [2024-07-25 00:16:16.946895] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:21.196 [2024-07-25 00:16:16.946956] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:21.196 [2024-07-25 00:16:16.947087] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:32:21.196 [2024-07-25 00:16:16.947107] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:21.196 [2024-07-25 00:16:16.947236] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:32:21.196 [2024-07-25 00:16:16.947561] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:32:21.196 [2024-07-25 00:16:16.947582] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:32:21.196 [2024-07-25 00:16:16.947730] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:21.196 pt2 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:21.196 00:16:16 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.196 00:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.455 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:21.455 "name": "raid_bdev1", 00:32:21.455 "uuid": "f38f61de-c668-4a34-a571-b0c313425fc6", 00:32:21.455 "strip_size_kb": 0, 00:32:21.455 "state": "online", 00:32:21.455 "raid_level": "raid1", 00:32:21.455 "superblock": true, 00:32:21.455 "num_base_bdevs": 2, 00:32:21.455 "num_base_bdevs_discovered": 2, 00:32:21.455 "num_base_bdevs_operational": 2, 00:32:21.455 "base_bdevs_list": [ 00:32:21.455 { 00:32:21.455 "name": "pt1", 00:32:21.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:21.455 "is_configured": true, 00:32:21.455 "data_offset": 256, 00:32:21.455 "data_size": 7936 00:32:21.455 }, 00:32:21.455 { 00:32:21.455 "name": "pt2", 00:32:21.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:21.455 "is_configured": true, 00:32:21.455 "data_offset": 256, 00:32:21.455 "data_size": 7936 00:32:21.455 } 00:32:21.455 ] 00:32:21.455 }' 00:32:21.455 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:21.455 00:16:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:21.714 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:32:21.714 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:21.714 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:21.714 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:21.714 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:21.714 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:32:21.714 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:21.714 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:21.974 [2024-07-25 00:16:17.658648] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:21.974 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:21.974 "name": "raid_bdev1", 00:32:21.974 "aliases": [ 00:32:21.974 "f38f61de-c668-4a34-a571-b0c313425fc6" 00:32:21.974 ], 00:32:21.974 "product_name": "Raid Volume", 00:32:21.974 "block_size": 4096, 00:32:21.974 "num_blocks": 7936, 00:32:21.974 "uuid": "f38f61de-c668-4a34-a571-b0c313425fc6", 00:32:21.974 "assigned_rate_limits": { 00:32:21.974 "rw_ios_per_sec": 0, 00:32:21.974 "rw_mbytes_per_sec": 0, 00:32:21.974 "r_mbytes_per_sec": 0, 00:32:21.974 "w_mbytes_per_sec": 0 00:32:21.974 }, 
00:32:21.974 "claimed": false, 00:32:21.974 "zoned": false, 00:32:21.974 "supported_io_types": { 00:32:21.974 "read": true, 00:32:21.974 "write": true, 00:32:21.974 "unmap": false, 00:32:21.974 "flush": false, 00:32:21.974 "reset": true, 00:32:21.974 "nvme_admin": false, 00:32:21.974 "nvme_io": false, 00:32:21.974 "nvme_io_md": false, 00:32:21.974 "write_zeroes": true, 00:32:21.974 "zcopy": false, 00:32:21.974 "get_zone_info": false, 00:32:21.974 "zone_management": false, 00:32:21.974 "zone_append": false, 00:32:21.974 "compare": false, 00:32:21.974 "compare_and_write": false, 00:32:21.974 "abort": false, 00:32:21.974 "seek_hole": false, 00:32:21.974 "seek_data": false, 00:32:21.974 "copy": false, 00:32:21.974 "nvme_iov_md": false 00:32:21.974 }, 00:32:21.974 "memory_domains": [ 00:32:21.974 { 00:32:21.974 "dma_device_id": "system", 00:32:21.974 "dma_device_type": 1 00:32:21.974 }, 00:32:21.974 { 00:32:21.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:21.974 "dma_device_type": 2 00:32:21.974 }, 00:32:21.974 { 00:32:21.974 "dma_device_id": "system", 00:32:21.974 "dma_device_type": 1 00:32:21.974 }, 00:32:21.974 { 00:32:21.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:21.974 "dma_device_type": 2 00:32:21.974 } 00:32:21.974 ], 00:32:21.974 "driver_specific": { 00:32:21.974 "raid": { 00:32:21.974 "uuid": "f38f61de-c668-4a34-a571-b0c313425fc6", 00:32:21.974 "strip_size_kb": 0, 00:32:21.974 "state": "online", 00:32:21.974 "raid_level": "raid1", 00:32:21.974 "superblock": true, 00:32:21.974 "num_base_bdevs": 2, 00:32:21.974 "num_base_bdevs_discovered": 2, 00:32:21.974 "num_base_bdevs_operational": 2, 00:32:21.974 "base_bdevs_list": [ 00:32:21.974 { 00:32:21.974 "name": "pt1", 00:32:21.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:21.974 "is_configured": true, 00:32:21.974 "data_offset": 256, 00:32:21.974 "data_size": 7936 00:32:21.974 }, 00:32:21.974 { 00:32:21.974 "name": "pt2", 00:32:21.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:21.974 "is_configured": true, 00:32:21.974 "data_offset": 256, 00:32:21.974 "data_size": 7936 00:32:21.974 } 00:32:21.974 ] 00:32:21.974 } 00:32:21.974 } 00:32:21.974 }' 00:32:21.974 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:21.974 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:21.974 pt2' 00:32:21.974 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:21.974 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:21.974 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:22.233 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:22.233 "name": "pt1", 00:32:22.233 "aliases": [ 00:32:22.233 "00000000-0000-0000-0000-000000000001" 00:32:22.233 ], 00:32:22.233 "product_name": "passthru", 00:32:22.233 "block_size": 4096, 00:32:22.233 "num_blocks": 8192, 00:32:22.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:22.233 "assigned_rate_limits": { 00:32:22.233 "rw_ios_per_sec": 0, 00:32:22.233 "rw_mbytes_per_sec": 0, 00:32:22.233 "r_mbytes_per_sec": 0, 00:32:22.233 "w_mbytes_per_sec": 0 00:32:22.233 }, 00:32:22.233 "claimed": true, 00:32:22.233 "claim_type": "exclusive_write", 00:32:22.233 "zoned": false, 
00:32:22.233 "supported_io_types": { 00:32:22.233 "read": true, 00:32:22.233 "write": true, 00:32:22.233 "unmap": true, 00:32:22.233 "flush": true, 00:32:22.233 "reset": true, 00:32:22.233 "nvme_admin": false, 00:32:22.233 "nvme_io": false, 00:32:22.233 "nvme_io_md": false, 00:32:22.233 "write_zeroes": true, 00:32:22.233 "zcopy": true, 00:32:22.233 "get_zone_info": false, 00:32:22.233 "zone_management": false, 00:32:22.233 "zone_append": false, 00:32:22.233 "compare": false, 00:32:22.233 "compare_and_write": false, 00:32:22.233 "abort": true, 00:32:22.233 "seek_hole": false, 00:32:22.233 "seek_data": false, 00:32:22.233 "copy": true, 00:32:22.233 "nvme_iov_md": false 00:32:22.233 }, 00:32:22.233 "memory_domains": [ 00:32:22.233 { 00:32:22.233 "dma_device_id": "system", 00:32:22.233 "dma_device_type": 1 00:32:22.233 }, 00:32:22.233 { 00:32:22.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.234 "dma_device_type": 2 00:32:22.234 } 00:32:22.234 ], 00:32:22.234 "driver_specific": { 00:32:22.234 "passthru": { 00:32:22.234 "name": "pt1", 00:32:22.234 "base_bdev_name": "malloc1" 00:32:22.234 } 00:32:22.234 } 00:32:22.234 }' 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:22.234 00:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:22.493 "name": "pt2", 00:32:22.493 "aliases": [ 00:32:22.493 "00000000-0000-0000-0000-000000000002" 00:32:22.493 ], 00:32:22.493 "product_name": "passthru", 00:32:22.493 "block_size": 4096, 00:32:22.493 "num_blocks": 8192, 00:32:22.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:22.493 "assigned_rate_limits": { 00:32:22.493 "rw_ios_per_sec": 0, 00:32:22.493 "rw_mbytes_per_sec": 0, 00:32:22.493 "r_mbytes_per_sec": 0, 00:32:22.493 "w_mbytes_per_sec": 0 00:32:22.493 }, 00:32:22.493 "claimed": true, 00:32:22.493 "claim_type": "exclusive_write", 00:32:22.493 "zoned": false, 00:32:22.493 "supported_io_types": { 00:32:22.493 "read": true, 00:32:22.493 "write": true, 00:32:22.493 
"unmap": true, 00:32:22.493 "flush": true, 00:32:22.493 "reset": true, 00:32:22.493 "nvme_admin": false, 00:32:22.493 "nvme_io": false, 00:32:22.493 "nvme_io_md": false, 00:32:22.493 "write_zeroes": true, 00:32:22.493 "zcopy": true, 00:32:22.493 "get_zone_info": false, 00:32:22.493 "zone_management": false, 00:32:22.493 "zone_append": false, 00:32:22.493 "compare": false, 00:32:22.493 "compare_and_write": false, 00:32:22.493 "abort": true, 00:32:22.493 "seek_hole": false, 00:32:22.493 "seek_data": false, 00:32:22.493 "copy": true, 00:32:22.493 "nvme_iov_md": false 00:32:22.493 }, 00:32:22.493 "memory_domains": [ 00:32:22.493 { 00:32:22.493 "dma_device_id": "system", 00:32:22.493 "dma_device_type": 1 00:32:22.493 }, 00:32:22.493 { 00:32:22.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.493 "dma_device_type": 2 00:32:22.493 } 00:32:22.493 ], 00:32:22.493 "driver_specific": { 00:32:22.493 "passthru": { 00:32:22.493 "name": "pt2", 00:32:22.493 "base_bdev_name": "malloc2" 00:32:22.493 } 00:32:22.493 } 00:32:22.493 }' 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:22.493 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:32:22.752 [2024-07-25 00:16:18.454802] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:22.752 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # '[' f38f61de-c668-4a34-a571-b0c313425fc6 '!=' f38f61de-c668-4a34-a571-b0c313425fc6 ']' 00:32:22.752 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:32:22.752 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:22.752 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:32:22.752 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:23.011 [2024-07-25 00:16:18.638689] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.011 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:23.011 "name": "raid_bdev1", 00:32:23.011 "uuid": "f38f61de-c668-4a34-a571-b0c313425fc6", 00:32:23.011 "strip_size_kb": 0, 00:32:23.011 "state": "online", 00:32:23.011 "raid_level": "raid1", 00:32:23.011 "superblock": true, 00:32:23.011 "num_base_bdevs": 2, 00:32:23.011 "num_base_bdevs_discovered": 1, 00:32:23.011 "num_base_bdevs_operational": 1, 00:32:23.011 "base_bdevs_list": [ 00:32:23.011 { 00:32:23.011 "name": null, 00:32:23.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.012 "is_configured": false, 00:32:23.012 "data_offset": 256, 00:32:23.012 "data_size": 7936 00:32:23.012 }, 00:32:23.012 { 00:32:23.012 "name": "pt2", 00:32:23.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:23.012 "is_configured": true, 00:32:23.012 "data_offset": 256, 00:32:23.012 "data_size": 7936 00:32:23.012 } 00:32:23.012 ] 00:32:23.012 }' 00:32:23.012 00:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:23.012 00:16:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:23.581 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:23.581 [2024-07-25 00:16:19.418826] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:23.581 [2024-07-25 00:16:19.418857] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:23.581 [2024-07-25 00:16:19.418942] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:23.581 [2024-07-25 00:16:19.418995] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:23.581 [2024-07-25 00:16:19.419011] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:32:23.581 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.581 00:16:19 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:32:23.840 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:32:23.840 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:32:23.840 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:32:23.840 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:32:23.840 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:24.099 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:24.099 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:32:24.099 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:32:24.099 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:32:24.099 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@534 -- # i=1 00:32:24.099 00:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:24.358 [2024-07-25 00:16:19.990910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:24.358 [2024-07-25 00:16:19.990968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:24.358 [2024-07-25 00:16:19.990988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:32:24.358 [2024-07-25 00:16:19.991002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:24.358 [2024-07-25 00:16:19.993106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:24.358 [2024-07-25 00:16:19.993142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:24.358 [2024-07-25 00:16:19.993227] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:24.358 [2024-07-25 00:16:19.993278] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:24.358 [2024-07-25 00:16:19.993394] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80 00:32:24.358 [2024-07-25 00:16:19.993412] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:24.358 [2024-07-25 00:16:19.993493] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:32:24.358 [2024-07-25 00:16:19.993794] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80 00:32:24.358 [2024-07-25 00:16:19.993824] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80 00:32:24.358 [2024-07-25 00:16:19.993977] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:24.358 pt2 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:24.358 
00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:24.358 "name": "raid_bdev1", 00:32:24.358 "uuid": "f38f61de-c668-4a34-a571-b0c313425fc6", 00:32:24.358 "strip_size_kb": 0, 00:32:24.358 "state": "online", 00:32:24.358 "raid_level": "raid1", 00:32:24.358 "superblock": true, 00:32:24.358 "num_base_bdevs": 2, 00:32:24.358 "num_base_bdevs_discovered": 1, 00:32:24.358 "num_base_bdevs_operational": 1, 00:32:24.358 "base_bdevs_list": [ 00:32:24.358 { 00:32:24.358 "name": null, 00:32:24.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.358 "is_configured": false, 00:32:24.358 "data_offset": 256, 00:32:24.358 "data_size": 7936 00:32:24.358 }, 00:32:24.358 { 00:32:24.358 "name": "pt2", 00:32:24.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:24.358 "is_configured": true, 00:32:24.358 "data_offset": 256, 00:32:24.358 "data_size": 7936 00:32:24.358 } 00:32:24.358 ] 00:32:24.358 }' 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:24.358 00:16:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:24.635 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:24.901 [2024-07-25 00:16:20.663116] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:24.901 [2024-07-25 00:16:20.663152] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:24.901 [2024-07-25 00:16:20.663217] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:24.902 [2024-07-25 00:16:20.663272] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:24.902 [2024-07-25 00:16:20.663285] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline 00:32:24.902 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.902 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:32:25.160 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:32:25.160 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@543 -- # '[' -n 
'' ']' 00:32:25.160 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:32:25.160 00:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:25.419 [2024-07-25 00:16:21.111178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:25.419 [2024-07-25 00:16:21.111237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:25.419 [2024-07-25 00:16:21.111262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:32:25.419 [2024-07-25 00:16:21.111282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:25.419 [2024-07-25 00:16:21.113387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:25.419 [2024-07-25 00:16:21.113420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:25.419 [2024-07-25 00:16:21.113507] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:25.419 [2024-07-25 00:16:21.113552] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:25.419 [2024-07-25 00:16:21.113707] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:25.419 [2024-07-25 00:16:21.113723] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:25.419 [2024-07-25 00:16:21.113740] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state configuring 00:32:25.419 [2024-07-25 00:16:21.113841] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:25.419 [2024-07-25 00:16:21.113952] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:32:25.419 [2024-07-25 00:16:21.113965] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:25.419 [2024-07-25 00:16:21.114049] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:32:25.419 [2024-07-25 00:16:21.114385] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:32:25.419 [2024-07-25 00:16:21.114403] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:32:25.419 [2024-07-25 00:16:21.114529] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:25.419 pt1 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:25.420 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.679 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:25.679 "name": "raid_bdev1", 00:32:25.679 "uuid": "f38f61de-c668-4a34-a571-b0c313425fc6", 00:32:25.679 "strip_size_kb": 0, 00:32:25.679 "state": "online", 00:32:25.679 "raid_level": "raid1", 00:32:25.679 "superblock": true, 00:32:25.679 "num_base_bdevs": 2, 00:32:25.679 "num_base_bdevs_discovered": 1, 00:32:25.679 "num_base_bdevs_operational": 1, 00:32:25.679 "base_bdevs_list": [ 00:32:25.679 { 00:32:25.679 "name": null, 00:32:25.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.679 "is_configured": false, 00:32:25.679 "data_offset": 256, 00:32:25.679 "data_size": 7936 00:32:25.679 }, 00:32:25.679 { 00:32:25.679 "name": "pt2", 00:32:25.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:25.679 "is_configured": true, 00:32:25.679 "data_offset": 256, 00:32:25.679 "data_size": 7936 00:32:25.679 } 00:32:25.679 ] 00:32:25.679 }' 00:32:25.679 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:25.679 00:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:25.938 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:25.938 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:32:26.197 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:32:26.197 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:26.197 00:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:32:26.197 [2024-07-25 00:16:22.003643] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # '[' f38f61de-c668-4a34-a571-b0c313425fc6 '!=' f38f61de-c668-4a34-a571-b0c313425fc6 ']' 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@578 -- # killprocess 110431 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 110431 ']' 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 110431 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110431 00:32:26.197 killing process 
with pid 110431 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110431' 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 110431 00:32:26.197 00:16:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 110431 00:32:26.197 [2024-07-25 00:16:22.048443] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:26.197 [2024-07-25 00:16:22.048528] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:26.197 [2024-07-25 00:16:22.048577] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:26.197 [2024-07-25 00:16:22.048594] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:32:26.456 [2024-07-25 00:16:22.175078] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:27.393 ************************************ 00:32:27.393 END TEST raid_superblock_test_4k 00:32:27.393 ************************************ 00:32:27.393 00:16:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@580 -- # return 0 00:32:27.393 00:32:27.393 real 0m12.284s 00:32:27.393 user 0m20.936s 00:32:27.393 sys 0m1.951s 00:32:27.393 00:16:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:27.393 00:16:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:27.393 00:16:23 bdev_raid -- bdev/bdev_raid.sh@980 -- # '[' true = true ']' 00:32:27.394 00:16:23 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:32:27.394 00:16:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:27.394 00:16:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:27.394 00:16:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:27.394 ************************************ 00:32:27.394 START TEST raid_rebuild_test_sb_4k 00:32:27.394 ************************************ 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # local verify=true 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # local strip_size 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # local create_arg 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@594 -- # local data_offset 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # raid_pid=110891 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # waitforlisten 110891 /var/tmp/spdk-raid.sock 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 110891 ']' 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:27.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.394 00:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:27.394 [2024-07-25 00:16:23.181279] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:32:27.394 [2024-07-25 00:16:23.181577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110891 ] 00:32:27.394 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:27.394 Zero copy mechanism will not be used. 
00:32:27.652 [2024-07-25 00:16:23.337832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.652 [2024-07-25 00:16:23.486318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.909 [2024-07-25 00:16:23.628505] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:28.476 00:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:28.476 00:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:32:28.476 00:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:28.476 00:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:32:28.476 BaseBdev1_malloc 00:32:28.476 00:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:28.734 [2024-07-25 00:16:24.492924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:28.734 [2024-07-25 00:16:24.493160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:28.734 [2024-07-25 00:16:24.493229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:32:28.734 [2024-07-25 00:16:24.493448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:28.734 [2024-07-25 00:16:24.495607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:28.734 [2024-07-25 00:16:24.495773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:28.734 BaseBdev1 00:32:28.734 00:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:28.734 00:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:32:28.993 BaseBdev2_malloc 00:32:28.993 00:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:29.252 [2024-07-25 00:16:24.916164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:29.252 [2024-07-25 00:16:24.916381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:29.252 [2024-07-25 00:16:24.916419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:32:29.252 [2024-07-25 00:16:24.916452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:29.252 [2024-07-25 00:16:24.918592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:29.252 [2024-07-25 00:16:24.918635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:29.252 BaseBdev2 00:32:29.252 00:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:32:29.511 spare_malloc 00:32:29.511 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:29.511 spare_delay 00:32:29.511 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:29.770 [2024-07-25 00:16:25.493555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:29.770 [2024-07-25 00:16:25.493613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:29.770 [2024-07-25 00:16:25.493639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:32:29.770 [2024-07-25 00:16:25.493653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:29.770 [2024-07-25 00:16:25.495670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:29.770 [2024-07-25 00:16:25.495712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:29.770 spare 00:32:29.770 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:32:30.030 [2024-07-25 00:16:25.685653] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:30.030 [2024-07-25 00:16:25.687621] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:30.030 [2024-07-25 00:16:25.687863] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:32:30.030 [2024-07-25 00:16:25.687900] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:30.030 [2024-07-25 00:16:25.688026] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:32:30.030 [2024-07-25 00:16:25.688482] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:32:30.030 [2024-07-25 00:16:25.688498] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:32:30.030 [2024-07-25 00:16:25.688662] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.030 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.289 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:30.289 "name": "raid_bdev1", 00:32:30.289 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:30.289 "strip_size_kb": 0, 00:32:30.289 "state": "online", 00:32:30.289 "raid_level": "raid1", 00:32:30.289 "superblock": true, 00:32:30.289 "num_base_bdevs": 2, 00:32:30.289 "num_base_bdevs_discovered": 2, 00:32:30.289 "num_base_bdevs_operational": 2, 00:32:30.289 "base_bdevs_list": [ 00:32:30.289 { 00:32:30.289 "name": "BaseBdev1", 00:32:30.289 "uuid": "469ac3ec-22c9-535a-9ffe-f803a4ceb7ed", 00:32:30.289 "is_configured": true, 00:32:30.289 "data_offset": 256, 00:32:30.289 "data_size": 7936 00:32:30.289 }, 00:32:30.289 { 00:32:30.289 "name": "BaseBdev2", 00:32:30.289 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:30.289 "is_configured": true, 00:32:30.289 "data_offset": 256, 00:32:30.289 "data_size": 7936 00:32:30.289 } 00:32:30.289 ] 00:32:30.289 }' 00:32:30.289 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:30.289 00:16:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:30.548 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:30.548 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:32:30.807 [2024-07-25 00:16:26.450019] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:30.807 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:31.066 [2024-07-25 00:16:26.897962] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:32:31.066 /dev/nbd0 00:32:31.066 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:31.066 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:31.066 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:31.066 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:32:31.066 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:31.066 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:31.067 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:31.067 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:32:31.067 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:31.067 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:31.067 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:31.067 1+0 records in 00:32:31.067 1+0 records out 00:32:31.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306649 s, 13.4 MB/s 00:32:31.067 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:31.326 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:32:31.326 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:31.326 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:31.326 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:32:31.326 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:31.326 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:31.326 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:32:31.326 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:32:31.326 00:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:32:31.893 7936+0 records in 00:32:31.893 7936+0 records out 00:32:31.893 32505856 bytes (33 MB, 31 MiB) copied, 0.73665 s, 44.1 MB/s 00:32:31.893 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:31.893 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:31.893 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:31.893 00:16:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:31.893 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:32:31.893 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:31.893 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:32.150 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:32.150 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:32.150 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:32.150 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:32.150 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:32.150 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:32.150 [2024-07-25 00:16:27.888954] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:32.150 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:32.150 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:32.150 00:16:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:32.409 [2024-07-25 00:16:28.057055] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:32.409 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:32.667 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:32.667 "name": "raid_bdev1", 00:32:32.667 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:32.668 "strip_size_kb": 0, 00:32:32.668 "state": "online", 00:32:32.668 "raid_level": "raid1", 00:32:32.668 "superblock": true, 00:32:32.668 "num_base_bdevs": 2, 00:32:32.668 "num_base_bdevs_discovered": 
1, 00:32:32.668 "num_base_bdevs_operational": 1, 00:32:32.668 "base_bdevs_list": [ 00:32:32.668 { 00:32:32.668 "name": null, 00:32:32.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.668 "is_configured": false, 00:32:32.668 "data_offset": 256, 00:32:32.668 "data_size": 7936 00:32:32.668 }, 00:32:32.668 { 00:32:32.668 "name": "BaseBdev2", 00:32:32.668 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:32.668 "is_configured": true, 00:32:32.668 "data_offset": 256, 00:32:32.668 "data_size": 7936 00:32:32.668 } 00:32:32.668 ] 00:32:32.668 }' 00:32:32.668 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:32.668 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:32.926 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:33.184 [2024-07-25 00:16:28.829357] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:33.184 [2024-07-25 00:16:28.840648] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00019fe30 00:32:33.184 [2024-07-25 00:16:28.842492] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:33.184 00:16:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:34.119 00:16:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:34.119 00:16:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:34.119 00:16:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:34.119 00:16:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:34.119 00:16:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:34.119 00:16:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.119 00:16:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.378 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:34.378 "name": "raid_bdev1", 00:32:34.378 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:34.378 "strip_size_kb": 0, 00:32:34.378 "state": "online", 00:32:34.378 "raid_level": "raid1", 00:32:34.378 "superblock": true, 00:32:34.378 "num_base_bdevs": 2, 00:32:34.378 "num_base_bdevs_discovered": 2, 00:32:34.378 "num_base_bdevs_operational": 2, 00:32:34.378 "process": { 00:32:34.378 "type": "rebuild", 00:32:34.378 "target": "spare", 00:32:34.378 "progress": { 00:32:34.378 "blocks": 3072, 00:32:34.378 "percent": 38 00:32:34.378 } 00:32:34.378 }, 00:32:34.378 "base_bdevs_list": [ 00:32:34.378 { 00:32:34.378 "name": "spare", 00:32:34.378 "uuid": "69f8ac4d-72e9-5ebe-9026-9248638df7f0", 00:32:34.378 "is_configured": true, 00:32:34.378 "data_offset": 256, 00:32:34.378 "data_size": 7936 00:32:34.378 }, 00:32:34.378 { 00:32:34.378 "name": "BaseBdev2", 00:32:34.378 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:34.378 "is_configured": true, 00:32:34.378 "data_offset": 256, 00:32:34.378 "data_size": 7936 00:32:34.378 } 00:32:34.378 ] 00:32:34.378 }' 00:32:34.378 00:16:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:34.378 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:34.378 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:34.378 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:34.378 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:34.637 [2024-07-25 00:16:30.344662] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:34.637 [2024-07-25 00:16:30.349046] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:34.637 [2024-07-25 00:16:30.349141] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:34.637 [2024-07-25 00:16:30.349162] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:34.637 [2024-07-25 00:16:30.349174] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.637 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.896 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:34.896 "name": "raid_bdev1", 00:32:34.896 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:34.896 "strip_size_kb": 0, 00:32:34.896 "state": "online", 00:32:34.896 "raid_level": "raid1", 00:32:34.896 "superblock": true, 00:32:34.896 "num_base_bdevs": 2, 00:32:34.896 "num_base_bdevs_discovered": 1, 00:32:34.896 "num_base_bdevs_operational": 1, 00:32:34.896 "base_bdevs_list": [ 00:32:34.896 { 00:32:34.896 "name": null, 00:32:34.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.896 "is_configured": false, 00:32:34.896 "data_offset": 256, 00:32:34.896 "data_size": 7936 00:32:34.896 }, 00:32:34.896 { 00:32:34.896 "name": "BaseBdev2", 00:32:34.896 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:34.896 
"is_configured": true, 00:32:34.896 "data_offset": 256, 00:32:34.896 "data_size": 7936 00:32:34.896 } 00:32:34.896 ] 00:32:34.896 }' 00:32:34.896 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:34.896 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:35.155 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:35.155 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:35.155 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:35.155 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:35.155 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:35.155 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.155 00:16:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:35.414 00:16:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:35.414 "name": "raid_bdev1", 00:32:35.414 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:35.414 "strip_size_kb": 0, 00:32:35.414 "state": "online", 00:32:35.414 "raid_level": "raid1", 00:32:35.414 "superblock": true, 00:32:35.414 "num_base_bdevs": 2, 00:32:35.414 "num_base_bdevs_discovered": 1, 00:32:35.414 "num_base_bdevs_operational": 1, 00:32:35.414 "base_bdevs_list": [ 00:32:35.414 { 00:32:35.414 "name": null, 00:32:35.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:35.414 "is_configured": false, 00:32:35.414 "data_offset": 256, 00:32:35.414 "data_size": 7936 00:32:35.414 }, 00:32:35.414 { 00:32:35.414 "name": "BaseBdev2", 00:32:35.414 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:35.414 "is_configured": true, 00:32:35.414 "data_offset": 256, 00:32:35.414 "data_size": 7936 00:32:35.414 } 00:32:35.414 ] 00:32:35.414 }' 00:32:35.414 00:16:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:35.414 00:16:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:35.414 00:16:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:35.414 00:16:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:35.414 00:16:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:35.673 [2024-07-25 00:16:31.373174] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:35.673 [2024-07-25 00:16:31.383665] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00019ff00 00:32:35.673 [2024-07-25 00:16:31.385544] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:35.673 00:16:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@678 -- # sleep 1 00:32:36.610 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:36.610 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:32:36.610 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:36.610 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:36.610 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:36.610 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.610 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:36.869 "name": "raid_bdev1", 00:32:36.869 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:36.869 "strip_size_kb": 0, 00:32:36.869 "state": "online", 00:32:36.869 "raid_level": "raid1", 00:32:36.869 "superblock": true, 00:32:36.869 "num_base_bdevs": 2, 00:32:36.869 "num_base_bdevs_discovered": 2, 00:32:36.869 "num_base_bdevs_operational": 2, 00:32:36.869 "process": { 00:32:36.869 "type": "rebuild", 00:32:36.869 "target": "spare", 00:32:36.869 "progress": { 00:32:36.869 "blocks": 3072, 00:32:36.869 "percent": 38 00:32:36.869 } 00:32:36.869 }, 00:32:36.869 "base_bdevs_list": [ 00:32:36.869 { 00:32:36.869 "name": "spare", 00:32:36.869 "uuid": "69f8ac4d-72e9-5ebe-9026-9248638df7f0", 00:32:36.869 "is_configured": true, 00:32:36.869 "data_offset": 256, 00:32:36.869 "data_size": 7936 00:32:36.869 }, 00:32:36.869 { 00:32:36.869 "name": "BaseBdev2", 00:32:36.869 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:36.869 "is_configured": true, 00:32:36.869 "data_offset": 256, 00:32:36.869 "data_size": 7936 00:32:36.869 } 00:32:36.869 ] 00:32:36.869 }' 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:32:36.869 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # local timeout=1182 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:36.869 00:16:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.869 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:37.128 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:37.128 "name": "raid_bdev1", 00:32:37.128 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:37.128 "strip_size_kb": 0, 00:32:37.128 "state": "online", 00:32:37.128 "raid_level": "raid1", 00:32:37.128 "superblock": true, 00:32:37.128 "num_base_bdevs": 2, 00:32:37.128 "num_base_bdevs_discovered": 2, 00:32:37.128 "num_base_bdevs_operational": 2, 00:32:37.128 "process": { 00:32:37.128 "type": "rebuild", 00:32:37.128 "target": "spare", 00:32:37.128 "progress": { 00:32:37.128 "blocks": 3584, 00:32:37.128 "percent": 45 00:32:37.128 } 00:32:37.128 }, 00:32:37.128 "base_bdevs_list": [ 00:32:37.128 { 00:32:37.128 "name": "spare", 00:32:37.128 "uuid": "69f8ac4d-72e9-5ebe-9026-9248638df7f0", 00:32:37.128 "is_configured": true, 00:32:37.128 "data_offset": 256, 00:32:37.128 "data_size": 7936 00:32:37.128 }, 00:32:37.128 { 00:32:37.128 "name": "BaseBdev2", 00:32:37.128 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:37.128 "is_configured": true, 00:32:37.128 "data_offset": 256, 00:32:37.128 "data_size": 7936 00:32:37.128 } 00:32:37.128 ] 00:32:37.128 }' 00:32:37.128 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:37.128 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:37.128 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:37.128 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:37.128 00:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:38.077 00:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:38.077 00:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:38.077 00:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:38.077 00:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:38.077 00:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:38.077 00:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:38.077 00:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.077 00:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.350 00:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:38.350 "name": "raid_bdev1", 00:32:38.350 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:38.350 "strip_size_kb": 0, 00:32:38.350 "state": "online", 00:32:38.350 "raid_level": "raid1", 00:32:38.350 
"superblock": true, 00:32:38.350 "num_base_bdevs": 2, 00:32:38.350 "num_base_bdevs_discovered": 2, 00:32:38.350 "num_base_bdevs_operational": 2, 00:32:38.350 "process": { 00:32:38.350 "type": "rebuild", 00:32:38.350 "target": "spare", 00:32:38.350 "progress": { 00:32:38.350 "blocks": 6912, 00:32:38.350 "percent": 87 00:32:38.350 } 00:32:38.350 }, 00:32:38.350 "base_bdevs_list": [ 00:32:38.350 { 00:32:38.350 "name": "spare", 00:32:38.350 "uuid": "69f8ac4d-72e9-5ebe-9026-9248638df7f0", 00:32:38.350 "is_configured": true, 00:32:38.350 "data_offset": 256, 00:32:38.350 "data_size": 7936 00:32:38.350 }, 00:32:38.350 { 00:32:38.350 "name": "BaseBdev2", 00:32:38.350 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:38.350 "is_configured": true, 00:32:38.350 "data_offset": 256, 00:32:38.350 "data_size": 7936 00:32:38.350 } 00:32:38.350 ] 00:32:38.350 }' 00:32:38.350 00:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:38.350 00:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:38.350 00:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:38.350 00:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:38.350 00:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:38.927 [2024-07-25 00:16:34.498417] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:38.927 [2024-07-25 00:16:34.498491] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:38.927 [2024-07-25 00:16:34.498610] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:39.494 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:39.494 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:39.494 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:39.494 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:39.494 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:39.494 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:39.494 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:39.494 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:39.752 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:39.752 "name": "raid_bdev1", 00:32:39.753 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:39.753 "strip_size_kb": 0, 00:32:39.753 "state": "online", 00:32:39.753 "raid_level": "raid1", 00:32:39.753 "superblock": true, 00:32:39.753 "num_base_bdevs": 2, 00:32:39.753 "num_base_bdevs_discovered": 2, 00:32:39.753 "num_base_bdevs_operational": 2, 00:32:39.753 "base_bdevs_list": [ 00:32:39.753 { 00:32:39.753 "name": "spare", 00:32:39.753 "uuid": "69f8ac4d-72e9-5ebe-9026-9248638df7f0", 00:32:39.753 "is_configured": true, 00:32:39.753 "data_offset": 256, 00:32:39.753 "data_size": 7936 00:32:39.753 }, 00:32:39.753 { 00:32:39.753 
"name": "BaseBdev2", 00:32:39.753 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:39.753 "is_configured": true, 00:32:39.753 "data_offset": 256, 00:32:39.753 "data_size": 7936 00:32:39.753 } 00:32:39.753 ] 00:32:39.753 }' 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@724 -- # break 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:39.753 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:40.012 "name": "raid_bdev1", 00:32:40.012 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:40.012 "strip_size_kb": 0, 00:32:40.012 "state": "online", 00:32:40.012 "raid_level": "raid1", 00:32:40.012 "superblock": true, 00:32:40.012 "num_base_bdevs": 2, 00:32:40.012 "num_base_bdevs_discovered": 2, 00:32:40.012 "num_base_bdevs_operational": 2, 00:32:40.012 "base_bdevs_list": [ 00:32:40.012 { 00:32:40.012 "name": "spare", 00:32:40.012 "uuid": "69f8ac4d-72e9-5ebe-9026-9248638df7f0", 00:32:40.012 "is_configured": true, 00:32:40.012 "data_offset": 256, 00:32:40.012 "data_size": 7936 00:32:40.012 }, 00:32:40.012 { 00:32:40.012 "name": "BaseBdev2", 00:32:40.012 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:40.012 "is_configured": true, 00:32:40.012 "data_offset": 256, 00:32:40.012 "data_size": 7936 00:32:40.012 } 00:32:40.012 ] 00:32:40.012 }' 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:40.012 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.271 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:40.271 "name": "raid_bdev1", 00:32:40.271 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:40.271 "strip_size_kb": 0, 00:32:40.271 "state": "online", 00:32:40.271 "raid_level": "raid1", 00:32:40.271 "superblock": true, 00:32:40.271 "num_base_bdevs": 2, 00:32:40.271 "num_base_bdevs_discovered": 2, 00:32:40.271 "num_base_bdevs_operational": 2, 00:32:40.271 "base_bdevs_list": [ 00:32:40.271 { 00:32:40.271 "name": "spare", 00:32:40.271 "uuid": "69f8ac4d-72e9-5ebe-9026-9248638df7f0", 00:32:40.271 "is_configured": true, 00:32:40.271 "data_offset": 256, 00:32:40.271 "data_size": 7936 00:32:40.271 }, 00:32:40.271 { 00:32:40.271 "name": "BaseBdev2", 00:32:40.271 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:40.271 "is_configured": true, 00:32:40.271 "data_offset": 256, 00:32:40.271 "data_size": 7936 00:32:40.271 } 00:32:40.271 ] 00:32:40.271 }' 00:32:40.271 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:40.271 00:16:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:40.530 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:40.789 [2024-07-25 00:16:36.423140] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:40.789 [2024-07-25 00:16:36.423174] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:40.789 [2024-07-25 00:16:36.423260] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:40.789 [2024-07-25 00:16:36.423340] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:40.789 [2024-07-25 00:16:36.423355] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:32:40.789 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:40.789 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # jq length 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:41.048 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:41.306 /dev/nbd0 00:32:41.306 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:41.306 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:41.306 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:41.306 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:32:41.306 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:41.306 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:41.306 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:41.306 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:32:41.306 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:41.306 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:41.307 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:41.307 1+0 records in 00:32:41.307 1+0 records out 00:32:41.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416959 s, 9.8 MB/s 00:32:41.307 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:41.307 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:32:41.307 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:41.307 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:41.307 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:32:41.307 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:41.307 00:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:41.307 00:16:36 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:41.307 /dev/nbd1 00:32:41.565 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:41.565 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:41.565 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:41.565 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:32:41.565 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:41.565 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:41.565 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:41.565 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:32:41.565 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:41.566 1+0 records in 00:32:41.566 1+0 records out 00:32:41.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306577 s, 13.4 MB/s 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:41.566 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:41.824 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:41.824 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:41.824 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:41.824 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:41.824 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:41.824 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:41.824 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:41.824 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:41.824 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:41.824 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:42.083 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:42.083 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:42.083 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:42.083 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:42.083 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:42.083 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:42.083 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:42.083 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:42.083 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:32:42.083 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:42.342 00:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:42.342 [2024-07-25 00:16:38.141338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:42.342 [2024-07-25 00:16:38.141398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:42.342 [2024-07-25 00:16:38.141432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:32:42.342 [2024-07-25 00:16:38.141445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:42.342 [2024-07-25 00:16:38.143698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:42.342 [2024-07-25 00:16:38.143736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:42.342 [2024-07-25 00:16:38.143846] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:42.342 [2024-07-25 00:16:38.143899] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:42.342 [2024-07-25 00:16:38.144046] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:42.342 spare 00:32:42.342 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:42.342 00:16:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:42.342 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:42.342 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:42.342 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:42.342 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:42.342 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:42.342 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:42.342 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:42.342 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:42.342 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.342 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:42.601 [2024-07-25 00:16:38.244154] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:32:42.601 [2024-07-25 00:16:38.244204] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:42.601 [2024-07-25 00:16:38.244338] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001c1670 00:32:42.601 [2024-07-25 00:16:38.244710] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:32:42.601 [2024-07-25 00:16:38.244724] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:32:42.601 [2024-07-25 00:16:38.244907] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:42.601 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:42.601 "name": "raid_bdev1", 00:32:42.601 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:42.601 "strip_size_kb": 0, 00:32:42.601 "state": "online", 00:32:42.601 "raid_level": "raid1", 00:32:42.601 "superblock": true, 00:32:42.601 "num_base_bdevs": 2, 00:32:42.601 "num_base_bdevs_discovered": 2, 00:32:42.601 "num_base_bdevs_operational": 2, 00:32:42.601 "base_bdevs_list": [ 00:32:42.601 { 00:32:42.601 "name": "spare", 00:32:42.601 "uuid": "69f8ac4d-72e9-5ebe-9026-9248638df7f0", 00:32:42.601 "is_configured": true, 00:32:42.601 "data_offset": 256, 00:32:42.601 "data_size": 7936 00:32:42.601 }, 00:32:42.601 { 00:32:42.601 "name": "BaseBdev2", 00:32:42.601 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:42.601 "is_configured": true, 00:32:42.601 "data_offset": 256, 00:32:42.601 "data_size": 7936 00:32:42.601 } 00:32:42.601 ] 00:32:42.601 }' 00:32:42.601 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:42.601 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:42.859 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:42.859 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:42.859 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:42.859 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:42.859 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:42.859 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.859 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.118 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:43.118 "name": "raid_bdev1", 00:32:43.118 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:43.118 "strip_size_kb": 0, 00:32:43.118 "state": "online", 00:32:43.118 "raid_level": "raid1", 00:32:43.118 "superblock": true, 00:32:43.118 "num_base_bdevs": 2, 00:32:43.118 "num_base_bdevs_discovered": 2, 00:32:43.118 "num_base_bdevs_operational": 2, 00:32:43.118 "base_bdevs_list": [ 00:32:43.118 { 00:32:43.118 "name": "spare", 00:32:43.118 "uuid": "69f8ac4d-72e9-5ebe-9026-9248638df7f0", 00:32:43.118 "is_configured": true, 00:32:43.118 "data_offset": 256, 00:32:43.118 "data_size": 7936 00:32:43.118 }, 00:32:43.118 { 00:32:43.118 "name": "BaseBdev2", 00:32:43.118 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:43.118 "is_configured": true, 00:32:43.118 "data_offset": 256, 00:32:43.118 "data_size": 7936 00:32:43.118 } 00:32:43.118 ] 00:32:43.118 }' 00:32:43.118 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:43.118 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:43.118 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:43.118 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:43.118 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:43.118 00:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.377 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:32:43.377 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:43.636 [2024-07-25 00:16:39.353644] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:43.636 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:43.636 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:43.636 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:43.636 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:43.636 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:43.636 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:43.636 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:43.636 00:16:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:43.636 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:43.636 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:43.636 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.636 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:43.895 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:43.895 "name": "raid_bdev1", 00:32:43.895 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:43.895 "strip_size_kb": 0, 00:32:43.895 "state": "online", 00:32:43.895 "raid_level": "raid1", 00:32:43.895 "superblock": true, 00:32:43.895 "num_base_bdevs": 2, 00:32:43.895 "num_base_bdevs_discovered": 1, 00:32:43.895 "num_base_bdevs_operational": 1, 00:32:43.895 "base_bdevs_list": [ 00:32:43.895 { 00:32:43.895 "name": null, 00:32:43.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.895 "is_configured": false, 00:32:43.895 "data_offset": 256, 00:32:43.895 "data_size": 7936 00:32:43.895 }, 00:32:43.895 { 00:32:43.895 "name": "BaseBdev2", 00:32:43.895 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:43.895 "is_configured": true, 00:32:43.895 "data_offset": 256, 00:32:43.895 "data_size": 7936 00:32:43.895 } 00:32:43.895 ] 00:32:43.895 }' 00:32:43.895 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:43.895 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:44.154 00:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:44.412 [2024-07-25 00:16:40.125847] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:44.412 [2024-07-25 00:16:40.126040] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:44.412 [2024-07-25 00:16:40.126062] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
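The add-back flow traced here can be replayed by hand against the same RPC socket. A minimal sketch, assuming a running SPDK target with raid_bdev1 online and a detached base bdev named spare; the names, socket path, and RPC commands are taken from this run, with rpc.py abbreviated to its repo-relative path:

    # detach one mirror leg, then hand it back; superblock examine re-adds it and starts a rebuild
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
    # poll rebuild progress through the raid bdev's process field
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").process'
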
00:32:44.412 [2024-07-25 00:16:40.126107] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:44.412 [2024-07-25 00:16:40.136822] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001c1740 00:32:44.412 [2024-07-25 00:16:40.138685] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:44.412 00:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # sleep 1 00:32:45.347 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:45.347 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:45.347 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:45.347 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:45.347 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:45.347 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.347 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.606 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:45.606 "name": "raid_bdev1", 00:32:45.606 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:45.606 "strip_size_kb": 0, 00:32:45.606 "state": "online", 00:32:45.606 "raid_level": "raid1", 00:32:45.606 "superblock": true, 00:32:45.606 "num_base_bdevs": 2, 00:32:45.606 "num_base_bdevs_discovered": 2, 00:32:45.606 "num_base_bdevs_operational": 2, 00:32:45.606 "process": { 00:32:45.606 "type": "rebuild", 00:32:45.606 "target": "spare", 00:32:45.606 "progress": { 00:32:45.606 "blocks": 3072, 00:32:45.606 "percent": 38 00:32:45.606 } 00:32:45.606 }, 00:32:45.606 "base_bdevs_list": [ 00:32:45.606 { 00:32:45.606 "name": "spare", 00:32:45.606 "uuid": "69f8ac4d-72e9-5ebe-9026-9248638df7f0", 00:32:45.606 "is_configured": true, 00:32:45.606 "data_offset": 256, 00:32:45.606 "data_size": 7936 00:32:45.606 }, 00:32:45.606 { 00:32:45.606 "name": "BaseBdev2", 00:32:45.606 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:45.606 "is_configured": true, 00:32:45.606 "data_offset": 256, 00:32:45.606 "data_size": 7936 00:32:45.606 } 00:32:45.606 ] 00:32:45.606 }' 00:32:45.606 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:45.606 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:45.606 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:45.606 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:45.606 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:45.865 [2024-07-25 00:16:41.660950] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:46.123 [2024-07-25 00:16:41.745628] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:46.123 [2024-07-25 00:16:41.745712] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:32:46.123 [2024-07-25 00:16:41.745733] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:46.123 [2024-07-25 00:16:41.745745] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.123 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.381 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:46.381 "name": "raid_bdev1", 00:32:46.381 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:46.381 "strip_size_kb": 0, 00:32:46.381 "state": "online", 00:32:46.381 "raid_level": "raid1", 00:32:46.381 "superblock": true, 00:32:46.381 "num_base_bdevs": 2, 00:32:46.381 "num_base_bdevs_discovered": 1, 00:32:46.381 "num_base_bdevs_operational": 1, 00:32:46.381 "base_bdevs_list": [ 00:32:46.381 { 00:32:46.381 "name": null, 00:32:46.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:46.381 "is_configured": false, 00:32:46.381 "data_offset": 256, 00:32:46.381 "data_size": 7936 00:32:46.381 }, 00:32:46.381 { 00:32:46.382 "name": "BaseBdev2", 00:32:46.382 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:46.382 "is_configured": true, 00:32:46.382 "data_offset": 256, 00:32:46.382 "data_size": 7936 00:32:46.382 } 00:32:46.382 ] 00:32:46.382 }' 00:32:46.382 00:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:46.382 00:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:46.640 00:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:46.898 [2024-07-25 00:16:42.509816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:46.898 [2024-07-25 00:16:42.509908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:46.898 [2024-07-25 00:16:42.509939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:32:46.898 [2024-07-25 00:16:42.509955] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:46.898 [2024-07-25 00:16:42.510533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:46.898 [2024-07-25 00:16:42.510563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:46.898 [2024-07-25 00:16:42.510664] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:46.898 [2024-07-25 00:16:42.510683] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:46.898 [2024-07-25 00:16:42.510696] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:46.898 [2024-07-25 00:16:42.510721] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:46.898 spare 00:32:46.898 [2024-07-25 00:16:42.521852] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001c1810 00:32:46.898 [2024-07-25 00:16:42.523640] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:46.898 00:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # sleep 1 00:32:47.833 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:47.833 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:47.833 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:47.833 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:47.833 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:47.833 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.833 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.090 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:48.090 "name": "raid_bdev1", 00:32:48.090 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:48.090 "strip_size_kb": 0, 00:32:48.090 "state": "online", 00:32:48.090 "raid_level": "raid1", 00:32:48.090 "superblock": true, 00:32:48.090 "num_base_bdevs": 2, 00:32:48.090 "num_base_bdevs_discovered": 2, 00:32:48.090 "num_base_bdevs_operational": 2, 00:32:48.090 "process": { 00:32:48.090 "type": "rebuild", 00:32:48.090 "target": "spare", 00:32:48.090 "progress": { 00:32:48.090 "blocks": 3072, 00:32:48.090 "percent": 38 00:32:48.090 } 00:32:48.090 }, 00:32:48.090 "base_bdevs_list": [ 00:32:48.090 { 00:32:48.090 "name": "spare", 00:32:48.090 "uuid": "69f8ac4d-72e9-5ebe-9026-9248638df7f0", 00:32:48.090 "is_configured": true, 00:32:48.090 "data_offset": 256, 00:32:48.090 "data_size": 7936 00:32:48.090 }, 00:32:48.090 { 00:32:48.090 "name": "BaseBdev2", 00:32:48.090 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:48.090 "is_configured": true, 00:32:48.090 "data_offset": 256, 00:32:48.090 "data_size": 7936 00:32:48.090 } 00:32:48.090 ] 00:32:48.090 }' 00:32:48.090 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:48.090 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
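The process assertions around this point reduce to two jq filters over a single bdev_raid_get_bdevs call. A condensed sketch using the exact filters and expected values from this run:

    # fetch the raid bdev once, then assert on its process descriptor
    info=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]]   # a rebuild is in flight
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]   # targeting the re-added leg
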
00:32:48.090 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:48.090 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:48.090 00:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:48.348 [2024-07-25 00:16:43.973729] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:48.348 [2024-07-25 00:16:44.030057] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:48.348 [2024-07-25 00:16:44.030116] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:48.348 [2024-07-25 00:16:44.030139] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:48.348 [2024-07-25 00:16:44.030149] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:48.348 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.606 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:48.606 "name": "raid_bdev1", 00:32:48.606 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:48.606 "strip_size_kb": 0, 00:32:48.606 "state": "online", 00:32:48.606 "raid_level": "raid1", 00:32:48.606 "superblock": true, 00:32:48.606 "num_base_bdevs": 2, 00:32:48.606 "num_base_bdevs_discovered": 1, 00:32:48.606 "num_base_bdevs_operational": 1, 00:32:48.606 "base_bdevs_list": [ 00:32:48.606 { 00:32:48.606 "name": null, 00:32:48.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:48.606 "is_configured": false, 00:32:48.606 "data_offset": 256, 00:32:48.606 "data_size": 7936 00:32:48.606 }, 00:32:48.606 { 00:32:48.606 "name": "BaseBdev2", 00:32:48.606 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:48.606 "is_configured": true, 00:32:48.606 "data_offset": 256, 00:32:48.606 "data_size": 7936 00:32:48.606 } 00:32:48.606 ] 00:32:48.606 }' 00:32:48.606 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:32:48.606 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:48.865 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:48.865 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:48.865 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:48.865 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:48.865 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:48.865 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:48.865 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.123 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:49.123 "name": "raid_bdev1", 00:32:49.123 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:49.123 "strip_size_kb": 0, 00:32:49.123 "state": "online", 00:32:49.123 "raid_level": "raid1", 00:32:49.123 "superblock": true, 00:32:49.123 "num_base_bdevs": 2, 00:32:49.123 "num_base_bdevs_discovered": 1, 00:32:49.123 "num_base_bdevs_operational": 1, 00:32:49.123 "base_bdevs_list": [ 00:32:49.123 { 00:32:49.123 "name": null, 00:32:49.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.123 "is_configured": false, 00:32:49.123 "data_offset": 256, 00:32:49.123 "data_size": 7936 00:32:49.123 }, 00:32:49.123 { 00:32:49.123 "name": "BaseBdev2", 00:32:49.123 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:49.123 "is_configured": true, 00:32:49.123 "data_offset": 256, 00:32:49.123 "data_size": 7936 00:32:49.123 } 00:32:49.123 ] 00:32:49.123 }' 00:32:49.123 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:49.123 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:49.123 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:49.123 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:49.123 00:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:49.381 00:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:49.639 [2024-07-25 00:16:45.329230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:49.639 [2024-07-25 00:16:45.329286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.639 [2024-07-25 00:16:45.329316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:32:49.639 [2024-07-25 00:16:45.329329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.639 [2024-07-25 00:16:45.329737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.639 [2024-07-25 00:16:45.329758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:32:49.639 [2024-07-25 00:16:45.329880] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:49.639 [2024-07-25 00:16:45.329898] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:49.639 [2024-07-25 00:16:45.329910] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:49.639 BaseBdev1 00:32:49.639 00:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@789 -- # sleep 1 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.574 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.832 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:50.832 "name": "raid_bdev1", 00:32:50.832 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:50.832 "strip_size_kb": 0, 00:32:50.832 "state": "online", 00:32:50.832 "raid_level": "raid1", 00:32:50.832 "superblock": true, 00:32:50.832 "num_base_bdevs": 2, 00:32:50.832 "num_base_bdevs_discovered": 1, 00:32:50.832 "num_base_bdevs_operational": 1, 00:32:50.832 "base_bdevs_list": [ 00:32:50.832 { 00:32:50.832 "name": null, 00:32:50.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.832 "is_configured": false, 00:32:50.832 "data_offset": 256, 00:32:50.832 "data_size": 7936 00:32:50.832 }, 00:32:50.832 { 00:32:50.832 "name": "BaseBdev2", 00:32:50.832 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:50.832 "is_configured": true, 00:32:50.832 "data_offset": 256, 00:32:50.832 "data_size": 7936 00:32:50.832 } 00:32:50.832 ] 00:32:50.832 }' 00:32:50.832 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:50.832 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:51.091 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:51.091 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:51.091 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:32:51.091 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:51.091 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:51.091 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.091 00:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:51.351 "name": "raid_bdev1", 00:32:51.351 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:51.351 "strip_size_kb": 0, 00:32:51.351 "state": "online", 00:32:51.351 "raid_level": "raid1", 00:32:51.351 "superblock": true, 00:32:51.351 "num_base_bdevs": 2, 00:32:51.351 "num_base_bdevs_discovered": 1, 00:32:51.351 "num_base_bdevs_operational": 1, 00:32:51.351 "base_bdevs_list": [ 00:32:51.351 { 00:32:51.351 "name": null, 00:32:51.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:51.351 "is_configured": false, 00:32:51.351 "data_offset": 256, 00:32:51.351 "data_size": 7936 00:32:51.351 }, 00:32:51.351 { 00:32:51.351 "name": "BaseBdev2", 00:32:51.351 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:51.351 "is_configured": true, 00:32:51.351 "data_offset": 256, 00:32:51.351 "data_size": 7936 00:32:51.351 } 00:32:51.351 ] 00:32:51.351 }' 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:51.351 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:51.610 [2024-07-25 00:16:47.369699] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:51.610 [2024-07-25 00:16:47.369893] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:51.610 [2024-07-25 00:16:47.369927] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:51.610 request: 00:32:51.610 { 00:32:51.610 "base_bdev": "BaseBdev1", 00:32:51.610 "raid_bdev": "raid_bdev1", 00:32:51.610 "method": "bdev_raid_add_base_bdev", 00:32:51.610 "req_id": 1 00:32:51.610 } 00:32:51.610 Got JSON-RPC error response 00:32:51.610 response: 00:32:51.610 { 00:32:51.610 "code": -22, 00:32:51.610 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:51.610 } 00:32:51.610 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:32:51.610 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:51.610 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:51.610 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:51.610 00:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@793 -- # sleep 1 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.547 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.806 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:52.806 "name": "raid_bdev1", 00:32:52.806 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:52.806 "strip_size_kb": 0, 00:32:52.806 "state": "online", 00:32:52.806 "raid_level": "raid1", 00:32:52.806 "superblock": true, 00:32:52.806 "num_base_bdevs": 2, 00:32:52.806 "num_base_bdevs_discovered": 1, 00:32:52.806 "num_base_bdevs_operational": 1, 00:32:52.806 
"base_bdevs_list": [ 00:32:52.806 { 00:32:52.806 "name": null, 00:32:52.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.806 "is_configured": false, 00:32:52.806 "data_offset": 256, 00:32:52.806 "data_size": 7936 00:32:52.806 }, 00:32:52.806 { 00:32:52.806 "name": "BaseBdev2", 00:32:52.806 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:52.806 "is_configured": true, 00:32:52.806 "data_offset": 256, 00:32:52.806 "data_size": 7936 00:32:52.806 } 00:32:52.806 ] 00:32:52.806 }' 00:32:52.806 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:52.806 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:53.401 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:53.401 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:53.401 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:53.401 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:53.401 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:53.401 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.401 00:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.401 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:53.401 "name": "raid_bdev1", 00:32:53.401 "uuid": "a01b64cd-fc6b-48dc-8cc5-d93022afbe6a", 00:32:53.401 "strip_size_kb": 0, 00:32:53.401 "state": "online", 00:32:53.401 "raid_level": "raid1", 00:32:53.401 "superblock": true, 00:32:53.401 "num_base_bdevs": 2, 00:32:53.401 "num_base_bdevs_discovered": 1, 00:32:53.401 "num_base_bdevs_operational": 1, 00:32:53.401 "base_bdevs_list": [ 00:32:53.401 { 00:32:53.401 "name": null, 00:32:53.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:53.401 "is_configured": false, 00:32:53.401 "data_offset": 256, 00:32:53.401 "data_size": 7936 00:32:53.401 }, 00:32:53.401 { 00:32:53.401 "name": "BaseBdev2", 00:32:53.401 "uuid": "646b2df3-4953-5987-9f0b-acbe6b33fb5d", 00:32:53.401 "is_configured": true, 00:32:53.401 "data_offset": 256, 00:32:53.401 "data_size": 7936 00:32:53.401 } 00:32:53.401 ] 00:32:53.401 }' 00:32:53.401 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:53.401 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:53.401 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:53.401 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:53.401 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@798 -- # killprocess 110891 00:32:53.401 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 110891 ']' 00:32:53.401 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 110891 00:32:53.401 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:32:53.401 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:32:53.401 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110891 00:32:53.660 killing process with pid 110891 00:32:53.660 Received shutdown signal, test time was about 60.000000 seconds 00:32:53.660 00:32:53.660 Latency(us) 00:32:53.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.660 =================================================================================================================== 00:32:53.660 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:53.660 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:53.660 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:53.660 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110891' 00:32:53.660 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 110891 00:32:53.660 [2024-07-25 00:16:49.286670] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:53.660 00:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 110891 00:32:53.660 [2024-07-25 00:16:49.286774] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:53.660 [2024-07-25 00:16:49.286868] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:53.660 [2024-07-25 00:16:49.286883] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:32:53.660 [2024-07-25 00:16:49.474069] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:54.598 00:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@800 -- # return 0 00:32:54.598 00:32:54.598 real 0m27.270s 00:32:54.598 user 0m40.352s 00:32:54.598 sys 0m3.433s 00:32:54.598 ************************************ 00:32:54.598 END TEST raid_rebuild_test_sb_4k 00:32:54.598 ************************************ 00:32:54.598 00:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:54.598 00:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:54.598 00:16:50 bdev_raid -- bdev/bdev_raid.sh@984 -- # base_malloc_params='-m 32' 00:32:54.598 00:16:50 bdev_raid -- bdev/bdev_raid.sh@985 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:32:54.598 00:16:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:54.598 00:16:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:54.598 00:16:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:54.598 ************************************ 00:32:54.598 START TEST raid_state_function_test_sb_md_separate 00:32:54.598 ************************************ 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:54.598 00:16:50 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:54.598 Process raid pid: 111684 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=111684 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 111684' 00:32:54.598 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:54.599 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 111684 /var/tmp/spdk-raid.sock 00:32:54.599 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 111684 ']' 00:32:54.599 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:54.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
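From here the md_separate variant drives the same RPC surface from a fresh bdev_svc instance. As a sketch of the launch step being waited on above (paths and flags as invoked in this run; backgrounding with & is assumed):

    # start the bare bdev service with raid debug logging on the test socket
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    # waitforlisten then blocks until the UNIX socket accepts RPCs before bdev_raid_create is issued
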
00:32:54.599 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:54.599 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:54.599 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:54.599 00:16:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:54.858 [2024-07-25 00:16:50.539507] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:32:54.858 [2024-07-25 00:16:50.539777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.117 [2024-07-25 00:16:50.734830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.117 [2024-07-25 00:16:50.924178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.376 [2024-07-25 00:16:51.065622] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:55.635 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:55.635 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:32:55.635 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:55.894 [2024-07-25 00:16:51.685725] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:55.895 [2024-07-25 00:16:51.685778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:55.895 [2024-07-25 00:16:51.685792] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:55.895 [2024-07-25 00:16:51.685838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:55.895 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:55.895 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:55.895 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:55.895 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:55.895 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:55.895 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:55.895 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:55.895 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:55.895 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:55.895 00:16:51 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:55.895 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.895 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:56.154 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:56.154 "name": "Existed_Raid", 00:32:56.154 "uuid": "88e137d9-d73e-4378-adc6-c219035a55f3", 00:32:56.154 "strip_size_kb": 0, 00:32:56.154 "state": "configuring", 00:32:56.154 "raid_level": "raid1", 00:32:56.154 "superblock": true, 00:32:56.154 "num_base_bdevs": 2, 00:32:56.154 "num_base_bdevs_discovered": 0, 00:32:56.154 "num_base_bdevs_operational": 2, 00:32:56.154 "base_bdevs_list": [ 00:32:56.154 { 00:32:56.154 "name": "BaseBdev1", 00:32:56.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.154 "is_configured": false, 00:32:56.154 "data_offset": 0, 00:32:56.154 "data_size": 0 00:32:56.154 }, 00:32:56.154 { 00:32:56.154 "name": "BaseBdev2", 00:32:56.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.154 "is_configured": false, 00:32:56.154 "data_offset": 0, 00:32:56.154 "data_size": 0 00:32:56.154 } 00:32:56.154 ] 00:32:56.154 }' 00:32:56.154 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:56.154 00:16:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:56.412 00:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:56.671 [2024-07-25 00:16:52.421785] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:56.671 [2024-07-25 00:16:52.421833] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:32:56.671 00:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:56.930 [2024-07-25 00:16:52.677889] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:56.930 [2024-07-25 00:16:52.678107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:56.930 [2024-07-25 00:16:52.678242] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:56.930 [2024-07-25 00:16:52.678297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:56.930 00:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:32:57.189 [2024-07-25 00:16:52.899063] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:57.189 BaseBdev1 00:32:57.189 00:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:57.189 00:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:57.189 00:16:52 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:57.189 00:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:32:57.189 00:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:57.189 00:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:57.189 00:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:57.448 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:57.705 [ 00:32:57.705 { 00:32:57.705 "name": "BaseBdev1", 00:32:57.705 "aliases": [ 00:32:57.705 "ab63bee7-55ce-4cb9-9252-e43c7d0b1a65" 00:32:57.705 ], 00:32:57.705 "product_name": "Malloc disk", 00:32:57.705 "block_size": 4096, 00:32:57.705 "num_blocks": 8192, 00:32:57.705 "uuid": "ab63bee7-55ce-4cb9-9252-e43c7d0b1a65", 00:32:57.705 "md_size": 32, 00:32:57.705 "md_interleave": false, 00:32:57.705 "dif_type": 0, 00:32:57.705 "assigned_rate_limits": { 00:32:57.705 "rw_ios_per_sec": 0, 00:32:57.705 "rw_mbytes_per_sec": 0, 00:32:57.705 "r_mbytes_per_sec": 0, 00:32:57.705 "w_mbytes_per_sec": 0 00:32:57.705 }, 00:32:57.705 "claimed": true, 00:32:57.705 "claim_type": "exclusive_write", 00:32:57.705 "zoned": false, 00:32:57.706 "supported_io_types": { 00:32:57.706 "read": true, 00:32:57.706 "write": true, 00:32:57.706 "unmap": true, 00:32:57.706 "flush": true, 00:32:57.706 "reset": true, 00:32:57.706 "nvme_admin": false, 00:32:57.706 "nvme_io": false, 00:32:57.706 "nvme_io_md": false, 00:32:57.706 "write_zeroes": true, 00:32:57.706 "zcopy": true, 00:32:57.706 "get_zone_info": false, 00:32:57.706 "zone_management": false, 00:32:57.706 "zone_append": false, 00:32:57.706 "compare": false, 00:32:57.706 "compare_and_write": false, 00:32:57.706 "abort": true, 00:32:57.706 "seek_hole": false, 00:32:57.706 "seek_data": false, 00:32:57.706 "copy": true, 00:32:57.706 "nvme_iov_md": false 00:32:57.706 }, 00:32:57.706 "memory_domains": [ 00:32:57.706 { 00:32:57.706 "dma_device_id": "system", 00:32:57.706 "dma_device_type": 1 00:32:57.706 }, 00:32:57.706 { 00:32:57.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:57.706 "dma_device_type": 2 00:32:57.706 } 00:32:57.706 ], 00:32:57.706 "driver_specific": {} 00:32:57.706 } 00:32:57.706 ] 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:57.706 "name": "Existed_Raid", 00:32:57.706 "uuid": "552aac18-1393-4e21-87b0-a775cb4f7369", 00:32:57.706 "strip_size_kb": 0, 00:32:57.706 "state": "configuring", 00:32:57.706 "raid_level": "raid1", 00:32:57.706 "superblock": true, 00:32:57.706 "num_base_bdevs": 2, 00:32:57.706 "num_base_bdevs_discovered": 1, 00:32:57.706 "num_base_bdevs_operational": 2, 00:32:57.706 "base_bdevs_list": [ 00:32:57.706 { 00:32:57.706 "name": "BaseBdev1", 00:32:57.706 "uuid": "ab63bee7-55ce-4cb9-9252-e43c7d0b1a65", 00:32:57.706 "is_configured": true, 00:32:57.706 "data_offset": 256, 00:32:57.706 "data_size": 7936 00:32:57.706 }, 00:32:57.706 { 00:32:57.706 "name": "BaseBdev2", 00:32:57.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:57.706 "is_configured": false, 00:32:57.706 "data_offset": 0, 00:32:57.706 "data_size": 0 00:32:57.706 } 00:32:57.706 ] 00:32:57.706 }' 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:57.706 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:58.271 00:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:58.529 [2024-07-25 00:16:54.143424] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:58.529 [2024-07-25 00:16:54.143506] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:32:58.529 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:58.529 [2024-07-25 00:16:54.399532] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:58.787 [2024-07-25 00:16:54.401991] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:58.787 [2024-07-25 00:16:54.402222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:58.787 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.045 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:59.045 "name": "Existed_Raid", 00:32:59.045 "uuid": "35dcb197-190f-4c0f-9137-4532004492cf", 00:32:59.045 "strip_size_kb": 0, 00:32:59.045 "state": "configuring", 00:32:59.045 "raid_level": "raid1", 00:32:59.045 "superblock": true, 00:32:59.045 "num_base_bdevs": 2, 00:32:59.045 "num_base_bdevs_discovered": 1, 00:32:59.045 "num_base_bdevs_operational": 2, 00:32:59.045 "base_bdevs_list": [ 00:32:59.045 { 00:32:59.045 "name": "BaseBdev1", 00:32:59.045 "uuid": "ab63bee7-55ce-4cb9-9252-e43c7d0b1a65", 00:32:59.045 "is_configured": true, 00:32:59.045 "data_offset": 256, 00:32:59.045 "data_size": 7936 00:32:59.045 }, 00:32:59.045 { 00:32:59.045 "name": "BaseBdev2", 00:32:59.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.045 "is_configured": false, 00:32:59.045 "data_offset": 0, 00:32:59.045 "data_size": 0 00:32:59.045 } 00:32:59.045 ] 00:32:59.045 }' 00:32:59.045 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:59.045 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:59.303 00:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:32:59.561 [2024-07-25 00:16:55.218487] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:59.561 [2024-07-25 00:16:55.218910] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:32:59.561 [2024-07-25 00:16:55.219046] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:59.561 [2024-07-25 00:16:55.219214] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:32:59.561 [2024-07-25 00:16:55.219434] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:32:59.561 BaseBdev2 00:32:59.561 [2024-07-25 00:16:55.219555] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:32:59.561 [2024-07-25 00:16:55.219677] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:59.561 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:59.561 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:59.561 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:59.561 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:32:59.561 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:59.561 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:59.561 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:59.820 [ 00:32:59.820 { 00:32:59.820 "name": "BaseBdev2", 00:32:59.820 "aliases": [ 00:32:59.820 "32996907-6b88-4e5b-8a20-43edff80f535" 00:32:59.820 ], 00:32:59.820 "product_name": "Malloc disk", 00:32:59.820 "block_size": 4096, 00:32:59.820 "num_blocks": 8192, 00:32:59.820 "uuid": "32996907-6b88-4e5b-8a20-43edff80f535", 00:32:59.820 "md_size": 32, 00:32:59.820 "md_interleave": false, 00:32:59.820 "dif_type": 0, 00:32:59.820 "assigned_rate_limits": { 00:32:59.820 "rw_ios_per_sec": 0, 00:32:59.820 "rw_mbytes_per_sec": 0, 00:32:59.820 "r_mbytes_per_sec": 0, 00:32:59.820 "w_mbytes_per_sec": 0 00:32:59.820 }, 00:32:59.820 "claimed": true, 00:32:59.820 "claim_type": "exclusive_write", 00:32:59.820 "zoned": false, 00:32:59.820 "supported_io_types": { 00:32:59.820 "read": true, 00:32:59.820 "write": true, 00:32:59.820 "unmap": true, 00:32:59.820 "flush": true, 00:32:59.820 "reset": true, 00:32:59.820 "nvme_admin": false, 00:32:59.820 "nvme_io": false, 00:32:59.820 "nvme_io_md": false, 00:32:59.820 "write_zeroes": true, 00:32:59.820 "zcopy": true, 00:32:59.820 "get_zone_info": false, 00:32:59.820 "zone_management": false, 00:32:59.820 "zone_append": false, 00:32:59.820 "compare": false, 00:32:59.820 "compare_and_write": false, 00:32:59.820 "abort": true, 00:32:59.820 "seek_hole": false, 00:32:59.820 "seek_data": false, 00:32:59.820 "copy": true, 00:32:59.820 "nvme_iov_md": false 00:32:59.820 }, 00:32:59.820 "memory_domains": [ 00:32:59.820 { 00:32:59.820 "dma_device_id": "system", 00:32:59.820 "dma_device_type": 1 00:32:59.820 }, 00:32:59.820 { 00:32:59.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:59.820 "dma_device_type": 2 00:32:59.820 } 00:32:59.820 ], 00:32:59.820 "driver_specific": {} 00:32:59.820 } 00:32:59.820 ] 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
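BaseBdev2 above is a malloc bdev sized 32 MiB with 4096-byte blocks plus 32 bytes of separate (non-interleaved) metadata per block, which is the layout the md_separate variants exercise. A hedged replay of that step with the same flags the test uses:

scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2
# 32 MiB / 4096 B = 8192 blocks; the metadata lives out of band,
# so md_interleave is reported as false
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 \
    | jq '.[0] | {num_blocks, block_size, md_size, md_interleave}'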
00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.820 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:00.078 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:00.078 "name": "Existed_Raid", 00:33:00.078 "uuid": "35dcb197-190f-4c0f-9137-4532004492cf", 00:33:00.078 "strip_size_kb": 0, 00:33:00.078 "state": "online", 00:33:00.078 "raid_level": "raid1", 00:33:00.078 "superblock": true, 00:33:00.078 "num_base_bdevs": 2, 00:33:00.078 "num_base_bdevs_discovered": 2, 00:33:00.078 "num_base_bdevs_operational": 2, 00:33:00.078 "base_bdevs_list": [ 00:33:00.078 { 00:33:00.078 "name": "BaseBdev1", 00:33:00.078 "uuid": "ab63bee7-55ce-4cb9-9252-e43c7d0b1a65", 00:33:00.078 "is_configured": true, 00:33:00.078 "data_offset": 256, 00:33:00.078 "data_size": 7936 00:33:00.078 }, 00:33:00.078 { 00:33:00.078 "name": "BaseBdev2", 00:33:00.078 "uuid": "32996907-6b88-4e5b-8a20-43edff80f535", 00:33:00.078 "is_configured": true, 00:33:00.078 "data_offset": 256, 00:33:00.078 "data_size": 7936 00:33:00.078 } 00:33:00.078 ] 00:33:00.078 }' 00:33:00.078 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:00.078 00:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:00.336 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:33:00.336 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:00.336 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:00.336 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:00.336 00:16:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:00.336 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:33:00.336 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:00.336 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:00.594 [2024-07-25 00:16:56.339125] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:00.594 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:00.594 "name": "Existed_Raid", 00:33:00.594 "aliases": [ 00:33:00.594 "35dcb197-190f-4c0f-9137-4532004492cf" 00:33:00.594 ], 00:33:00.594 "product_name": "Raid Volume", 00:33:00.594 "block_size": 4096, 00:33:00.594 "num_blocks": 7936, 00:33:00.594 "uuid": "35dcb197-190f-4c0f-9137-4532004492cf", 00:33:00.594 "md_size": 32, 00:33:00.594 "md_interleave": false, 00:33:00.594 "dif_type": 0, 00:33:00.594 "assigned_rate_limits": { 00:33:00.594 "rw_ios_per_sec": 0, 00:33:00.594 "rw_mbytes_per_sec": 0, 00:33:00.594 "r_mbytes_per_sec": 0, 00:33:00.594 "w_mbytes_per_sec": 0 00:33:00.594 }, 00:33:00.594 "claimed": false, 00:33:00.594 "zoned": false, 00:33:00.594 "supported_io_types": { 00:33:00.594 "read": true, 00:33:00.594 "write": true, 00:33:00.594 "unmap": false, 00:33:00.594 "flush": false, 00:33:00.594 "reset": true, 00:33:00.594 "nvme_admin": false, 00:33:00.594 "nvme_io": false, 00:33:00.594 "nvme_io_md": false, 00:33:00.594 "write_zeroes": true, 00:33:00.594 "zcopy": false, 00:33:00.594 "get_zone_info": false, 00:33:00.594 "zone_management": false, 00:33:00.594 "zone_append": false, 00:33:00.594 "compare": false, 00:33:00.594 "compare_and_write": false, 00:33:00.594 "abort": false, 00:33:00.594 "seek_hole": false, 00:33:00.594 "seek_data": false, 00:33:00.594 "copy": false, 00:33:00.594 "nvme_iov_md": false 00:33:00.594 }, 00:33:00.594 "memory_domains": [ 00:33:00.594 { 00:33:00.594 "dma_device_id": "system", 00:33:00.594 "dma_device_type": 1 00:33:00.594 }, 00:33:00.594 { 00:33:00.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.594 "dma_device_type": 2 00:33:00.594 }, 00:33:00.594 { 00:33:00.594 "dma_device_id": "system", 00:33:00.594 "dma_device_type": 1 00:33:00.594 }, 00:33:00.594 { 00:33:00.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.594 "dma_device_type": 2 00:33:00.594 } 00:33:00.594 ], 00:33:00.594 "driver_specific": { 00:33:00.594 "raid": { 00:33:00.594 "uuid": "35dcb197-190f-4c0f-9137-4532004492cf", 00:33:00.594 "strip_size_kb": 0, 00:33:00.594 "state": "online", 00:33:00.594 "raid_level": "raid1", 00:33:00.594 "superblock": true, 00:33:00.594 "num_base_bdevs": 2, 00:33:00.594 "num_base_bdevs_discovered": 2, 00:33:00.594 "num_base_bdevs_operational": 2, 00:33:00.594 "base_bdevs_list": [ 00:33:00.594 { 00:33:00.594 "name": "BaseBdev1", 00:33:00.594 "uuid": "ab63bee7-55ce-4cb9-9252-e43c7d0b1a65", 00:33:00.594 "is_configured": true, 00:33:00.594 "data_offset": 256, 00:33:00.594 "data_size": 7936 00:33:00.594 }, 00:33:00.594 { 00:33:00.594 "name": "BaseBdev2", 00:33:00.594 "uuid": "32996907-6b88-4e5b-8a20-43edff80f535", 00:33:00.594 "is_configured": true, 00:33:00.594 "data_offset": 256, 00:33:00.594 "data_size": 7936 00:33:00.594 } 00:33:00.594 ] 00:33:00.594 } 00:33:00.594 } 00:33:00.594 }' 00:33:00.594 
00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:00.594 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:33:00.594 BaseBdev2' 00:33:00.594 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:00.594 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:00.594 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:00.853 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:00.853 "name": "BaseBdev1", 00:33:00.853 "aliases": [ 00:33:00.853 "ab63bee7-55ce-4cb9-9252-e43c7d0b1a65" 00:33:00.853 ], 00:33:00.853 "product_name": "Malloc disk", 00:33:00.853 "block_size": 4096, 00:33:00.853 "num_blocks": 8192, 00:33:00.853 "uuid": "ab63bee7-55ce-4cb9-9252-e43c7d0b1a65", 00:33:00.853 "md_size": 32, 00:33:00.853 "md_interleave": false, 00:33:00.853 "dif_type": 0, 00:33:00.853 "assigned_rate_limits": { 00:33:00.853 "rw_ios_per_sec": 0, 00:33:00.853 "rw_mbytes_per_sec": 0, 00:33:00.853 "r_mbytes_per_sec": 0, 00:33:00.853 "w_mbytes_per_sec": 0 00:33:00.853 }, 00:33:00.853 "claimed": true, 00:33:00.853 "claim_type": "exclusive_write", 00:33:00.853 "zoned": false, 00:33:00.853 "supported_io_types": { 00:33:00.853 "read": true, 00:33:00.853 "write": true, 00:33:00.853 "unmap": true, 00:33:00.853 "flush": true, 00:33:00.853 "reset": true, 00:33:00.853 "nvme_admin": false, 00:33:00.853 "nvme_io": false, 00:33:00.853 "nvme_io_md": false, 00:33:00.853 "write_zeroes": true, 00:33:00.853 "zcopy": true, 00:33:00.853 "get_zone_info": false, 00:33:00.853 "zone_management": false, 00:33:00.853 "zone_append": false, 00:33:00.853 "compare": false, 00:33:00.853 "compare_and_write": false, 00:33:00.853 "abort": true, 00:33:00.853 "seek_hole": false, 00:33:00.853 "seek_data": false, 00:33:00.853 "copy": true, 00:33:00.853 "nvme_iov_md": false 00:33:00.853 }, 00:33:00.853 "memory_domains": [ 00:33:00.853 { 00:33:00.853 "dma_device_id": "system", 00:33:00.853 "dma_device_type": 1 00:33:00.854 }, 00:33:00.854 { 00:33:00.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.854 "dma_device_type": 2 00:33:00.854 } 00:33:00.854 ], 00:33:00.854 "driver_specific": {} 00:33:00.854 }' 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:00.854 
00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:00.854 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:01.113 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:01.113 "name": "BaseBdev2", 00:33:01.113 "aliases": [ 00:33:01.113 "32996907-6b88-4e5b-8a20-43edff80f535" 00:33:01.113 ], 00:33:01.113 "product_name": "Malloc disk", 00:33:01.113 "block_size": 4096, 00:33:01.113 "num_blocks": 8192, 00:33:01.113 "uuid": "32996907-6b88-4e5b-8a20-43edff80f535", 00:33:01.113 "md_size": 32, 00:33:01.113 "md_interleave": false, 00:33:01.113 "dif_type": 0, 00:33:01.113 "assigned_rate_limits": { 00:33:01.113 "rw_ios_per_sec": 0, 00:33:01.113 "rw_mbytes_per_sec": 0, 00:33:01.113 "r_mbytes_per_sec": 0, 00:33:01.113 "w_mbytes_per_sec": 0 00:33:01.113 }, 00:33:01.113 "claimed": true, 00:33:01.113 "claim_type": "exclusive_write", 00:33:01.113 "zoned": false, 00:33:01.113 "supported_io_types": { 00:33:01.113 "read": true, 00:33:01.113 "write": true, 00:33:01.113 "unmap": true, 00:33:01.113 "flush": true, 00:33:01.113 "reset": true, 00:33:01.113 "nvme_admin": false, 00:33:01.113 "nvme_io": false, 00:33:01.113 "nvme_io_md": false, 00:33:01.113 "write_zeroes": true, 00:33:01.113 "zcopy": true, 00:33:01.113 "get_zone_info": false, 00:33:01.113 "zone_management": false, 00:33:01.113 "zone_append": false, 00:33:01.113 "compare": false, 00:33:01.113 "compare_and_write": false, 00:33:01.113 "abort": true, 00:33:01.113 "seek_hole": false, 00:33:01.113 "seek_data": false, 00:33:01.113 "copy": true, 00:33:01.113 "nvme_iov_md": false 00:33:01.113 }, 00:33:01.113 "memory_domains": [ 00:33:01.113 { 00:33:01.113 "dma_device_id": "system", 00:33:01.113 "dma_device_type": 1 00:33:01.113 }, 00:33:01.113 { 00:33:01.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.113 "dma_device_type": 2 00:33:01.113 } 00:33:01.113 ], 00:33:01.113 "driver_specific": {} 00:33:01.113 }' 00:33:01.113 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.113 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.113 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:33:01.113 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:01.113 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:01.113 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:01.113 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
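The jq calls above are verify_raid_bdev_properties() comparing each base bdev against the expected md-separate layout (bdev_raid.sh@205 through @208). A condensed sketch of that pattern; the variable name is illustrative:

base_bdev_info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 | jq '.[]')
[[ $(jq .block_size <<< "$base_bdev_info") == 4096 ]]    # data block size
[[ $(jq .md_size <<< "$base_bdev_info") == 32 ]]         # separate metadata bytes per block
[[ $(jq .md_interleave <<< "$base_bdev_info") == false ]]
[[ $(jq .dif_type <<< "$base_bdev_info") == 0 ]]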
00:33:01.113 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:01.113 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:33:01.113 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:01.372 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:01.372 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:01.372 00:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:01.372 [2024-07-25 00:16:57.223049] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.631 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:01.891 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:01.891 "name": "Existed_Raid", 00:33:01.891 "uuid": "35dcb197-190f-4c0f-9137-4532004492cf", 00:33:01.891 "strip_size_kb": 0, 00:33:01.891 "state": "online", 00:33:01.891 "raid_level": "raid1", 00:33:01.891 "superblock": true, 00:33:01.891 "num_base_bdevs": 2, 00:33:01.891 
"num_base_bdevs_discovered": 1, 00:33:01.891 "num_base_bdevs_operational": 1, 00:33:01.891 "base_bdevs_list": [ 00:33:01.891 { 00:33:01.891 "name": null, 00:33:01.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.891 "is_configured": false, 00:33:01.891 "data_offset": 256, 00:33:01.891 "data_size": 7936 00:33:01.891 }, 00:33:01.891 { 00:33:01.891 "name": "BaseBdev2", 00:33:01.891 "uuid": "32996907-6b88-4e5b-8a20-43edff80f535", 00:33:01.891 "is_configured": true, 00:33:01.891 "data_offset": 256, 00:33:01.891 "data_size": 7936 00:33:01.891 } 00:33:01.891 ] 00:33:01.891 }' 00:33:01.891 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:01.891 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:02.150 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:33:02.150 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:02.150 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:02.150 00:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.409 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:02.409 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:02.409 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:02.668 [2024-07-25 00:16:58.287563] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:02.668 [2024-07-25 00:16:58.287671] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:02.668 [2024-07-25 00:16:58.354495] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:02.668 [2024-07-25 00:16:58.354543] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:02.668 [2024-07-25 00:16:58.354561] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:33:02.668 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:02.668 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:02.668 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.668 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 
111684 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 111684 ']' 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 111684 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 111684 00:33:02.928 killing process with pid 111684 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 111684' 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 111684 00:33:02.928 [2024-07-25 00:16:58.651705] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:02.928 [2024-07-25 00:16:58.651798] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:02.928 00:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 111684 00:33:03.866 ************************************ 00:33:03.866 END TEST raid_state_function_test_sb_md_separate 00:33:03.866 ************************************ 00:33:03.866 00:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:33:03.866 00:33:03.866 real 0m9.126s 00:33:03.866 user 0m15.064s 00:33:03.866 sys 0m1.472s 00:33:03.866 00:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:03.866 00:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:03.866 00:16:59 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:33:03.866 00:16:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:03.866 00:16:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:03.866 00:16:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:03.866 ************************************ 00:33:03.866 START TEST raid_superblock_test_md_separate 00:33:03.866 ************************************ 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 
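The teardown above is the harness's killprocess() helper. A minimal sketch of its logic, reconstructed from the autotest_common.sh steps visible in this log (@954 through @974), so the details are approximate:

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                        # @954: pid must still exist
    process_name=$(ps --no-headers -o comm= "$pid")   # @956: e.g. reactor_0
    if [[ $process_name != sudo ]]; then              # @960: sudo wrappers take a different kill path
        echo "killing process with pid $pid"          # @968
        kill "$pid"                                   # @969
    fi
    wait "$pid"                                       # @974: reap and propagate the exit status
}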
00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@414 -- # local strip_size 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@427 -- # raid_pid=112014 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@428 -- # waitforlisten 112014 /var/tmp/spdk-raid.sock 00:33:03.866 00:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 112014 ']' 00:33:03.867 00:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:03.867 00:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:03.867 00:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:03.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:03.867 00:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:03.867 00:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:03.867 [2024-07-25 00:16:59.672497] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
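bdev_svc was started with -L bdev_raid, which enables the *DEBUG* bdev_raid.c traces seen throughout this test. On a debug build the same traces can also be toggled on a running app over RPC; a sketch, assuming the stock log_set_flag and log_set_print_level RPCs are available:

scripts/rpc.py -s /var/tmp/spdk-raid.sock log_set_flag bdev_raid
scripts/rpc.py -s /var/tmp/spdk-raid.sock log_set_print_level DEBUG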
00:33:03.867 [2024-07-25 00:16:59.672854] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112014 ] 00:33:04.126 [2024-07-25 00:16:59.824413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.126 [2024-07-25 00:16:59.975062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.385 [2024-07-25 00:17:00.123653] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:04.952 00:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:04.952 00:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:33:04.952 00:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:33:04.952 00:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:33:04.952 00:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:33:04.952 00:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:33:04.952 00:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:04.952 00:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:04.952 00:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:33:04.952 00:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:04.952 00:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:33:04.952 malloc1 00:33:05.211 00:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:05.211 [2024-07-25 00:17:01.048576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:05.211 [2024-07-25 00:17:01.048763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:05.211 [2024-07-25 00:17:01.048888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:33:05.211 [2024-07-25 00:17:01.049158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:05.211 [2024-07-25 00:17:01.051035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:05.211 [2024-07-25 00:17:01.051222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:05.211 pt1 00:33:05.211 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:33:05.211 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:33:05.211 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:33:05.211 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:33:05.211 
00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:05.211 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:05.211 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:33:05.211 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:05.211 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:33:05.470 malloc2 00:33:05.470 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:05.728 [2024-07-25 00:17:01.466732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:05.728 [2024-07-25 00:17:01.466978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:05.728 [2024-07-25 00:17:01.467018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:33:05.728 [2024-07-25 00:17:01.467033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:05.728 [2024-07-25 00:17:01.468988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:05.728 [2024-07-25 00:17:01.469026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:05.728 pt2 00:33:05.728 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:33:05.728 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:33:05.728 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:33:05.987 [2024-07-25 00:17:01.666809] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:05.987 [2024-07-25 00:17:01.668643] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:05.987 [2024-07-25 00:17:01.668986] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:33:05.987 [2024-07-25 00:17:01.669103] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:05.987 [2024-07-25 00:17:01.669266] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:33:05.987 [2024-07-25 00:17:01.669470] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:33:05.987 [2024-07-25 00:17:01.669576] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:33:05.987 [2024-07-25 00:17:01.669787] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:05.987 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.246 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:06.246 "name": "raid_bdev1", 00:33:06.246 "uuid": "44322516-ee8c-4934-9ee1-f6d068b19c0f", 00:33:06.246 "strip_size_kb": 0, 00:33:06.246 "state": "online", 00:33:06.246 "raid_level": "raid1", 00:33:06.246 "superblock": true, 00:33:06.246 "num_base_bdevs": 2, 00:33:06.246 "num_base_bdevs_discovered": 2, 00:33:06.246 "num_base_bdevs_operational": 2, 00:33:06.246 "base_bdevs_list": [ 00:33:06.246 { 00:33:06.246 "name": "pt1", 00:33:06.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:06.246 "is_configured": true, 00:33:06.246 "data_offset": 256, 00:33:06.246 "data_size": 7936 00:33:06.246 }, 00:33:06.246 { 00:33:06.246 "name": "pt2", 00:33:06.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:06.246 "is_configured": true, 00:33:06.246 "data_offset": 256, 00:33:06.246 "data_size": 7936 00:33:06.246 } 00:33:06.246 ] 00:33:06.246 }' 00:33:06.246 00:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:06.246 00:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:06.505 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:33:06.505 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:06.505 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:06.505 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:06.505 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:06.505 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:33:06.505 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:06.505 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:06.764 [2024-07-25 00:17:02.451156] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:06.764 
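At this point raid_bdev1 has been assembled from pt1 and pt2 with on-disk superblocks (-s) and its state is being verified. A hedged replay of that assemble-and-verify step, using the same commands and names as the log:

scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
    -b 'pt1 pt2' -n raid_bdev1
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state'    # expect: online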
00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:06.764 "name": "raid_bdev1", 00:33:06.764 "aliases": [ 00:33:06.764 "44322516-ee8c-4934-9ee1-f6d068b19c0f" 00:33:06.764 ], 00:33:06.764 "product_name": "Raid Volume", 00:33:06.764 "block_size": 4096, 00:33:06.764 "num_blocks": 7936, 00:33:06.764 "uuid": "44322516-ee8c-4934-9ee1-f6d068b19c0f", 00:33:06.764 "md_size": 32, 00:33:06.764 "md_interleave": false, 00:33:06.764 "dif_type": 0, 00:33:06.764 "assigned_rate_limits": { 00:33:06.764 "rw_ios_per_sec": 0, 00:33:06.764 "rw_mbytes_per_sec": 0, 00:33:06.764 "r_mbytes_per_sec": 0, 00:33:06.764 "w_mbytes_per_sec": 0 00:33:06.764 }, 00:33:06.764 "claimed": false, 00:33:06.764 "zoned": false, 00:33:06.764 "supported_io_types": { 00:33:06.764 "read": true, 00:33:06.764 "write": true, 00:33:06.764 "unmap": false, 00:33:06.764 "flush": false, 00:33:06.764 "reset": true, 00:33:06.764 "nvme_admin": false, 00:33:06.764 "nvme_io": false, 00:33:06.764 "nvme_io_md": false, 00:33:06.764 "write_zeroes": true, 00:33:06.764 "zcopy": false, 00:33:06.764 "get_zone_info": false, 00:33:06.764 "zone_management": false, 00:33:06.764 "zone_append": false, 00:33:06.764 "compare": false, 00:33:06.764 "compare_and_write": false, 00:33:06.764 "abort": false, 00:33:06.764 "seek_hole": false, 00:33:06.764 "seek_data": false, 00:33:06.764 "copy": false, 00:33:06.764 "nvme_iov_md": false 00:33:06.764 }, 00:33:06.764 "memory_domains": [ 00:33:06.764 { 00:33:06.764 "dma_device_id": "system", 00:33:06.764 "dma_device_type": 1 00:33:06.764 }, 00:33:06.764 { 00:33:06.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.764 "dma_device_type": 2 00:33:06.764 }, 00:33:06.764 { 00:33:06.764 "dma_device_id": "system", 00:33:06.764 "dma_device_type": 1 00:33:06.764 }, 00:33:06.764 { 00:33:06.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.764 "dma_device_type": 2 00:33:06.764 } 00:33:06.764 ], 00:33:06.764 "driver_specific": { 00:33:06.764 "raid": { 00:33:06.764 "uuid": "44322516-ee8c-4934-9ee1-f6d068b19c0f", 00:33:06.764 "strip_size_kb": 0, 00:33:06.764 "state": "online", 00:33:06.764 "raid_level": "raid1", 00:33:06.764 "superblock": true, 00:33:06.764 "num_base_bdevs": 2, 00:33:06.764 "num_base_bdevs_discovered": 2, 00:33:06.764 "num_base_bdevs_operational": 2, 00:33:06.764 "base_bdevs_list": [ 00:33:06.764 { 00:33:06.764 "name": "pt1", 00:33:06.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:06.764 "is_configured": true, 00:33:06.764 "data_offset": 256, 00:33:06.764 "data_size": 7936 00:33:06.764 }, 00:33:06.764 { 00:33:06.764 "name": "pt2", 00:33:06.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:06.765 "is_configured": true, 00:33:06.765 "data_offset": 256, 00:33:06.765 "data_size": 7936 00:33:06.765 } 00:33:06.765 ] 00:33:06.765 } 00:33:06.765 } 00:33:06.765 }' 00:33:06.765 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:06.765 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:06.765 pt2' 00:33:06.765 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:06.765 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:06.765 00:17:02 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:07.045 "name": "pt1", 00:33:07.045 "aliases": [ 00:33:07.045 "00000000-0000-0000-0000-000000000001" 00:33:07.045 ], 00:33:07.045 "product_name": "passthru", 00:33:07.045 "block_size": 4096, 00:33:07.045 "num_blocks": 8192, 00:33:07.045 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:07.045 "md_size": 32, 00:33:07.045 "md_interleave": false, 00:33:07.045 "dif_type": 0, 00:33:07.045 "assigned_rate_limits": { 00:33:07.045 "rw_ios_per_sec": 0, 00:33:07.045 "rw_mbytes_per_sec": 0, 00:33:07.045 "r_mbytes_per_sec": 0, 00:33:07.045 "w_mbytes_per_sec": 0 00:33:07.045 }, 00:33:07.045 "claimed": true, 00:33:07.045 "claim_type": "exclusive_write", 00:33:07.045 "zoned": false, 00:33:07.045 "supported_io_types": { 00:33:07.045 "read": true, 00:33:07.045 "write": true, 00:33:07.045 "unmap": true, 00:33:07.045 "flush": true, 00:33:07.045 "reset": true, 00:33:07.045 "nvme_admin": false, 00:33:07.045 "nvme_io": false, 00:33:07.045 "nvme_io_md": false, 00:33:07.045 "write_zeroes": true, 00:33:07.045 "zcopy": true, 00:33:07.045 "get_zone_info": false, 00:33:07.045 "zone_management": false, 00:33:07.045 "zone_append": false, 00:33:07.045 "compare": false, 00:33:07.045 "compare_and_write": false, 00:33:07.045 "abort": true, 00:33:07.045 "seek_hole": false, 00:33:07.045 "seek_data": false, 00:33:07.045 "copy": true, 00:33:07.045 "nvme_iov_md": false 00:33:07.045 }, 00:33:07.045 "memory_domains": [ 00:33:07.045 { 00:33:07.045 "dma_device_id": "system", 00:33:07.045 "dma_device_type": 1 00:33:07.045 }, 00:33:07.045 { 00:33:07.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.045 "dma_device_type": 2 00:33:07.045 } 00:33:07.045 ], 00:33:07.045 "driver_specific": { 00:33:07.045 "passthru": { 00:33:07.045 "name": "pt1", 00:33:07.045 "base_bdev_name": "malloc1" 00:33:07.045 } 00:33:07.045 } 00:33:07.045 }' 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:07.045 00:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:07.311 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:07.311 "name": "pt2", 00:33:07.311 "aliases": [ 00:33:07.311 "00000000-0000-0000-0000-000000000002" 00:33:07.311 ], 00:33:07.311 "product_name": "passthru", 00:33:07.311 "block_size": 4096, 00:33:07.311 "num_blocks": 8192, 00:33:07.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:07.311 "md_size": 32, 00:33:07.311 "md_interleave": false, 00:33:07.311 "dif_type": 0, 00:33:07.311 "assigned_rate_limits": { 00:33:07.312 "rw_ios_per_sec": 0, 00:33:07.312 "rw_mbytes_per_sec": 0, 00:33:07.312 "r_mbytes_per_sec": 0, 00:33:07.312 "w_mbytes_per_sec": 0 00:33:07.312 }, 00:33:07.312 "claimed": true, 00:33:07.312 "claim_type": "exclusive_write", 00:33:07.312 "zoned": false, 00:33:07.312 "supported_io_types": { 00:33:07.312 "read": true, 00:33:07.312 "write": true, 00:33:07.312 "unmap": true, 00:33:07.312 "flush": true, 00:33:07.312 "reset": true, 00:33:07.312 "nvme_admin": false, 00:33:07.312 "nvme_io": false, 00:33:07.312 "nvme_io_md": false, 00:33:07.312 "write_zeroes": true, 00:33:07.312 "zcopy": true, 00:33:07.312 "get_zone_info": false, 00:33:07.312 "zone_management": false, 00:33:07.312 "zone_append": false, 00:33:07.312 "compare": false, 00:33:07.312 "compare_and_write": false, 00:33:07.312 "abort": true, 00:33:07.312 "seek_hole": false, 00:33:07.312 "seek_data": false, 00:33:07.312 "copy": true, 00:33:07.312 "nvme_iov_md": false 00:33:07.312 }, 00:33:07.312 "memory_domains": [ 00:33:07.312 { 00:33:07.312 "dma_device_id": "system", 00:33:07.312 "dma_device_type": 1 00:33:07.312 }, 00:33:07.312 { 00:33:07.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.312 "dma_device_type": 2 00:33:07.312 } 00:33:07.312 ], 00:33:07.312 "driver_specific": { 00:33:07.312 "passthru": { 00:33:07.312 "name": "pt2", 00:33:07.312 "base_bdev_name": "malloc2" 00:33:07.312 } 00:33:07.312 } 00:33:07.312 }' 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:07.312 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:33:07.570 [2024-07-25 00:17:03.347381] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:07.570 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=44322516-ee8c-4934-9ee1-f6d068b19c0f 00:33:07.570 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' -z 44322516-ee8c-4934-9ee1-f6d068b19c0f ']' 00:33:07.570 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:07.827 [2024-07-25 00:17:03.615203] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:07.827 [2024-07-25 00:17:03.615232] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:07.827 [2024-07-25 00:17:03.615307] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:07.827 [2024-07-25 00:17:03.615368] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:07.827 [2024-07-25 00:17:03.615386] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:33:07.827 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:07.827 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:33:08.085 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:33:08.085 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:33:08.086 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:33:08.086 00:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:08.344 00:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:33:08.344 00:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:08.602 00:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:08.602 00:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:08.861 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:09.120 [2024-07-25 00:17:04.807511] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:09.120 [2024-07-25 00:17:04.809414] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:09.120 [2024-07-25 00:17:04.809489] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:09.120 [2024-07-25 00:17:04.809571] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:09.120 [2024-07-25 00:17:04.809593] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:09.120 [2024-07-25 00:17:04.809607] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state configuring 00:33:09.120 request: 00:33:09.120 { 00:33:09.120 "name": "raid_bdev1", 00:33:09.120 "raid_level": "raid1", 00:33:09.120 "base_bdevs": [ 00:33:09.120 "malloc1", 00:33:09.120 "malloc2" 00:33:09.120 ], 00:33:09.120 "superblock": false, 00:33:09.120 "method": "bdev_raid_create", 00:33:09.120 "req_id": 1 00:33:09.120 } 00:33:09.120 Got JSON-RPC error response 00:33:09.120 response: 00:33:09.120 { 00:33:09.120 "code": -17, 00:33:09.120 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:09.120 } 00:33:09.120 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:33:09.120 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:09.120 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:09.120 00:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:09.120 00:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
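A minimal sketch of the negative-path check traced above, assuming a running SPDK target on the same RPC socket; the $rpc and $sock shorthand is introduced here for illustration and is not taken from the harness. malloc1 and malloc2 still carry the superblock written for raid_bdev1, so a second bdev_raid_create over them is expected to fail with -17 (File exists):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Expected to fail: the base bdevs already hold raid_bdev1's superblock.
  if ! "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
      echo 'bdev_raid_create failed as expected (-17, File exists)'
  fi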
00:33:09.120 00:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:33:09.379 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:33:09.379 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:33:09.379 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:09.379 [2024-07-25 00:17:05.243556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:09.379 [2024-07-25 00:17:05.243621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:09.379 [2024-07-25 00:17:05.243645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:33:09.379 [2024-07-25 00:17:05.243659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:09.379 [2024-07-25 00:17:05.245821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:09.379 [2024-07-25 00:17:05.245905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:09.379 [2024-07-25 00:17:05.246012] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:09.379 [2024-07-25 00:17:05.246077] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:09.637 pt1 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:09.637 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:09.637 "name": "raid_bdev1", 00:33:09.638 "uuid": "44322516-ee8c-4934-9ee1-f6d068b19c0f", 00:33:09.638 "strip_size_kb": 0, 00:33:09.638 "state": "configuring", 00:33:09.638 "raid_level": "raid1", 00:33:09.638 "superblock": true, 00:33:09.638 "num_base_bdevs": 2, 00:33:09.638 "num_base_bdevs_discovered": 1, 00:33:09.638 
"num_base_bdevs_operational": 2, 00:33:09.638 "base_bdevs_list": [ 00:33:09.638 { 00:33:09.638 "name": "pt1", 00:33:09.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:09.638 "is_configured": true, 00:33:09.638 "data_offset": 256, 00:33:09.638 "data_size": 7936 00:33:09.638 }, 00:33:09.638 { 00:33:09.638 "name": null, 00:33:09.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:09.638 "is_configured": false, 00:33:09.638 "data_offset": 256, 00:33:09.638 "data_size": 7936 00:33:09.638 } 00:33:09.638 ] 00:33:09.638 }' 00:33:09.638 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:09.638 00:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:10.204 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:33:10.204 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:33:10.204 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:33:10.204 00:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:10.204 [2024-07-25 00:17:05.995750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:10.204 [2024-07-25 00:17:05.995846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:10.204 [2024-07-25 00:17:05.995891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:33:10.204 [2024-07-25 00:17:05.995906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:10.204 [2024-07-25 00:17:05.996178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:10.204 [2024-07-25 00:17:05.996283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:10.204 [2024-07-25 00:17:05.996389] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:10.204 [2024-07-25 00:17:05.996420] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:10.204 [2024-07-25 00:17:05.996526] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:33:10.204 [2024-07-25 00:17:05.996544] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:10.204 [2024-07-25 00:17:05.996653] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:33:10.204 [2024-07-25 00:17:05.996771] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:33:10.204 [2024-07-25 00:17:05.996783] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:33:10.204 [2024-07-25 00:17:05.996891] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:10.204 pt2 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:10.205 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:10.463 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:10.463 "name": "raid_bdev1", 00:33:10.463 "uuid": "44322516-ee8c-4934-9ee1-f6d068b19c0f", 00:33:10.463 "strip_size_kb": 0, 00:33:10.463 "state": "online", 00:33:10.463 "raid_level": "raid1", 00:33:10.463 "superblock": true, 00:33:10.463 "num_base_bdevs": 2, 00:33:10.463 "num_base_bdevs_discovered": 2, 00:33:10.463 "num_base_bdevs_operational": 2, 00:33:10.463 "base_bdevs_list": [ 00:33:10.463 { 00:33:10.463 "name": "pt1", 00:33:10.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:10.463 "is_configured": true, 00:33:10.463 "data_offset": 256, 00:33:10.463 "data_size": 7936 00:33:10.463 }, 00:33:10.463 { 00:33:10.463 "name": "pt2", 00:33:10.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:10.463 "is_configured": true, 00:33:10.463 "data_offset": 256, 00:33:10.463 "data_size": 7936 00:33:10.463 } 00:33:10.463 ] 00:33:10.463 }' 00:33:10.463 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:10.463 00:17:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:10.722 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:33:10.722 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:10.722 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:10.722 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:10.722 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:10.722 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:33:10.722 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:10.722 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
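The @116-@128 records above make up one invocation of verify_raid_bdev_state; reconstructed here as a standalone sketch under the same assumptions (set -e turns each [[ ]] test into an assertion; $rpc and $sock are illustration-only shorthand):

  set -e
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Pull raid_bdev1's entry out of the full bdev_raid_get_bdevs listing.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1")')
  # Field-by-field comparison against the expected values traced above.
  [[ $(jq -r '.state'                      <<<"$info") == online ]]
  [[ $(jq -r '.raid_level'                 <<<"$info") == raid1  ]]
  [[ $(jq -r '.num_base_bdevs_discovered'  <<<"$info") == 2      ]]
  [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") == 2      ]]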
00:33:10.980 [2024-07-25 00:17:06.672132] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:10.980 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:10.980 "name": "raid_bdev1", 00:33:10.980 "aliases": [ 00:33:10.980 "44322516-ee8c-4934-9ee1-f6d068b19c0f" 00:33:10.980 ], 00:33:10.980 "product_name": "Raid Volume", 00:33:10.980 "block_size": 4096, 00:33:10.980 "num_blocks": 7936, 00:33:10.980 "uuid": "44322516-ee8c-4934-9ee1-f6d068b19c0f", 00:33:10.980 "md_size": 32, 00:33:10.980 "md_interleave": false, 00:33:10.980 "dif_type": 0, 00:33:10.980 "assigned_rate_limits": { 00:33:10.980 "rw_ios_per_sec": 0, 00:33:10.980 "rw_mbytes_per_sec": 0, 00:33:10.980 "r_mbytes_per_sec": 0, 00:33:10.980 "w_mbytes_per_sec": 0 00:33:10.980 }, 00:33:10.980 "claimed": false, 00:33:10.980 "zoned": false, 00:33:10.980 "supported_io_types": { 00:33:10.980 "read": true, 00:33:10.980 "write": true, 00:33:10.980 "unmap": false, 00:33:10.980 "flush": false, 00:33:10.980 "reset": true, 00:33:10.980 "nvme_admin": false, 00:33:10.980 "nvme_io": false, 00:33:10.980 "nvme_io_md": false, 00:33:10.980 "write_zeroes": true, 00:33:10.980 "zcopy": false, 00:33:10.980 "get_zone_info": false, 00:33:10.980 "zone_management": false, 00:33:10.980 "zone_append": false, 00:33:10.980 "compare": false, 00:33:10.980 "compare_and_write": false, 00:33:10.980 "abort": false, 00:33:10.980 "seek_hole": false, 00:33:10.980 "seek_data": false, 00:33:10.980 "copy": false, 00:33:10.980 "nvme_iov_md": false 00:33:10.980 }, 00:33:10.980 "memory_domains": [ 00:33:10.980 { 00:33:10.980 "dma_device_id": "system", 00:33:10.980 "dma_device_type": 1 00:33:10.980 }, 00:33:10.980 { 00:33:10.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:10.980 "dma_device_type": 2 00:33:10.980 }, 00:33:10.980 { 00:33:10.980 "dma_device_id": "system", 00:33:10.980 "dma_device_type": 1 00:33:10.980 }, 00:33:10.980 { 00:33:10.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:10.980 "dma_device_type": 2 00:33:10.980 } 00:33:10.980 ], 00:33:10.980 "driver_specific": { 00:33:10.980 "raid": { 00:33:10.980 "uuid": "44322516-ee8c-4934-9ee1-f6d068b19c0f", 00:33:10.980 "strip_size_kb": 0, 00:33:10.980 "state": "online", 00:33:10.980 "raid_level": "raid1", 00:33:10.980 "superblock": true, 00:33:10.980 "num_base_bdevs": 2, 00:33:10.980 "num_base_bdevs_discovered": 2, 00:33:10.980 "num_base_bdevs_operational": 2, 00:33:10.980 "base_bdevs_list": [ 00:33:10.980 { 00:33:10.980 "name": "pt1", 00:33:10.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:10.980 "is_configured": true, 00:33:10.980 "data_offset": 256, 00:33:10.980 "data_size": 7936 00:33:10.980 }, 00:33:10.980 { 00:33:10.980 "name": "pt2", 00:33:10.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:10.980 "is_configured": true, 00:33:10.980 "data_offset": 256, 00:33:10.980 "data_size": 7936 00:33:10.980 } 00:33:10.980 ] 00:33:10.980 } 00:33:10.980 } 00:33:10.980 }' 00:33:10.980 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:10.980 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:10.980 pt2' 00:33:10.980 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:10.980 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:10.980 00:17:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:11.239 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:11.239 "name": "pt1", 00:33:11.239 "aliases": [ 00:33:11.239 "00000000-0000-0000-0000-000000000001" 00:33:11.239 ], 00:33:11.239 "product_name": "passthru", 00:33:11.239 "block_size": 4096, 00:33:11.239 "num_blocks": 8192, 00:33:11.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:11.239 "md_size": 32, 00:33:11.239 "md_interleave": false, 00:33:11.239 "dif_type": 0, 00:33:11.239 "assigned_rate_limits": { 00:33:11.239 "rw_ios_per_sec": 0, 00:33:11.239 "rw_mbytes_per_sec": 0, 00:33:11.239 "r_mbytes_per_sec": 0, 00:33:11.239 "w_mbytes_per_sec": 0 00:33:11.239 }, 00:33:11.239 "claimed": true, 00:33:11.239 "claim_type": "exclusive_write", 00:33:11.239 "zoned": false, 00:33:11.239 "supported_io_types": { 00:33:11.239 "read": true, 00:33:11.239 "write": true, 00:33:11.239 "unmap": true, 00:33:11.239 "flush": true, 00:33:11.239 "reset": true, 00:33:11.239 "nvme_admin": false, 00:33:11.239 "nvme_io": false, 00:33:11.239 "nvme_io_md": false, 00:33:11.239 "write_zeroes": true, 00:33:11.239 "zcopy": true, 00:33:11.239 "get_zone_info": false, 00:33:11.239 "zone_management": false, 00:33:11.239 "zone_append": false, 00:33:11.239 "compare": false, 00:33:11.239 "compare_and_write": false, 00:33:11.239 "abort": true, 00:33:11.239 "seek_hole": false, 00:33:11.239 "seek_data": false, 00:33:11.239 "copy": true, 00:33:11.239 "nvme_iov_md": false 00:33:11.239 }, 00:33:11.239 "memory_domains": [ 00:33:11.239 { 00:33:11.239 "dma_device_id": "system", 00:33:11.239 "dma_device_type": 1 00:33:11.239 }, 00:33:11.239 { 00:33:11.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.239 "dma_device_type": 2 00:33:11.239 } 00:33:11.239 ], 00:33:11.239 "driver_specific": { 00:33:11.239 "passthru": { 00:33:11.239 "name": "pt1", 00:33:11.239 "base_bdev_name": "malloc1" 00:33:11.239 } 00:33:11.239 } 00:33:11.239 }' 00:33:11.239 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:11.239 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:11.239 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:33:11.239 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:11.239 00:17:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:11.239 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:11.239 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:11.239 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:11.239 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:33:11.239 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:11.239 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:11.239 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:11.239 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:33:11.239 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:11.239 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:11.498 "name": "pt2", 00:33:11.498 "aliases": [ 00:33:11.498 "00000000-0000-0000-0000-000000000002" 00:33:11.498 ], 00:33:11.498 "product_name": "passthru", 00:33:11.498 "block_size": 4096, 00:33:11.498 "num_blocks": 8192, 00:33:11.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:11.498 "md_size": 32, 00:33:11.498 "md_interleave": false, 00:33:11.498 "dif_type": 0, 00:33:11.498 "assigned_rate_limits": { 00:33:11.498 "rw_ios_per_sec": 0, 00:33:11.498 "rw_mbytes_per_sec": 0, 00:33:11.498 "r_mbytes_per_sec": 0, 00:33:11.498 "w_mbytes_per_sec": 0 00:33:11.498 }, 00:33:11.498 "claimed": true, 00:33:11.498 "claim_type": "exclusive_write", 00:33:11.498 "zoned": false, 00:33:11.498 "supported_io_types": { 00:33:11.498 "read": true, 00:33:11.498 "write": true, 00:33:11.498 "unmap": true, 00:33:11.498 "flush": true, 00:33:11.498 "reset": true, 00:33:11.498 "nvme_admin": false, 00:33:11.498 "nvme_io": false, 00:33:11.498 "nvme_io_md": false, 00:33:11.498 "write_zeroes": true, 00:33:11.498 "zcopy": true, 00:33:11.498 "get_zone_info": false, 00:33:11.498 "zone_management": false, 00:33:11.498 "zone_append": false, 00:33:11.498 "compare": false, 00:33:11.498 "compare_and_write": false, 00:33:11.498 "abort": true, 00:33:11.498 "seek_hole": false, 00:33:11.498 "seek_data": false, 00:33:11.498 "copy": true, 00:33:11.498 "nvme_iov_md": false 00:33:11.498 }, 00:33:11.498 "memory_domains": [ 00:33:11.498 { 00:33:11.498 "dma_device_id": "system", 00:33:11.498 "dma_device_type": 1 00:33:11.498 }, 00:33:11.498 { 00:33:11.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.498 "dma_device_type": 2 00:33:11.498 } 00:33:11.498 ], 00:33:11.498 "driver_specific": { 00:33:11.498 "passthru": { 00:33:11.498 "name": "pt2", 00:33:11.498 "base_bdev_name": "malloc2" 00:33:11.498 } 00:33:11.498 } 00:33:11.498 }' 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 
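The repeated @205-@208 checks above assert the md_separate layout on each configured passthru bdev; the same checks written out as one loop (a sketch, again using the hypothetical $rpc/$sock shorthand rather than the harness's own variables):

  set -e
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Collect the configured base bdev names the way the @201 jq filter does.
  names=$("$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 |
      jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
  for name in $names; do    # pt1 pt2
      info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name")
      # md_separate layout: 4 KiB data blocks, 32 bytes of metadata in a separate buffer.
      [[ $(jq -r '.[0].block_size'    <<<"$info") == 4096  ]]
      [[ $(jq -r '.[0].md_size'       <<<"$info") == 32    ]]
      [[ $(jq -r '.[0].md_interleave' <<<"$info") == false ]]
      [[ $(jq -r '.[0].dif_type'      <<<"$info") == 0     ]]
  done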
00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:11.498 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:33:11.757 [2024-07-25 00:17:07.496410] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:11.757 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # '[' 44322516-ee8c-4934-9ee1-f6d068b19c0f '!=' 44322516-ee8c-4934-9ee1-f6d068b19c0f ']' 00:33:11.757 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:33:11.757 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:11.757 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:33:11.757 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:12.016 [2024-07-25 00:17:07.764286] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:12.016 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:12.016 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:12.016 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:12.016 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:12.016 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:12.016 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:12.016 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:12.016 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:12.016 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:12.016 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:12.017 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.017 00:17:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:12.276 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:12.276 "name": "raid_bdev1", 00:33:12.276 "uuid": "44322516-ee8c-4934-9ee1-f6d068b19c0f", 00:33:12.276 "strip_size_kb": 0, 00:33:12.276 "state": "online", 00:33:12.276 "raid_level": "raid1", 00:33:12.276 "superblock": true, 00:33:12.276 "num_base_bdevs": 2, 00:33:12.276 "num_base_bdevs_discovered": 1, 00:33:12.276 "num_base_bdevs_operational": 1, 00:33:12.276 "base_bdevs_list": [ 00:33:12.276 { 00:33:12.276 "name": null, 00:33:12.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:12.276 "is_configured": false, 00:33:12.276 "data_offset": 256, 00:33:12.276 "data_size": 7936 00:33:12.276 }, 00:33:12.276 { 00:33:12.276 "name": "pt2", 
00:33:12.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:12.276 "is_configured": true, 00:33:12.276 "data_offset": 256, 00:33:12.276 "data_size": 7936 00:33:12.276 } 00:33:12.276 ] 00:33:12.276 }' 00:33:12.276 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:12.276 00:17:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.535 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:12.794 [2024-07-25 00:17:08.548474] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:12.794 [2024-07-25 00:17:08.548685] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:12.794 [2024-07-25 00:17:08.548876] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:12.794 [2024-07-25 00:17:08.549060] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:12.794 [2024-07-25 00:17:08.549194] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:33:12.794 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.794 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:33:13.053 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:33:13.053 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:33:13.053 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:33:13.053 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:33:13.053 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:13.313 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:33:13.313 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:33:13.313 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:33:13.313 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:33:13.313 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@534 -- # i=1 00:33:13.313 00:17:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:13.313 [2024-07-25 00:17:09.176643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:13.313 [2024-07-25 00:17:09.176711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:13.313 [2024-07-25 00:17:09.176736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:33:13.313 [2024-07-25 00:17:09.176751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:13.313 [2024-07-25 00:17:09.178991] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:13.313 [2024-07-25 00:17:09.179039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:13.313 [2024-07-25 00:17:09.179154] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:13.313 [2024-07-25 00:17:09.179265] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:13.313 [2024-07-25 00:17:09.179358] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80 00:33:13.313 [2024-07-25 00:17:09.179377] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:13.313 [2024-07-25 00:17:09.179492] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:33:13.313 [2024-07-25 00:17:09.179630] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80 00:33:13.313 [2024-07-25 00:17:09.179644] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80 00:33:13.313 [2024-07-25 00:17:09.179741] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:13.313 pt2 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:13.572 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.831 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:13.831 "name": "raid_bdev1", 00:33:13.831 "uuid": "44322516-ee8c-4934-9ee1-f6d068b19c0f", 00:33:13.831 "strip_size_kb": 0, 00:33:13.831 "state": "online", 00:33:13.831 "raid_level": "raid1", 00:33:13.831 "superblock": true, 00:33:13.831 "num_base_bdevs": 2, 00:33:13.831 "num_base_bdevs_discovered": 1, 00:33:13.831 "num_base_bdevs_operational": 1, 00:33:13.831 "base_bdevs_list": [ 00:33:13.831 { 00:33:13.831 "name": null, 00:33:13.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.831 "is_configured": false, 00:33:13.831 "data_offset": 256, 00:33:13.831 "data_size": 7936 00:33:13.831 }, 00:33:13.831 { 00:33:13.831 "name": "pt2", 
00:33:13.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:13.831 "is_configured": true, 00:33:13.831 "data_offset": 256, 00:33:13.831 "data_size": 7936 00:33:13.831 } 00:33:13.831 ] 00:33:13.831 }' 00:33:13.831 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:13.831 00:17:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:14.090 00:17:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:14.349 [2024-07-25 00:17:09.972853] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:14.349 [2024-07-25 00:17:09.972899] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:14.349 [2024-07-25 00:17:09.972968] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:14.349 [2024-07-25 00:17:09.973027] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:14.349 [2024-07-25 00:17:09.973041] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline 00:33:14.349 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:14.349 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:33:14.608 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:33:14.608 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:33:14.608 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:33:14.608 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:14.868 [2024-07-25 00:17:10.481173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:14.868 [2024-07-25 00:17:10.481264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.868 [2024-07-25 00:17:10.481295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:33:14.868 [2024-07-25 00:17:10.481310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.868 [2024-07-25 00:17:10.483527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.868 [2024-07-25 00:17:10.483566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:14.868 [2024-07-25 00:17:10.483679] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:14.868 [2024-07-25 00:17:10.483727] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:14.868 [2024-07-25 00:17:10.483878] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:14.868 [2024-07-25 00:17:10.483895] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:14.868 [2024-07-25 00:17:10.483917] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name 
raid_bdev1, state configuring 00:33:14.868 [2024-07-25 00:17:10.483975] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:14.868 [2024-07-25 00:17:10.484072] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:33:14.868 [2024-07-25 00:17:10.484102] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:14.868 [2024-07-25 00:17:10.484201] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:33:14.868 [2024-07-25 00:17:10.484352] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:33:14.868 [2024-07-25 00:17:10.484373] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:33:14.868 [2024-07-25 00:17:10.484508] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:14.868 pt1 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:14.868 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.127 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:15.127 "name": "raid_bdev1", 00:33:15.127 "uuid": "44322516-ee8c-4934-9ee1-f6d068b19c0f", 00:33:15.127 "strip_size_kb": 0, 00:33:15.127 "state": "online", 00:33:15.127 "raid_level": "raid1", 00:33:15.127 "superblock": true, 00:33:15.127 "num_base_bdevs": 2, 00:33:15.127 "num_base_bdevs_discovered": 1, 00:33:15.127 "num_base_bdevs_operational": 1, 00:33:15.127 "base_bdevs_list": [ 00:33:15.127 { 00:33:15.127 "name": null, 00:33:15.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:15.127 "is_configured": false, 00:33:15.127 "data_offset": 256, 00:33:15.127 "data_size": 7936 00:33:15.127 }, 00:33:15.127 { 00:33:15.127 "name": "pt2", 00:33:15.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:15.127 "is_configured": true, 00:33:15.127 "data_offset": 256, 00:33:15.127 "data_size": 7936 00:33:15.127 } 
00:33:15.127 ] 00:33:15.127 }' 00:33:15.127 00:17:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:15.127 00:17:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:15.386 00:17:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:33:15.386 00:17:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:15.645 00:17:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:33:15.645 00:17:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:15.645 00:17:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:33:15.904 [2024-07-25 00:17:11.614684] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # '[' 44322516-ee8c-4934-9ee1-f6d068b19c0f '!=' 44322516-ee8c-4934-9ee1-f6d068b19c0f ']' 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@578 -- # killprocess 112014 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 112014 ']' 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 112014 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112014 00:33:15.904 killing process with pid 112014 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112014' 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 112014 00:33:15.904 [2024-07-25 00:17:11.660741] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:15.904 00:17:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 112014 00:33:15.904 [2024-07-25 00:17:11.660861] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:15.904 [2024-07-25 00:17:11.660916] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:15.904 [2024-07-25 00:17:11.660933] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:33:16.163 [2024-07-25 00:17:11.811109] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:17.101 ************************************ 00:33:17.101 END TEST raid_superblock_test_md_separate 00:33:17.101 ************************************ 00:33:17.101 00:17:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@580 -- # return 0 00:33:17.101 00:33:17.101 real 0m13.096s 00:33:17.101 user 0m22.414s 00:33:17.101 sys 0m2.047s 00:33:17.101 00:17:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:17.101 00:17:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:17.101 00:17:12 bdev_raid -- bdev/bdev_raid.sh@987 -- # '[' true = true ']' 00:33:17.101 00:17:12 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:33:17.101 00:17:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:33:17.101 00:17:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:17.101 00:17:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:17.101 ************************************ 00:33:17.101 START TEST raid_rebuild_test_sb_md_separate 00:33:17.101 ************************************ 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # local verify=true 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # local strip_size 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # local create_arg 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@594 -- # local data_offset 00:33:17.101 00:17:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # raid_pid=112479 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # waitforlisten 112479 /var/tmp/spdk-raid.sock 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 112479 ']' 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:17.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:17.101 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:17.102 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:17.102 00:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:17.102 [2024-07-25 00:17:12.840272] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:33:17.102 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:17.102 Zero copy mechanism will not be used. 
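The startup messages above show the pattern the whole test rides on: bdevperf is launched as a long-lived RPC server on a private UNIX socket (-r /var/tmp/spdk-raid.sock), -z holds the I/O workload until it is requested over RPC, and every later step in the trace is a scripts/rpc.py -s <socket> call against that process. A minimal sketch of the same pattern, assuming an SPDK checkout at $SPDK_DIR (waitforlisten is the harness helper from autotest_common.sh that the trace itself uses):

  # start bdevperf as an idle RPC server on a private socket, as traced above
  "$SPDK_DIR"/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
      -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # block until the socket accepts RPCs
  # from here on, drive it with: $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock <method>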
00:33:17.102 [2024-07-25 00:17:12.841110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112479 ] 00:33:17.361 [2024-07-25 00:17:13.009068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.361 [2024-07-25 00:17:13.231112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.620 [2024-07-25 00:17:13.373054] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:18.188 00:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:18.188 00:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:33:18.188 00:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:33:18.188 00:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:33:18.188 BaseBdev1_malloc 00:33:18.188 00:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:18.448 [2024-07-25 00:17:14.140652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:18.448 [2024-07-25 00:17:14.140728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:18.448 [2024-07-25 00:17:14.140753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:33:18.448 [2024-07-25 00:17:14.140768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:18.448 [2024-07-25 00:17:14.142673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:18.448 [2024-07-25 00:17:14.142765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:18.448 BaseBdev1 00:33:18.448 00:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:33:18.448 00:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:33:18.707 BaseBdev2_malloc 00:33:18.707 00:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:18.707 [2024-07-25 00:17:14.555707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:18.707 [2024-07-25 00:17:14.555808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:18.707 [2024-07-25 00:17:14.555850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:33:18.707 [2024-07-25 00:17:14.555868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:18.707 [2024-07-25 00:17:14.557890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:18.707 [2024-07-25 00:17:14.557950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev2 00:33:18.707 BaseBdev2 00:33:18.707 00:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:33:18.966 spare_malloc 00:33:18.966 00:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:19.225 spare_delay 00:33:19.225 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:19.484 [2024-07-25 00:17:15.244401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:19.484 [2024-07-25 00:17:15.244476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:19.484 [2024-07-25 00:17:15.244500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:33:19.484 [2024-07-25 00:17:15.244515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:19.484 [2024-07-25 00:17:15.246433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:19.484 [2024-07-25 00:17:15.246506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:19.484 spare 00:33:19.484 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:33:19.743 [2024-07-25 00:17:15.432524] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:19.743 [2024-07-25 00:17:15.434390] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:19.743 [2024-07-25 00:17:15.434588] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:33:19.743 [2024-07-25 00:17:15.434605] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:19.743 [2024-07-25 00:17:15.434760] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:33:19.743 [2024-07-25 00:17:15.434896] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:33:19.743 [2024-07-25 00:17:15.434923] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:33:19.743 [2024-07-25 00:17:15.435026] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:19.743 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:19.743 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:19.743 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:19.743 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:19.743 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:19.743 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:19.743 00:17:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:19.743 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:19.743 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:19.743 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:19.743 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.743 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.003 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:20.003 "name": "raid_bdev1", 00:33:20.003 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:20.003 "strip_size_kb": 0, 00:33:20.003 "state": "online", 00:33:20.003 "raid_level": "raid1", 00:33:20.003 "superblock": true, 00:33:20.003 "num_base_bdevs": 2, 00:33:20.003 "num_base_bdevs_discovered": 2, 00:33:20.003 "num_base_bdevs_operational": 2, 00:33:20.003 "base_bdevs_list": [ 00:33:20.003 { 00:33:20.003 "name": "BaseBdev1", 00:33:20.003 "uuid": "233c4186-23a5-59ad-b93e-db5b91558f7a", 00:33:20.003 "is_configured": true, 00:33:20.003 "data_offset": 256, 00:33:20.003 "data_size": 7936 00:33:20.003 }, 00:33:20.003 { 00:33:20.003 "name": "BaseBdev2", 00:33:20.003 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:20.003 "is_configured": true, 00:33:20.003 "data_offset": 256, 00:33:20.003 "data_size": 7936 00:33:20.003 } 00:33:20.003 ] 00:33:20.003 }' 00:33:20.003 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:20.003 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:20.264 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:20.264 00:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:33:20.537 [2024-07-25 00:17:16.149049] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 
-- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:20.537 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:20.807 [2024-07-25 00:17:16.520857] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:33:20.807 /dev/nbd0 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:20.807 1+0 records in 00:33:20.807 1+0 records out 00:33:20.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019269 s, 21.3 MB/s 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:33:20.807 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:20.808 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:20.808 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:33:20.808 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:20.808 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
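Everything now visible at /dev/nbd0 was assembled over RPC on the lines above: two malloc bdevs of 32 MiB with 4 KiB blocks and 32 bytes of per-block metadata (-m 32; the _md_separate variants keep that metadata separate from the data), each wrapped in a passthru bdev, mirrored as raid1 with an on-disk superblock (-s), and finally exported to the kernel via NBD. Condensed into a sketch, with $rpc introduced here as shorthand for the rpc.py invocation used throughout the trace:

  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc     # 8192 blocks of 4 KiB + 32 B metadata
  $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  $rpc bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc
  $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
  $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1   # -s: persist a superblock
  $rpc nbd_start_disk raid_bdev1 /dev/nbd0    # expose the array as a kernel block device

The sizes in the JSON dumps follow directly: 32 MiB / 4 KiB gives 8192 blocks per base bdev, the superblock region claims the first 256 of them, and the array therefore reports data_offset 256 and data_size 7936 (the dd fill just below writes exactly those 7936 blocks, and the later cmp -i 1048576 skips exactly that 1 MiB superblock region). The spare built earlier is the same stack with a delay bdev spliced in (bdev_delay_create ... -w 100000 -n 100000), which pads write latency so the upcoming rebuild runs slowly enough to be observed mid-flight.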
00:33:20.808 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:33:20.808 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:33:20.808 00:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:33:21.744 7936+0 records in 00:33:21.744 7936+0 records out 00:33:21.744 32505856 bytes (33 MB, 31 MiB) copied, 0.827394 s, 39.3 MB/s 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:21.744 [2024-07-25 00:17:17.569842] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:21.744 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:21.745 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:21.745 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:33:21.745 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:33:21.745 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:22.004 [2024-07-25 00:17:17.823221] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:22.004 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:22.004 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:22.004 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:22.004 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:22.004 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:22.004 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:22.004 00:17:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:22.004 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:22.004 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:22.004 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:22.004 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.004 00:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.263 00:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:22.263 "name": "raid_bdev1", 00:33:22.263 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:22.263 "strip_size_kb": 0, 00:33:22.263 "state": "online", 00:33:22.263 "raid_level": "raid1", 00:33:22.263 "superblock": true, 00:33:22.263 "num_base_bdevs": 2, 00:33:22.263 "num_base_bdevs_discovered": 1, 00:33:22.263 "num_base_bdevs_operational": 1, 00:33:22.263 "base_bdevs_list": [ 00:33:22.263 { 00:33:22.263 "name": null, 00:33:22.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.263 "is_configured": false, 00:33:22.263 "data_offset": 256, 00:33:22.263 "data_size": 7936 00:33:22.263 }, 00:33:22.263 { 00:33:22.263 "name": "BaseBdev2", 00:33:22.263 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:22.263 "is_configured": true, 00:33:22.263 "data_offset": 256, 00:33:22.263 "data_size": 7936 00:33:22.263 } 00:33:22.263 ] 00:33:22.263 }' 00:33:22.263 00:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:22.263 00:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:22.522 00:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:22.781 [2024-07-25 00:17:18.479494] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:22.781 [2024-07-25 00:17:18.490283] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00019fe30 00:33:22.781 [2024-07-25 00:17:18.492159] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:22.781 00:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:23.717 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:23.717 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:23.717 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:23.718 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:23.718 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:23.718 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.718 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.977 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:23.977 "name": "raid_bdev1", 00:33:23.977 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:23.977 "strip_size_kb": 0, 00:33:23.977 "state": "online", 00:33:23.977 "raid_level": "raid1", 00:33:23.977 "superblock": true, 00:33:23.977 "num_base_bdevs": 2, 00:33:23.977 "num_base_bdevs_discovered": 2, 00:33:23.977 "num_base_bdevs_operational": 2, 00:33:23.977 "process": { 00:33:23.977 "type": "rebuild", 00:33:23.977 "target": "spare", 00:33:23.977 "progress": { 00:33:23.977 "blocks": 3072, 00:33:23.977 "percent": 38 00:33:23.977 } 00:33:23.977 }, 00:33:23.977 "base_bdevs_list": [ 00:33:23.977 { 00:33:23.977 "name": "spare", 00:33:23.977 "uuid": "27e54d80-c389-557e-8b02-18ea909b7c76", 00:33:23.977 "is_configured": true, 00:33:23.977 "data_offset": 256, 00:33:23.977 "data_size": 7936 00:33:23.977 }, 00:33:23.977 { 00:33:23.977 "name": "BaseBdev2", 00:33:23.977 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:23.977 "is_configured": true, 00:33:23.977 "data_offset": 256, 00:33:23.977 "data_size": 7936 00:33:23.977 } 00:33:23.977 ] 00:33:23.977 }' 00:33:23.977 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:23.977 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:23.977 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:23.977 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:23.977 00:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:24.236 [2024-07-25 00:17:20.022623] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:24.236 [2024-07-25 00:17:20.099246] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:24.236 [2024-07-25 00:17:20.099307] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:24.236 [2024-07-25 00:17:20.099326] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:24.236 [2024-07-25 00:17:20.099340] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- 
# local num_base_bdevs 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.495 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.754 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:24.754 "name": "raid_bdev1", 00:33:24.754 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:24.754 "strip_size_kb": 0, 00:33:24.754 "state": "online", 00:33:24.754 "raid_level": "raid1", 00:33:24.754 "superblock": true, 00:33:24.754 "num_base_bdevs": 2, 00:33:24.754 "num_base_bdevs_discovered": 1, 00:33:24.754 "num_base_bdevs_operational": 1, 00:33:24.754 "base_bdevs_list": [ 00:33:24.754 { 00:33:24.754 "name": null, 00:33:24.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.754 "is_configured": false, 00:33:24.754 "data_offset": 256, 00:33:24.754 "data_size": 7936 00:33:24.754 }, 00:33:24.754 { 00:33:24.754 "name": "BaseBdev2", 00:33:24.754 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:24.754 "is_configured": true, 00:33:24.754 "data_offset": 256, 00:33:24.754 "data_size": 7936 00:33:24.754 } 00:33:24.754 ] 00:33:24.754 }' 00:33:24.754 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:24.754 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:25.013 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:25.013 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:25.013 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:25.013 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:25.013 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:25.013 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.013 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:25.273 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:25.273 "name": "raid_bdev1", 00:33:25.273 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:25.273 "strip_size_kb": 0, 00:33:25.273 "state": "online", 00:33:25.273 "raid_level": "raid1", 00:33:25.273 "superblock": true, 00:33:25.273 "num_base_bdevs": 2, 00:33:25.273 "num_base_bdevs_discovered": 1, 00:33:25.273 "num_base_bdevs_operational": 1, 00:33:25.273 "base_bdevs_list": [ 00:33:25.273 { 00:33:25.273 "name": null, 00:33:25.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:25.273 "is_configured": false, 00:33:25.273 "data_offset": 256, 00:33:25.273 "data_size": 7936 00:33:25.273 }, 00:33:25.273 { 00:33:25.273 "name": "BaseBdev2", 00:33:25.273 "uuid": 
"ee79970e-aa50-585d-be0b-54683ba33997", 00:33:25.273 "is_configured": true, 00:33:25.273 "data_offset": 256, 00:33:25.273 "data_size": 7936 00:33:25.273 } 00:33:25.273 ] 00:33:25.273 }' 00:33:25.273 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:25.273 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:25.273 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:25.273 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:25.273 00:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:25.273 [2024-07-25 00:17:21.111054] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:25.273 [2024-07-25 00:17:21.121024] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00019ff00 00:33:25.273 [2024-07-25 00:17:21.122932] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:25.273 00:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@678 -- # sleep 1 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:26.650 "name": "raid_bdev1", 00:33:26.650 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:26.650 "strip_size_kb": 0, 00:33:26.650 "state": "online", 00:33:26.650 "raid_level": "raid1", 00:33:26.650 "superblock": true, 00:33:26.650 "num_base_bdevs": 2, 00:33:26.650 "num_base_bdevs_discovered": 2, 00:33:26.650 "num_base_bdevs_operational": 2, 00:33:26.650 "process": { 00:33:26.650 "type": "rebuild", 00:33:26.650 "target": "spare", 00:33:26.650 "progress": { 00:33:26.650 "blocks": 3072, 00:33:26.650 "percent": 38 00:33:26.650 } 00:33:26.650 }, 00:33:26.650 "base_bdevs_list": [ 00:33:26.650 { 00:33:26.650 "name": "spare", 00:33:26.650 "uuid": "27e54d80-c389-557e-8b02-18ea909b7c76", 00:33:26.650 "is_configured": true, 00:33:26.650 "data_offset": 256, 00:33:26.650 "data_size": 7936 00:33:26.650 }, 00:33:26.650 { 00:33:26.650 "name": "BaseBdev2", 00:33:26.650 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:26.650 "is_configured": true, 00:33:26.650 "data_offset": 256, 00:33:26.650 "data_size": 7936 00:33:26.650 } 00:33:26.650 ] 00:33:26.650 }' 00:33:26.650 00:17:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:33:26.650 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # local timeout=1232 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:26.650 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.908 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:26.908 "name": "raid_bdev1", 00:33:26.908 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:26.908 "strip_size_kb": 0, 00:33:26.908 "state": "online", 00:33:26.908 "raid_level": "raid1", 00:33:26.908 "superblock": true, 00:33:26.908 "num_base_bdevs": 2, 00:33:26.908 "num_base_bdevs_discovered": 2, 00:33:26.908 "num_base_bdevs_operational": 2, 00:33:26.908 "process": { 00:33:26.908 "type": "rebuild", 00:33:26.908 "target": "spare", 00:33:26.908 "progress": { 00:33:26.908 "blocks": 3584, 00:33:26.908 "percent": 45 00:33:26.908 } 00:33:26.908 }, 00:33:26.908 "base_bdevs_list": [ 00:33:26.908 { 00:33:26.908 "name": "spare", 00:33:26.908 "uuid": "27e54d80-c389-557e-8b02-18ea909b7c76", 00:33:26.908 "is_configured": true, 00:33:26.908 "data_offset": 256, 00:33:26.908 "data_size": 7936 00:33:26.908 }, 00:33:26.908 { 00:33:26.908 "name": "BaseBdev2", 00:33:26.908 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:26.908 "is_configured": true, 00:33:26.908 "data_offset": 256, 00:33:26.908 "data_size": 7936 00:33:26.908 } 00:33:26.908 ] 00:33:26.908 }' 00:33:26.908 
00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:26.908 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:26.908 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:26.908 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:26.908 00:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:33:27.844 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:27.844 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:27.844 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:27.844 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:27.844 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:27.844 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:27.844 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.844 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.103 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:28.103 "name": "raid_bdev1", 00:33:28.103 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:28.103 "strip_size_kb": 0, 00:33:28.103 "state": "online", 00:33:28.103 "raid_level": "raid1", 00:33:28.103 "superblock": true, 00:33:28.103 "num_base_bdevs": 2, 00:33:28.103 "num_base_bdevs_discovered": 2, 00:33:28.103 "num_base_bdevs_operational": 2, 00:33:28.103 "process": { 00:33:28.103 "type": "rebuild", 00:33:28.103 "target": "spare", 00:33:28.103 "progress": { 00:33:28.103 "blocks": 6912, 00:33:28.103 "percent": 87 00:33:28.103 } 00:33:28.103 }, 00:33:28.103 "base_bdevs_list": [ 00:33:28.103 { 00:33:28.103 "name": "spare", 00:33:28.103 "uuid": "27e54d80-c389-557e-8b02-18ea909b7c76", 00:33:28.103 "is_configured": true, 00:33:28.103 "data_offset": 256, 00:33:28.103 "data_size": 7936 00:33:28.103 }, 00:33:28.103 { 00:33:28.103 "name": "BaseBdev2", 00:33:28.103 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:28.103 "is_configured": true, 00:33:28.103 "data_offset": 256, 00:33:28.103 "data_size": 7936 00:33:28.103 } 00:33:28.103 ] 00:33:28.103 }' 00:33:28.103 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:28.103 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:28.103 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:28.103 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:28.103 00:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:33:28.670 [2024-07-25 00:17:24.236082] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: 
process completed on raid_bdev1 00:33:28.670 [2024-07-25 00:17:24.236151] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:28.670 [2024-07-25 00:17:24.236270] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:29.236 00:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:29.236 00:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:29.236 00:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:29.236 00:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:29.236 00:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:29.236 00:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:29.236 00:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.236 00:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:29.494 "name": "raid_bdev1", 00:33:29.494 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:29.494 "strip_size_kb": 0, 00:33:29.494 "state": "online", 00:33:29.494 "raid_level": "raid1", 00:33:29.494 "superblock": true, 00:33:29.494 "num_base_bdevs": 2, 00:33:29.494 "num_base_bdevs_discovered": 2, 00:33:29.494 "num_base_bdevs_operational": 2, 00:33:29.494 "base_bdevs_list": [ 00:33:29.494 { 00:33:29.494 "name": "spare", 00:33:29.494 "uuid": "27e54d80-c389-557e-8b02-18ea909b7c76", 00:33:29.494 "is_configured": true, 00:33:29.494 "data_offset": 256, 00:33:29.494 "data_size": 7936 00:33:29.494 }, 00:33:29.494 { 00:33:29.494 "name": "BaseBdev2", 00:33:29.494 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:29.494 "is_configured": true, 00:33:29.494 "data_offset": 256, 00:33:29.494 "data_size": 7936 00:33:29.494 } 00:33:29.494 ] 00:33:29.494 }' 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@724 -- # break 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
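With the rebuild reported complete above, the wait loop's exit condition plays out below: the next poll of bdev_raid_get_bdevs comes back without a .process block, the process.type check no longer matches rebuild, and the script breaks out. The polling idiom, reduced to a sketch that uses only the RPC and jq filters seen in the trace ($rpc as defined earlier; 1232 is the timeout value this run computed):

  while (( SECONDS < 1232 )); do
      info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      # .process only exists while a background process such as a rebuild is running
      [[ "$(jq -r '.process.type // "none"' <<< "$info")" != rebuild ]] && break
      sleep 1
  done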
00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.494 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:29.753 "name": "raid_bdev1", 00:33:29.753 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:29.753 "strip_size_kb": 0, 00:33:29.753 "state": "online", 00:33:29.753 "raid_level": "raid1", 00:33:29.753 "superblock": true, 00:33:29.753 "num_base_bdevs": 2, 00:33:29.753 "num_base_bdevs_discovered": 2, 00:33:29.753 "num_base_bdevs_operational": 2, 00:33:29.753 "base_bdevs_list": [ 00:33:29.753 { 00:33:29.753 "name": "spare", 00:33:29.753 "uuid": "27e54d80-c389-557e-8b02-18ea909b7c76", 00:33:29.753 "is_configured": true, 00:33:29.753 "data_offset": 256, 00:33:29.753 "data_size": 7936 00:33:29.753 }, 00:33:29.753 { 00:33:29.753 "name": "BaseBdev2", 00:33:29.753 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:29.753 "is_configured": true, 00:33:29.753 "data_offset": 256, 00:33:29.753 "data_size": 7936 00:33:29.753 } 00:33:29.753 ] 00:33:29.753 }' 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.753 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.012 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:30.012 "name": "raid_bdev1", 00:33:30.012 "uuid": 
"a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:30.012 "strip_size_kb": 0, 00:33:30.012 "state": "online", 00:33:30.012 "raid_level": "raid1", 00:33:30.012 "superblock": true, 00:33:30.012 "num_base_bdevs": 2, 00:33:30.012 "num_base_bdevs_discovered": 2, 00:33:30.012 "num_base_bdevs_operational": 2, 00:33:30.012 "base_bdevs_list": [ 00:33:30.012 { 00:33:30.012 "name": "spare", 00:33:30.012 "uuid": "27e54d80-c389-557e-8b02-18ea909b7c76", 00:33:30.012 "is_configured": true, 00:33:30.012 "data_offset": 256, 00:33:30.012 "data_size": 7936 00:33:30.012 }, 00:33:30.012 { 00:33:30.012 "name": "BaseBdev2", 00:33:30.012 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:30.012 "is_configured": true, 00:33:30.012 "data_offset": 256, 00:33:30.012 "data_size": 7936 00:33:30.012 } 00:33:30.012 ] 00:33:30.012 }' 00:33:30.012 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:30.012 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:30.271 00:17:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:30.530 [2024-07-25 00:17:26.235112] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:30.530 [2024-07-25 00:17:26.235178] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:30.530 [2024-07-25 00:17:26.235267] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:30.530 [2024-07-25 00:17:26.235348] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:30.530 [2024-07-25 00:17:26.235362] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:33:30.530 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.530 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # jq length 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:30.789 00:17:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:30.789 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:30.789 /dev/nbd0 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:31.049 1+0 records in 00:33:31.049 1+0 records out 00:33:31.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273835 s, 15.0 MB/s 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:33:31.049 /dev/nbd1 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:31.049 1+0 records in 00:33:31.049 1+0 records out 00:33:31.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265601 s, 15.4 MB/s 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:31.049 00:17:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:31.308 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:33:31.308 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:31.308 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:31.308 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:31.308 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:33:31.308 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:31.308 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:31.567 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:31.567 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:31.567 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:31.567 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:31.567 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:31.567 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:31.567 00:17:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:33:31.567 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:33:31.567 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:31.567 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:31.826 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:31.826 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:31.826 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:31.826 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:31.826 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:31.826 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:31.826 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:33:31.826 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:33:31.826 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:33:31.826 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:32.085 00:17:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:32.344 [2024-07-25 00:17:28.037340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:32.344 [2024-07-25 00:17:28.037438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:32.344 [2024-07-25 00:17:28.037476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:33:32.344 [2024-07-25 00:17:28.037489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:32.344 [2024-07-25 00:17:28.039459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:32.344 [2024-07-25 00:17:28.039517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:32.344 [2024-07-25 00:17:28.039639] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:32.344 [2024-07-25 00:17:28.039689] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:32.344 [2024-07-25 00:17:28.039837] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:32.344 spare 00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:32.344 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.344 [2024-07-25 00:17:28.139945] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:33:32.344 [2024-07-25 00:17:28.140001] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:32.344 [2024-07-25 00:17:28.140137] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001c1670 00:33:32.344 [2024-07-25 00:17:28.140346] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:33:32.344 [2024-07-25 00:17:28.140362] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:33:32.344 [2024-07-25 00:17:28.140475] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:32.604 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:32.604 "name": "raid_bdev1", 00:33:32.604 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:32.604 "strip_size_kb": 0, 00:33:32.604 "state": "online", 00:33:32.604 "raid_level": "raid1", 00:33:32.604 "superblock": true, 00:33:32.604 "num_base_bdevs": 2, 00:33:32.604 "num_base_bdevs_discovered": 2, 00:33:32.604 "num_base_bdevs_operational": 2, 00:33:32.604 "base_bdevs_list": [ 00:33:32.604 { 00:33:32.604 "name": "spare", 00:33:32.604 "uuid": "27e54d80-c389-557e-8b02-18ea909b7c76", 00:33:32.604 "is_configured": true, 00:33:32.604 "data_offset": 256, 00:33:32.604 "data_size": 7936 00:33:32.604 }, 00:33:32.604 { 00:33:32.604 "name": "BaseBdev2", 00:33:32.604 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:32.604 "is_configured": true, 00:33:32.604 "data_offset": 256, 00:33:32.604 "data_size": 7936 00:33:32.604 } 00:33:32.604 ] 00:33:32.604 }' 00:33:32.604 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:32.604 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:32.863 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:32.863 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:32.863 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:32.863 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:32.863 00:17:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:32.863 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:32.863 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:33.122 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:33.122 "name": "raid_bdev1", 00:33:33.122 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:33.122 "strip_size_kb": 0, 00:33:33.122 "state": "online", 00:33:33.122 "raid_level": "raid1", 00:33:33.122 "superblock": true, 00:33:33.122 "num_base_bdevs": 2, 00:33:33.122 "num_base_bdevs_discovered": 2, 00:33:33.122 "num_base_bdevs_operational": 2, 00:33:33.122 "base_bdevs_list": [ 00:33:33.122 { 00:33:33.122 "name": "spare", 00:33:33.122 "uuid": "27e54d80-c389-557e-8b02-18ea909b7c76", 00:33:33.122 "is_configured": true, 00:33:33.122 "data_offset": 256, 00:33:33.122 "data_size": 7936 00:33:33.122 }, 00:33:33.122 { 00:33:33.122 "name": "BaseBdev2", 00:33:33.122 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:33.122 "is_configured": true, 00:33:33.122 "data_offset": 256, 00:33:33.122 "data_size": 7936 00:33:33.122 } 00:33:33.122 ] 00:33:33.122 }' 00:33:33.122 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:33.122 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:33.122 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:33.122 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:33.122 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.122 00:17:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:33.382 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:33:33.382 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:33.641 [2024-07-25 00:17:29.261709] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:33.641 00:17:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:33.641 "name": "raid_bdev1", 00:33:33.641 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:33.641 "strip_size_kb": 0, 00:33:33.641 "state": "online", 00:33:33.641 "raid_level": "raid1", 00:33:33.641 "superblock": true, 00:33:33.641 "num_base_bdevs": 2, 00:33:33.641 "num_base_bdevs_discovered": 1, 00:33:33.641 "num_base_bdevs_operational": 1, 00:33:33.641 "base_bdevs_list": [ 00:33:33.641 { 00:33:33.641 "name": null, 00:33:33.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:33.641 "is_configured": false, 00:33:33.641 "data_offset": 256, 00:33:33.641 "data_size": 7936 00:33:33.641 }, 00:33:33.641 { 00:33:33.641 "name": "BaseBdev2", 00:33:33.641 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:33.641 "is_configured": true, 00:33:33.641 "data_offset": 256, 00:33:33.641 "data_size": 7936 00:33:33.641 } 00:33:33.641 ] 00:33:33.641 }' 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:33.641 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:34.210 00:17:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:34.210 [2024-07-25 00:17:30.061963] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:34.210 [2024-07-25 00:17:30.062161] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:34.210 [2024-07-25 00:17:30.062183] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
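Annotation (editor's sketch, not part of the captured output): the records above show the degrade/re-add cycle — bdev_raid_remove_base_bdev drops the spare leg (the array stays online with 1 of 2 base bdevs), then re-creating the passthru bdev lets superblock examine re-add it (seq 4 < 5) and start the rebuild. A minimal hand-driven version of the same RPC flow, assuming the app is still serving /var/tmp/spdk-raid.sock as in this run:

    #!/usr/bin/env bash
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Drop the 'spare' leg: raid_bdev1 stays online but degrades to 1/2 operational.
    $RPC bdev_raid_remove_base_bdev spare
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # -> 1

    # Re-create the delayed passthru; examine finds the raid superblock on it
    # and re-adds the bdev, which kicks off the rebuild logged above.
    $RPC bdev_passthru_create -b spare_delay -p spare
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'      # -> rebuild

Every RPC method and jq filter here appears verbatim in the log; only the standalone-script framing is added.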
00:33:34.210 [2024-07-25 00:17:30.062243] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:34.210 [2024-07-25 00:17:30.071948] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001c1740 00:33:34.210 [2024-07-25 00:17:30.073960] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:34.469 00:17:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # sleep 1 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:35.506 "name": "raid_bdev1", 00:33:35.506 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:35.506 "strip_size_kb": 0, 00:33:35.506 "state": "online", 00:33:35.506 "raid_level": "raid1", 00:33:35.506 "superblock": true, 00:33:35.506 "num_base_bdevs": 2, 00:33:35.506 "num_base_bdevs_discovered": 2, 00:33:35.506 "num_base_bdevs_operational": 2, 00:33:35.506 "process": { 00:33:35.506 "type": "rebuild", 00:33:35.506 "target": "spare", 00:33:35.506 "progress": { 00:33:35.506 "blocks": 3072, 00:33:35.506 "percent": 38 00:33:35.506 } 00:33:35.506 }, 00:33:35.506 "base_bdevs_list": [ 00:33:35.506 { 00:33:35.506 "name": "spare", 00:33:35.506 "uuid": "27e54d80-c389-557e-8b02-18ea909b7c76", 00:33:35.506 "is_configured": true, 00:33:35.506 "data_offset": 256, 00:33:35.506 "data_size": 7936 00:33:35.506 }, 00:33:35.506 { 00:33:35.506 "name": "BaseBdev2", 00:33:35.506 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:35.506 "is_configured": true, 00:33:35.506 "data_offset": 256, 00:33:35.506 "data_size": 7936 00:33:35.506 } 00:33:35.506 ] 00:33:35.506 }' 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:35.506 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:35.777 [2024-07-25 00:17:31.596700] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:36.037 [2024-07-25 00:17:31.681095] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:33:36.037 [2024-07-25 00:17:31.681175] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:36.037 [2024-07-25 00:17:31.681195] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:36.037 [2024-07-25 00:17:31.681206] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:36.037 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.296 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:36.296 "name": "raid_bdev1", 00:33:36.296 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:36.296 "strip_size_kb": 0, 00:33:36.296 "state": "online", 00:33:36.296 "raid_level": "raid1", 00:33:36.296 "superblock": true, 00:33:36.296 "num_base_bdevs": 2, 00:33:36.296 "num_base_bdevs_discovered": 1, 00:33:36.296 "num_base_bdevs_operational": 1, 00:33:36.296 "base_bdevs_list": [ 00:33:36.296 { 00:33:36.296 "name": null, 00:33:36.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:36.296 "is_configured": false, 00:33:36.296 "data_offset": 256, 00:33:36.296 "data_size": 7936 00:33:36.296 }, 00:33:36.296 { 00:33:36.296 "name": "BaseBdev2", 00:33:36.296 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:36.296 "is_configured": true, 00:33:36.296 "data_offset": 256, 00:33:36.296 "data_size": 7936 00:33:36.296 } 00:33:36.296 ] 00:33:36.296 }' 00:33:36.296 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:36.296 00:17:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:36.556 00:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:36.815 [2024-07-25 00:17:32.515611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:36.815 [2024-07-25 00:17:32.515690] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:36.815 [2024-07-25 00:17:32.515719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:33:36.815 [2024-07-25 00:17:32.515734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:36.815 [2024-07-25 00:17:32.516021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:36.815 [2024-07-25 00:17:32.516058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:36.815 [2024-07-25 00:17:32.516155] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:36.815 [2024-07-25 00:17:32.516177] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:36.815 [2024-07-25 00:17:32.516189] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:36.815 [2024-07-25 00:17:32.516213] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:36.815 [2024-07-25 00:17:32.525840] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001c1810 00:33:36.815 spare 00:33:36.815 [2024-07-25 00:17:32.527775] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:36.815 00:17:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # sleep 1 00:33:37.751 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:37.751 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:37.751 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:37.751 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:37.751 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:37.751 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.751 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.010 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:38.010 "name": "raid_bdev1", 00:33:38.010 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:38.010 "strip_size_kb": 0, 00:33:38.010 "state": "online", 00:33:38.010 "raid_level": "raid1", 00:33:38.010 "superblock": true, 00:33:38.010 "num_base_bdevs": 2, 00:33:38.010 "num_base_bdevs_discovered": 2, 00:33:38.010 "num_base_bdevs_operational": 2, 00:33:38.010 "process": { 00:33:38.010 "type": "rebuild", 00:33:38.010 "target": "spare", 00:33:38.010 "progress": { 00:33:38.010 "blocks": 3072, 00:33:38.010 "percent": 38 00:33:38.010 } 00:33:38.010 }, 00:33:38.010 "base_bdevs_list": [ 00:33:38.010 { 00:33:38.010 "name": "spare", 00:33:38.010 "uuid": "27e54d80-c389-557e-8b02-18ea909b7c76", 00:33:38.010 "is_configured": true, 00:33:38.010 "data_offset": 256, 00:33:38.010 "data_size": 7936 00:33:38.010 }, 00:33:38.010 { 00:33:38.010 "name": "BaseBdev2", 00:33:38.010 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:38.010 "is_configured": true, 00:33:38.010 
"data_offset": 256, 00:33:38.010 "data_size": 7936 00:33:38.010 } 00:33:38.010 ] 00:33:38.010 }' 00:33:38.010 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:38.010 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:38.010 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:38.010 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:38.010 00:17:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:38.268 [2024-07-25 00:17:34.054367] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:38.268 [2024-07-25 00:17:34.134736] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:38.268 [2024-07-25 00:17:34.134833] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:38.268 [2024-07-25 00:17:34.134860] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:38.268 [2024-07-25 00:17:34.134870] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:38.527 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.786 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:38.786 "name": "raid_bdev1", 00:33:38.786 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:38.786 "strip_size_kb": 0, 00:33:38.786 "state": "online", 00:33:38.786 "raid_level": "raid1", 00:33:38.786 "superblock": true, 00:33:38.786 "num_base_bdevs": 2, 00:33:38.786 "num_base_bdevs_discovered": 1, 00:33:38.786 "num_base_bdevs_operational": 1, 00:33:38.786 "base_bdevs_list": [ 00:33:38.786 { 00:33:38.786 "name": null, 00:33:38.786 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:38.786 "is_configured": false, 00:33:38.786 "data_offset": 256, 00:33:38.786 "data_size": 7936 00:33:38.786 }, 00:33:38.786 { 00:33:38.786 "name": "BaseBdev2", 00:33:38.786 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:38.786 "is_configured": true, 00:33:38.786 "data_offset": 256, 00:33:38.786 "data_size": 7936 00:33:38.786 } 00:33:38.786 ] 00:33:38.786 }' 00:33:38.786 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:38.786 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:39.046 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:39.046 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:39.046 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:39.046 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:39.046 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:39.046 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.046 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.304 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:39.304 "name": "raid_bdev1", 00:33:39.304 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:39.304 "strip_size_kb": 0, 00:33:39.304 "state": "online", 00:33:39.304 "raid_level": "raid1", 00:33:39.304 "superblock": true, 00:33:39.304 "num_base_bdevs": 2, 00:33:39.304 "num_base_bdevs_discovered": 1, 00:33:39.304 "num_base_bdevs_operational": 1, 00:33:39.304 "base_bdevs_list": [ 00:33:39.304 { 00:33:39.304 "name": null, 00:33:39.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.304 "is_configured": false, 00:33:39.304 "data_offset": 256, 00:33:39.304 "data_size": 7936 00:33:39.304 }, 00:33:39.304 { 00:33:39.304 "name": "BaseBdev2", 00:33:39.304 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:39.304 "is_configured": true, 00:33:39.304 "data_offset": 256, 00:33:39.304 "data_size": 7936 00:33:39.304 } 00:33:39.304 ] 00:33:39.304 }' 00:33:39.304 00:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:39.304 00:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:39.304 00:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:39.304 00:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:39.304 00:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:39.562 00:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:39.820 [2024-07-25 00:17:35.438834] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:33:39.820 [2024-07-25 00:17:35.438894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:39.820 [2024-07-25 00:17:35.438927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:33:39.820 [2024-07-25 00:17:35.438940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:39.820 [2024-07-25 00:17:35.439133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:39.820 [2024-07-25 00:17:35.439152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:39.820 [2024-07-25 00:17:35.439235] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:39.820 [2024-07-25 00:17:35.439251] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:39.820 [2024-07-25 00:17:35.439262] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:39.820 BaseBdev1 00:33:39.820 00:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@789 -- # sleep 1 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.755 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:41.014 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:41.014 "name": "raid_bdev1", 00:33:41.014 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:41.014 "strip_size_kb": 0, 00:33:41.014 "state": "online", 00:33:41.014 "raid_level": "raid1", 00:33:41.014 "superblock": true, 00:33:41.014 "num_base_bdevs": 2, 00:33:41.014 "num_base_bdevs_discovered": 1, 00:33:41.014 "num_base_bdevs_operational": 1, 00:33:41.014 "base_bdevs_list": [ 00:33:41.014 { 00:33:41.014 "name": null, 00:33:41.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.014 "is_configured": false, 00:33:41.014 "data_offset": 256, 00:33:41.014 "data_size": 7936 00:33:41.014 }, 00:33:41.014 { 00:33:41.014 "name": 
"BaseBdev2", 00:33:41.014 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:41.014 "is_configured": true, 00:33:41.014 "data_offset": 256, 00:33:41.014 "data_size": 7936 00:33:41.014 } 00:33:41.014 ] 00:33:41.014 }' 00:33:41.014 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:41.014 00:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:41.272 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:41.272 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:41.272 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:41.272 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:41.272 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:41.272 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.272 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:41.531 "name": "raid_bdev1", 00:33:41.531 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:41.531 "strip_size_kb": 0, 00:33:41.531 "state": "online", 00:33:41.531 "raid_level": "raid1", 00:33:41.531 "superblock": true, 00:33:41.531 "num_base_bdevs": 2, 00:33:41.531 "num_base_bdevs_discovered": 1, 00:33:41.531 "num_base_bdevs_operational": 1, 00:33:41.531 "base_bdevs_list": [ 00:33:41.531 { 00:33:41.531 "name": null, 00:33:41.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.531 "is_configured": false, 00:33:41.531 "data_offset": 256, 00:33:41.531 "data_size": 7936 00:33:41.531 }, 00:33:41.531 { 00:33:41.531 "name": "BaseBdev2", 00:33:41.531 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:41.531 "is_configured": true, 00:33:41.531 "data_offset": 256, 00:33:41.531 "data_size": 7936 00:33:41.531 } 00:33:41.531 ] 00:33:41.531 }' 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:41.531 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:41.790 [2024-07-25 00:17:37.511315] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:41.790 [2024-07-25 00:17:37.511477] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:41.790 [2024-07-25 00:17:37.511494] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:41.790 request: 00:33:41.790 { 00:33:41.790 "base_bdev": "BaseBdev1", 00:33:41.790 "raid_bdev": "raid_bdev1", 00:33:41.790 "method": "bdev_raid_add_base_bdev", 00:33:41.790 "req_id": 1 00:33:41.790 } 00:33:41.790 Got JSON-RPC error response 00:33:41.790 response: 00:33:41.790 { 00:33:41.790 "code": -22, 00:33:41.790 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:41.790 } 00:33:41.790 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:33:41.790 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:41.790 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:41.790 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:41.790 00:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@793 -- # sleep 1 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.726 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.984 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:42.984 "name": "raid_bdev1", 00:33:42.984 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:42.984 "strip_size_kb": 0, 00:33:42.984 "state": "online", 00:33:42.984 "raid_level": "raid1", 00:33:42.984 "superblock": true, 00:33:42.984 "num_base_bdevs": 2, 00:33:42.984 "num_base_bdevs_discovered": 1, 00:33:42.984 "num_base_bdevs_operational": 1, 00:33:42.984 "base_bdevs_list": [ 00:33:42.984 { 00:33:42.984 "name": null, 00:33:42.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.984 "is_configured": false, 00:33:42.984 "data_offset": 256, 00:33:42.984 "data_size": 7936 00:33:42.984 }, 00:33:42.984 { 00:33:42.984 "name": "BaseBdev2", 00:33:42.984 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:42.984 "is_configured": true, 00:33:42.984 "data_offset": 256, 00:33:42.984 "data_size": 7936 00:33:42.984 } 00:33:42.984 ] 00:33:42.984 }' 00:33:42.984 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:42.984 00:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:43.242 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:43.242 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:43.242 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:43.242 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:43.242 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:43.242 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.242 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.501 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:43.501 "name": "raid_bdev1", 00:33:43.501 "uuid": "a19c88aa-43bf-4cfb-8f10-eb626ac9a2a2", 00:33:43.501 "strip_size_kb": 0, 00:33:43.501 "state": "online", 00:33:43.501 "raid_level": "raid1", 00:33:43.501 "superblock": true, 00:33:43.501 "num_base_bdevs": 2, 00:33:43.501 "num_base_bdevs_discovered": 1, 00:33:43.501 "num_base_bdevs_operational": 1, 00:33:43.501 "base_bdevs_list": [ 00:33:43.501 { 00:33:43.501 "name": null, 00:33:43.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.501 "is_configured": false, 00:33:43.501 "data_offset": 256, 00:33:43.501 "data_size": 7936 
00:33:43.501 }, 00:33:43.501 { 00:33:43.501 "name": "BaseBdev2", 00:33:43.501 "uuid": "ee79970e-aa50-585d-be0b-54683ba33997", 00:33:43.501 "is_configured": true, 00:33:43.501 "data_offset": 256, 00:33:43.501 "data_size": 7936 00:33:43.501 } 00:33:43.501 ] 00:33:43.501 }' 00:33:43.501 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:43.501 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:43.501 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:43.759 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:43.760 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@798 -- # killprocess 112479 00:33:43.760 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 112479 ']' 00:33:43.760 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 112479 00:33:43.760 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:33:43.760 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:43.760 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112479 00:33:43.760 killing process with pid 112479 00:33:43.760 Received shutdown signal, test time was about 60.000000 seconds 00:33:43.760 00:33:43.760 Latency(us) 00:33:43.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.760 =================================================================================================================== 00:33:43.760 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:43.760 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:43.760 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:43.760 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112479' 00:33:43.760 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 112479 00:33:43.760 [2024-07-25 00:17:39.404328] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:43.760 00:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 112479 00:33:43.760 [2024-07-25 00:17:39.404515] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:43.760 [2024-07-25 00:17:39.404580] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:43.760 [2024-07-25 00:17:39.404596] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:33:43.760 [2024-07-25 00:17:39.607456] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:44.696 00:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@800 -- # return 0 00:33:44.696 ************************************ 00:33:44.696 END TEST raid_rebuild_test_sb_md_separate 00:33:44.696 ************************************ 00:33:44.696 00:33:44.696 real 0m27.756s 00:33:44.696 user 0m40.881s 00:33:44.696 sys 0m3.589s 
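Annotation (editor's sketch, not part of the captured output): raid_rebuild_test_sb_md_separate finished cleanly in roughly 27.8 s wall time. Earlier in the run the rebuilt mirror was verified at the block level by exporting both legs over NBD and comparing them past the metadata region; a condensed sketch of that check, where the 1 MiB skip matches the data_offset of 256 blocks x 4096-byte blocklen reported in the raid info:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $RPC nbd_start_disk BaseBdev1 /dev/nbd0    # original leg
    $RPC nbd_start_disk spare     /dev/nbd1    # rebuilt leg

    # Compare the data areas only; the first 1048576 bytes (256 blocks * 4096 B,
    # the data_offset above) hold per-leg metadata and may legitimately differ.
    cmp -i 1048576 /dev/nbd0 /dev/nbd1 && echo "mirror legs match"

    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1

The device names, RPC methods, and the cmp invocation are taken directly from the log; only the trailing echo is added for illustration.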
00:33:44.696 00:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:44.696 00:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:44.954 00:17:40 bdev_raid -- bdev/bdev_raid.sh@991 -- # base_malloc_params='-m 32 -i' 00:33:44.955 00:17:40 bdev_raid -- bdev/bdev_raid.sh@992 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:33:44.955 00:17:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:44.955 00:17:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:44.955 00:17:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:44.955 ************************************ 00:33:44.955 START TEST raid_state_function_test_sb_md_interleaved 00:33:44.955 ************************************ 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # 
strip_size=0 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:33:44.955 Process raid pid: 113276 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=113276 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 113276' 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 113276 /var/tmp/spdk-raid.sock 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 113276 ']' 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:44.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:44.955 00:17:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:44.955 [2024-07-25 00:17:40.656380] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
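Annotation (editor's sketch, not part of the captured output): pid 113276 is the bdev_svc RPC target launched for this test, and the harness blocks in waitforlisten until the UNIX socket answers. A rough equivalent of what the log records here; the polling loop is an assumption standing in for the real waitforlisten helper in common/autotest_common.sh:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!    # 113276 in this run

    # Poll the RPC socket until the app is up (stand-in for waitforlisten;
    # spdk_get_version is a lightweight, always-available RPC).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done

The bdev_svc command line is copied verbatim from the log; only the background-job bookkeeping and the polling loop are illustrative.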
00:33:44.955 [2024-07-25 00:17:40.656573] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:44.955 [2024-07-25 00:17:40.823790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.214 [2024-07-25 00:17:40.972342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.473 [2024-07-25 00:17:41.113618] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:45.732 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:45.732 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:33:45.732 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:45.991 [2024-07-25 00:17:41.781045] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:45.991 [2024-07-25 00:17:41.781117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:45.991 [2024-07-25 00:17:41.781131] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:45.991 [2024-07-25 00:17:41.781145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:45.991 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:45.991 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:45.991 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:45.991 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:45.991 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:45.991 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:45.991 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:45.991 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:45.991 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:45.992 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:45.992 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.992 00:17:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:46.250 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:46.250 "name": "Existed_Raid", 00:33:46.250 "uuid": "66882e71-7e62-41fe-ae05-421eda8e48a0", 
00:33:46.250 "strip_size_kb": 0, 00:33:46.251 "state": "configuring", 00:33:46.251 "raid_level": "raid1", 00:33:46.251 "superblock": true, 00:33:46.251 "num_base_bdevs": 2, 00:33:46.251 "num_base_bdevs_discovered": 0, 00:33:46.251 "num_base_bdevs_operational": 2, 00:33:46.251 "base_bdevs_list": [ 00:33:46.251 { 00:33:46.251 "name": "BaseBdev1", 00:33:46.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.251 "is_configured": false, 00:33:46.251 "data_offset": 0, 00:33:46.251 "data_size": 0 00:33:46.251 }, 00:33:46.251 { 00:33:46.251 "name": "BaseBdev2", 00:33:46.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.251 "is_configured": false, 00:33:46.251 "data_offset": 0, 00:33:46.251 "data_size": 0 00:33:46.251 } 00:33:46.251 ] 00:33:46.251 }' 00:33:46.251 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:46.251 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:46.509 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:46.769 [2024-07-25 00:17:42.477109] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:46.769 [2024-07-25 00:17:42.477147] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:33:46.769 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:47.027 [2024-07-25 00:17:42.669155] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:47.027 [2024-07-25 00:17:42.669216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:47.027 [2024-07-25 00:17:42.669229] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:47.027 [2024-07-25 00:17:42.669241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:47.027 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:33:47.027 [2024-07-25 00:17:42.881318] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:47.027 BaseBdev1 00:33:47.287 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:33:47.287 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:33:47.287 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:47.287 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:33:47.287 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:47.287 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:47.287 00:17:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:47.287 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:47.546 [ 00:33:47.546 { 00:33:47.546 "name": "BaseBdev1", 00:33:47.546 "aliases": [ 00:33:47.546 "de97ab5c-2519-48dc-96ff-8ba1bba94730" 00:33:47.546 ], 00:33:47.546 "product_name": "Malloc disk", 00:33:47.546 "block_size": 4128, 00:33:47.546 "num_blocks": 8192, 00:33:47.546 "uuid": "de97ab5c-2519-48dc-96ff-8ba1bba94730", 00:33:47.546 "md_size": 32, 00:33:47.546 "md_interleave": true, 00:33:47.546 "dif_type": 0, 00:33:47.546 "assigned_rate_limits": { 00:33:47.546 "rw_ios_per_sec": 0, 00:33:47.546 "rw_mbytes_per_sec": 0, 00:33:47.546 "r_mbytes_per_sec": 0, 00:33:47.546 "w_mbytes_per_sec": 0 00:33:47.546 }, 00:33:47.546 "claimed": true, 00:33:47.546 "claim_type": "exclusive_write", 00:33:47.546 "zoned": false, 00:33:47.546 "supported_io_types": { 00:33:47.546 "read": true, 00:33:47.546 "write": true, 00:33:47.546 "unmap": true, 00:33:47.546 "flush": true, 00:33:47.546 "reset": true, 00:33:47.546 "nvme_admin": false, 00:33:47.546 "nvme_io": false, 00:33:47.546 "nvme_io_md": false, 00:33:47.546 "write_zeroes": true, 00:33:47.546 "zcopy": true, 00:33:47.546 "get_zone_info": false, 00:33:47.546 "zone_management": false, 00:33:47.546 "zone_append": false, 00:33:47.546 "compare": false, 00:33:47.546 "compare_and_write": false, 00:33:47.546 "abort": true, 00:33:47.546 "seek_hole": false, 00:33:47.546 "seek_data": false, 00:33:47.546 "copy": true, 00:33:47.546 "nvme_iov_md": false 00:33:47.546 }, 00:33:47.546 "memory_domains": [ 00:33:47.546 { 00:33:47.546 "dma_device_id": "system", 00:33:47.546 "dma_device_type": 1 00:33:47.546 }, 00:33:47.546 { 00:33:47.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:47.546 "dma_device_type": 2 00:33:47.546 } 00:33:47.546 ], 00:33:47.546 "driver_specific": {} 00:33:47.546 } 00:33:47.546 ] 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:47.546 00:17:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:47.546 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:47.805 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:47.805 "name": "Existed_Raid", 00:33:47.805 "uuid": "4d1b6769-2677-409f-8c5c-cb968988e80f", 00:33:47.805 "strip_size_kb": 0, 00:33:47.805 "state": "configuring", 00:33:47.805 "raid_level": "raid1", 00:33:47.805 "superblock": true, 00:33:47.805 "num_base_bdevs": 2, 00:33:47.805 "num_base_bdevs_discovered": 1, 00:33:47.805 "num_base_bdevs_operational": 2, 00:33:47.805 "base_bdevs_list": [ 00:33:47.805 { 00:33:47.805 "name": "BaseBdev1", 00:33:47.805 "uuid": "de97ab5c-2519-48dc-96ff-8ba1bba94730", 00:33:47.805 "is_configured": true, 00:33:47.805 "data_offset": 256, 00:33:47.805 "data_size": 7936 00:33:47.805 }, 00:33:47.805 { 00:33:47.805 "name": "BaseBdev2", 00:33:47.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.805 "is_configured": false, 00:33:47.805 "data_offset": 0, 00:33:47.805 "data_size": 0 00:33:47.805 } 00:33:47.805 ] 00:33:47.805 }' 00:33:47.805 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:47.805 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.063 00:17:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:48.323 [2024-07-25 00:17:43.997599] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:48.323 [2024-07-25 00:17:43.997647] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006980 name Existed_Raid, state configuring 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:48.323 [2024-07-25 00:17:44.173656] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:48.323 [2024-07-25 00:17:44.175382] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:48.323 [2024-07-25 00:17:44.175447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:48.323 00:17:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:48.323 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:48.582 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.582 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:48.582 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:48.582 "name": "Existed_Raid", 00:33:48.582 "uuid": "f70f4219-a11e-4c37-9f2e-755e57167850", 00:33:48.582 "strip_size_kb": 0, 00:33:48.582 "state": "configuring", 00:33:48.582 "raid_level": "raid1", 00:33:48.582 "superblock": true, 00:33:48.582 "num_base_bdevs": 2, 00:33:48.582 "num_base_bdevs_discovered": 1, 00:33:48.582 "num_base_bdevs_operational": 2, 00:33:48.582 "base_bdevs_list": [ 00:33:48.582 { 00:33:48.582 "name": "BaseBdev1", 00:33:48.582 "uuid": "de97ab5c-2519-48dc-96ff-8ba1bba94730", 00:33:48.582 "is_configured": true, 00:33:48.582 "data_offset": 256, 00:33:48.582 "data_size": 7936 00:33:48.582 }, 00:33:48.582 { 00:33:48.582 "name": "BaseBdev2", 00:33:48.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:48.582 "is_configured": false, 00:33:48.582 "data_offset": 0, 00:33:48.582 "data_size": 0 00:33:48.582 } 00:33:48.582 ] 00:33:48.582 }' 00:33:48.582 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:48.582 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.841 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:33:49.100 BaseBdev2 00:33:49.101 [2024-07-25 00:17:44.874865] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:49.101 [2024-07-25 00:17:44.875062] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:33:49.101 [2024-07-25 00:17:44.875078] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:49.101 [2024-07-25 00:17:44.875155] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:33:49.101 [2024-07-25 00:17:44.875230] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:33:49.101 [2024-07-25 00:17:44.875249] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007280 00:33:49.101 [2024-07-25 00:17:44.875313] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:49.101 00:17:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:33:49.101 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:33:49.101 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:49.101 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:33:49.101 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:49.101 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:49.101 00:17:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:49.360 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:49.632 [ 00:33:49.632 { 00:33:49.632 "name": "BaseBdev2", 00:33:49.632 "aliases": [ 00:33:49.632 "3f1ea005-20de-42e9-8d2a-66cf4f38ff66" 00:33:49.632 ], 00:33:49.632 "product_name": "Malloc disk", 00:33:49.632 "block_size": 4128, 00:33:49.632 "num_blocks": 8192, 00:33:49.632 "uuid": "3f1ea005-20de-42e9-8d2a-66cf4f38ff66", 00:33:49.632 "md_size": 32, 00:33:49.632 "md_interleave": true, 00:33:49.632 "dif_type": 0, 00:33:49.632 "assigned_rate_limits": { 00:33:49.632 "rw_ios_per_sec": 0, 00:33:49.632 "rw_mbytes_per_sec": 0, 00:33:49.632 "r_mbytes_per_sec": 0, 00:33:49.632 "w_mbytes_per_sec": 0 00:33:49.632 }, 00:33:49.632 "claimed": true, 00:33:49.632 "claim_type": "exclusive_write", 00:33:49.632 "zoned": false, 00:33:49.632 "supported_io_types": { 00:33:49.632 "read": true, 00:33:49.632 "write": true, 00:33:49.632 "unmap": true, 00:33:49.632 "flush": true, 00:33:49.632 "reset": true, 00:33:49.632 "nvme_admin": false, 00:33:49.632 "nvme_io": false, 00:33:49.632 "nvme_io_md": false, 00:33:49.632 "write_zeroes": true, 00:33:49.632 "zcopy": true, 00:33:49.632 "get_zone_info": false, 00:33:49.632 "zone_management": false, 00:33:49.632 "zone_append": false, 00:33:49.632 "compare": false, 00:33:49.632 "compare_and_write": false, 00:33:49.632 "abort": true, 00:33:49.632 "seek_hole": false, 00:33:49.632 "seek_data": false, 00:33:49.632 "copy": true, 00:33:49.632 "nvme_iov_md": false 00:33:49.632 }, 00:33:49.632 "memory_domains": [ 00:33:49.632 { 00:33:49.632 "dma_device_id": "system", 00:33:49.632 "dma_device_type": 1 00:33:49.632 }, 00:33:49.632 { 00:33:49.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:49.632 "dma_device_type": 2 00:33:49.632 } 00:33:49.632 ], 00:33:49.632 "driver_specific": {} 00:33:49.632 } 00:33:49.632 ] 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:49.632 "name": "Existed_Raid", 00:33:49.632 "uuid": "f70f4219-a11e-4c37-9f2e-755e57167850", 00:33:49.632 "strip_size_kb": 0, 00:33:49.632 "state": "online", 00:33:49.632 "raid_level": "raid1", 00:33:49.632 "superblock": true, 00:33:49.632 "num_base_bdevs": 2, 00:33:49.632 "num_base_bdevs_discovered": 2, 00:33:49.632 "num_base_bdevs_operational": 2, 00:33:49.632 "base_bdevs_list": [ 00:33:49.632 { 00:33:49.632 "name": "BaseBdev1", 00:33:49.632 "uuid": "de97ab5c-2519-48dc-96ff-8ba1bba94730", 00:33:49.632 "is_configured": true, 00:33:49.632 "data_offset": 256, 00:33:49.632 "data_size": 7936 00:33:49.632 }, 00:33:49.632 { 00:33:49.632 "name": "BaseBdev2", 00:33:49.632 "uuid": "3f1ea005-20de-42e9-8d2a-66cf4f38ff66", 00:33:49.632 "is_configured": true, 00:33:49.632 "data_offset": 256, 00:33:49.632 "data_size": 7936 00:33:49.632 } 00:33:49.632 ] 00:33:49.632 }' 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:49.632 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:49.905 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:33:49.905 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:49.905 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:49.905 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:49.905 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:49.905 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:49.905 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:49.905 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:50.164 [2024-07-25 00:17:45.963515] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:50.164 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:50.164 "name": "Existed_Raid", 00:33:50.164 "aliases": [ 00:33:50.164 "f70f4219-a11e-4c37-9f2e-755e57167850" 00:33:50.164 ], 00:33:50.164 "product_name": "Raid Volume", 00:33:50.164 "block_size": 4128, 00:33:50.164 "num_blocks": 7936, 00:33:50.164 "uuid": "f70f4219-a11e-4c37-9f2e-755e57167850", 00:33:50.164 "md_size": 32, 00:33:50.164 "md_interleave": true, 00:33:50.164 "dif_type": 0, 00:33:50.164 "assigned_rate_limits": { 00:33:50.164 "rw_ios_per_sec": 0, 00:33:50.164 "rw_mbytes_per_sec": 0, 00:33:50.164 "r_mbytes_per_sec": 0, 00:33:50.164 "w_mbytes_per_sec": 0 00:33:50.164 }, 00:33:50.164 "claimed": false, 00:33:50.164 "zoned": false, 00:33:50.164 "supported_io_types": { 00:33:50.164 "read": true, 00:33:50.164 "write": true, 00:33:50.164 "unmap": false, 00:33:50.164 "flush": false, 00:33:50.164 "reset": true, 00:33:50.164 "nvme_admin": false, 00:33:50.164 "nvme_io": false, 00:33:50.164 "nvme_io_md": false, 00:33:50.164 "write_zeroes": true, 00:33:50.164 "zcopy": false, 00:33:50.164 "get_zone_info": false, 00:33:50.164 "zone_management": false, 00:33:50.164 "zone_append": false, 00:33:50.164 "compare": false, 00:33:50.164 "compare_and_write": false, 00:33:50.164 "abort": false, 00:33:50.164 "seek_hole": false, 00:33:50.164 "seek_data": false, 00:33:50.164 "copy": false, 00:33:50.164 "nvme_iov_md": false 00:33:50.164 }, 00:33:50.164 "memory_domains": [ 00:33:50.164 { 00:33:50.164 "dma_device_id": "system", 00:33:50.164 "dma_device_type": 1 00:33:50.164 }, 00:33:50.164 { 00:33:50.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:50.164 "dma_device_type": 2 00:33:50.164 }, 00:33:50.164 { 00:33:50.164 "dma_device_id": "system", 00:33:50.164 "dma_device_type": 1 00:33:50.164 }, 00:33:50.164 { 00:33:50.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:50.164 "dma_device_type": 2 00:33:50.164 } 00:33:50.164 ], 00:33:50.164 "driver_specific": { 00:33:50.164 "raid": { 00:33:50.164 "uuid": "f70f4219-a11e-4c37-9f2e-755e57167850", 00:33:50.164 "strip_size_kb": 0, 00:33:50.164 "state": "online", 00:33:50.164 "raid_level": "raid1", 00:33:50.164 "superblock": true, 00:33:50.164 "num_base_bdevs": 2, 00:33:50.164 "num_base_bdevs_discovered": 2, 00:33:50.164 "num_base_bdevs_operational": 2, 00:33:50.164 "base_bdevs_list": [ 00:33:50.164 { 00:33:50.164 "name": "BaseBdev1", 00:33:50.164 "uuid": "de97ab5c-2519-48dc-96ff-8ba1bba94730", 00:33:50.164 "is_configured": true, 00:33:50.164 "data_offset": 256, 00:33:50.164 "data_size": 7936 00:33:50.164 }, 00:33:50.164 { 00:33:50.164 "name": "BaseBdev2", 00:33:50.164 "uuid": "3f1ea005-20de-42e9-8d2a-66cf4f38ff66", 00:33:50.164 "is_configured": true, 00:33:50.164 "data_offset": 256, 00:33:50.164 "data_size": 7936 00:33:50.164 } 00:33:50.164 ] 00:33:50.164 } 00:33:50.164 } 00:33:50.164 }' 00:33:50.164 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:50.164 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:33:50.164 BaseBdev2' 00:33:50.164 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:50.164 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:50.164 00:17:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:50.424 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:50.424 "name": "BaseBdev1", 00:33:50.424 "aliases": [ 00:33:50.424 "de97ab5c-2519-48dc-96ff-8ba1bba94730" 00:33:50.424 ], 00:33:50.424 "product_name": "Malloc disk", 00:33:50.424 "block_size": 4128, 00:33:50.424 "num_blocks": 8192, 00:33:50.424 "uuid": "de97ab5c-2519-48dc-96ff-8ba1bba94730", 00:33:50.424 "md_size": 32, 00:33:50.424 "md_interleave": true, 00:33:50.424 "dif_type": 0, 00:33:50.424 "assigned_rate_limits": { 00:33:50.424 "rw_ios_per_sec": 0, 00:33:50.424 "rw_mbytes_per_sec": 0, 00:33:50.424 "r_mbytes_per_sec": 0, 00:33:50.424 "w_mbytes_per_sec": 0 00:33:50.424 }, 00:33:50.424 "claimed": true, 00:33:50.424 "claim_type": "exclusive_write", 00:33:50.424 "zoned": false, 00:33:50.424 "supported_io_types": { 00:33:50.424 "read": true, 00:33:50.424 "write": true, 00:33:50.424 "unmap": true, 00:33:50.424 "flush": true, 00:33:50.424 "reset": true, 00:33:50.424 "nvme_admin": false, 00:33:50.424 "nvme_io": false, 00:33:50.424 "nvme_io_md": false, 00:33:50.424 "write_zeroes": true, 00:33:50.424 "zcopy": true, 00:33:50.424 "get_zone_info": false, 00:33:50.424 "zone_management": false, 00:33:50.424 "zone_append": false, 00:33:50.424 "compare": false, 00:33:50.424 "compare_and_write": false, 00:33:50.424 "abort": true, 00:33:50.424 "seek_hole": false, 00:33:50.424 "seek_data": false, 00:33:50.424 "copy": true, 00:33:50.424 "nvme_iov_md": false 00:33:50.424 }, 00:33:50.424 "memory_domains": [ 00:33:50.424 { 00:33:50.424 "dma_device_id": "system", 00:33:50.424 "dma_device_type": 1 00:33:50.424 }, 00:33:50.424 { 00:33:50.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:50.424 "dma_device_type": 2 00:33:50.424 } 00:33:50.424 ], 00:33:50.424 "driver_specific": {} 00:33:50.424 }' 00:33:50.424 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:50.424 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:50.424 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:50.424 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:50.424 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:50.424 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:50.683 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:50.683 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:50.683 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:50.683 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:50.683 
00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:50.683 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:50.683 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:50.683 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:50.683 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:50.683 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:50.683 "name": "BaseBdev2", 00:33:50.683 "aliases": [ 00:33:50.683 "3f1ea005-20de-42e9-8d2a-66cf4f38ff66" 00:33:50.683 ], 00:33:50.683 "product_name": "Malloc disk", 00:33:50.683 "block_size": 4128, 00:33:50.683 "num_blocks": 8192, 00:33:50.683 "uuid": "3f1ea005-20de-42e9-8d2a-66cf4f38ff66", 00:33:50.683 "md_size": 32, 00:33:50.683 "md_interleave": true, 00:33:50.683 "dif_type": 0, 00:33:50.683 "assigned_rate_limits": { 00:33:50.683 "rw_ios_per_sec": 0, 00:33:50.683 "rw_mbytes_per_sec": 0, 00:33:50.683 "r_mbytes_per_sec": 0, 00:33:50.683 "w_mbytes_per_sec": 0 00:33:50.683 }, 00:33:50.683 "claimed": true, 00:33:50.683 "claim_type": "exclusive_write", 00:33:50.683 "zoned": false, 00:33:50.683 "supported_io_types": { 00:33:50.683 "read": true, 00:33:50.683 "write": true, 00:33:50.683 "unmap": true, 00:33:50.683 "flush": true, 00:33:50.683 "reset": true, 00:33:50.683 "nvme_admin": false, 00:33:50.683 "nvme_io": false, 00:33:50.683 "nvme_io_md": false, 00:33:50.683 "write_zeroes": true, 00:33:50.683 "zcopy": true, 00:33:50.683 "get_zone_info": false, 00:33:50.683 "zone_management": false, 00:33:50.683 "zone_append": false, 00:33:50.683 "compare": false, 00:33:50.683 "compare_and_write": false, 00:33:50.683 "abort": true, 00:33:50.683 "seek_hole": false, 00:33:50.683 "seek_data": false, 00:33:50.683 "copy": true, 00:33:50.683 "nvme_iov_md": false 00:33:50.683 }, 00:33:50.683 "memory_domains": [ 00:33:50.683 { 00:33:50.684 "dma_device_id": "system", 00:33:50.684 "dma_device_type": 1 00:33:50.684 }, 00:33:50.684 { 00:33:50.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:50.684 "dma_device_type": 2 00:33:50.684 } 00:33:50.684 ], 00:33:50.684 "driver_specific": {} 00:33:50.684 }' 00:33:50.684 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:50.684 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:50.684 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:50.684 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:50.684 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:50.684 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:50.684 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:50.943 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:50.943 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:50.943 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:50.943 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:50.943 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:50.943 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:51.202 [2024-07-25 00:17:46.839549] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:51.202 00:17:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.462 00:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:51.462 "name": "Existed_Raid", 00:33:51.462 "uuid": "f70f4219-a11e-4c37-9f2e-755e57167850", 00:33:51.462 "strip_size_kb": 0, 00:33:51.462 "state": "online", 00:33:51.462 "raid_level": "raid1", 00:33:51.462 "superblock": true, 00:33:51.462 "num_base_bdevs": 2, 00:33:51.462 "num_base_bdevs_discovered": 1, 00:33:51.462 "num_base_bdevs_operational": 1, 00:33:51.462 "base_bdevs_list": [ 00:33:51.462 { 00:33:51.462 "name": null, 
00:33:51.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.462 "is_configured": false, 00:33:51.462 "data_offset": 256, 00:33:51.462 "data_size": 7936 00:33:51.462 }, 00:33:51.462 { 00:33:51.462 "name": "BaseBdev2", 00:33:51.462 "uuid": "3f1ea005-20de-42e9-8d2a-66cf4f38ff66", 00:33:51.462 "is_configured": true, 00:33:51.462 "data_offset": 256, 00:33:51.462 "data_size": 7936 00:33:51.462 } 00:33:51.462 ] 00:33:51.462 }' 00:33:51.462 00:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:51.462 00:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:51.720 00:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:33:51.720 00:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:51.720 00:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.720 00:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:51.979 00:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:51.979 00:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:51.979 00:17:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:52.238 [2024-07-25 00:17:47.925308] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:52.238 [2024-07-25 00:17:47.925410] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:52.238 [2024-07-25 00:17:47.987045] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:52.238 [2024-07-25 00:17:47.987091] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:52.238 [2024-07-25 00:17:47.987106] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name Existed_Raid, state offline 00:33:52.238 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:52.238 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:52.238 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:33:52.238 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 113276 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@950 -- # '[' -z 113276 ']' 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 113276 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113276 00:33:52.497 killing process with pid 113276 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113276' 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 113276 00:33:52.497 [2024-07-25 00:17:48.220054] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:52.497 [2024-07-25 00:17:48.220164] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:52.497 00:17:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 113276 00:33:53.434 ************************************ 00:33:53.434 END TEST raid_state_function_test_sb_md_interleaved 00:33:53.434 ************************************ 00:33:53.434 00:17:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:33:53.434 00:33:53.434 real 0m8.541s 00:33:53.434 user 0m14.052s 00:33:53.434 sys 0m1.356s 00:33:53.434 00:17:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:53.434 00:17:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:53.434 00:17:49 bdev_raid -- bdev/bdev_raid.sh@993 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:33:53.434 00:17:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:53.434 00:17:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:53.434 00:17:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:53.434 ************************************ 00:33:53.434 START TEST raid_superblock_test_md_interleaved 00:33:53.434 ************************************ 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:33:53.434 00:17:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@414 -- # local strip_size 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@427 -- # raid_pid=113594 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@428 -- # waitforlisten 113594 /var/tmp/spdk-raid.sock 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 113594 ']' 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:53.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:53.434 00:17:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:53.434 [2024-07-25 00:17:49.252703] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
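The superblock test below builds its base bdevs differently: each interleaved-metadata malloc disk is wrapped in a passthru bdev pinned to a fixed UUID (the values that later appear in base_bdevs_list), and the array is assembled over the passthru bdevs. A minimal sketch, reusing the $rpc alias assumed earlier:

    # Malloc disks wrapped in passthru bdevs with pinned UUIDs (names and
    # UUIDs taken verbatim from the test's own RPC calls below).
    $rpc bdev_malloc_create 32 4096 -m 32 -i -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_malloc_create 32 4096 -m 32 -i -b malloc2
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # Assemble the superblock-backed raid1 volume over the passthru bdevs
    # and read back its state.
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'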
00:33:53.434 [2024-07-25 00:17:49.253134] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113594 ] 00:33:53.694 [2024-07-25 00:17:49.425124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.952 [2024-07-25 00:17:49.581345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.952 [2024-07-25 00:17:49.727413] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:54.520 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:54.520 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:33:54.520 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:33:54.520 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:33:54.520 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:33:54.520 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:33:54.520 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:54.520 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:54.520 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:33:54.520 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:54.520 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:33:54.779 malloc1 00:33:54.779 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:54.779 [2024-07-25 00:17:50.590651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:54.779 [2024-07-25 00:17:50.590895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:54.779 [2024-07-25 00:17:50.591045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:33:54.779 [2024-07-25 00:17:50.591157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:54.779 [2024-07-25 00:17:50.593238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:54.779 [2024-07-25 00:17:50.593396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:54.779 pt1 00:33:54.779 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:33:54.779 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:33:54.779 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:33:54.779 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:33:54.779 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:54.779 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:54.779 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:33:54.779 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:54.779 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:33:55.038 malloc2 00:33:55.038 00:17:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:55.297 [2024-07-25 00:17:50.989792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:55.297 [2024-07-25 00:17:50.989869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:55.297 [2024-07-25 00:17:50.989896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:33:55.297 [2024-07-25 00:17:50.989908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:55.297 [2024-07-25 00:17:50.991650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:55.297 [2024-07-25 00:17:50.991687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:55.297 pt2 00:33:55.297 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:33:55.297 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:33:55.297 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:33:55.555 [2024-07-25 00:17:51.177939] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:55.555 [2024-07-25 00:17:51.179871] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:55.555 [2024-07-25 00:17:51.180240] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:33:55.555 [2024-07-25 00:17:51.180349] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:55.555 [2024-07-25 00:17:51.180548] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:33:55.555 [2024-07-25 00:17:51.180757] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:33:55.555 [2024-07-25 00:17:51.180910] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:33:55.555 [2024-07-25 00:17:51.181116] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:55.555 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:55.555 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:33:55.555 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:55.555 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:55.555 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:55.555 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:55.555 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:55.555 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:55.556 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:55.556 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:55.556 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.556 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.814 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:55.814 "name": "raid_bdev1", 00:33:55.814 "uuid": "2b81509c-d061-49bf-9b6c-33ed3112c788", 00:33:55.814 "strip_size_kb": 0, 00:33:55.814 "state": "online", 00:33:55.814 "raid_level": "raid1", 00:33:55.814 "superblock": true, 00:33:55.814 "num_base_bdevs": 2, 00:33:55.814 "num_base_bdevs_discovered": 2, 00:33:55.814 "num_base_bdevs_operational": 2, 00:33:55.814 "base_bdevs_list": [ 00:33:55.814 { 00:33:55.814 "name": "pt1", 00:33:55.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:55.814 "is_configured": true, 00:33:55.814 "data_offset": 256, 00:33:55.814 "data_size": 7936 00:33:55.814 }, 00:33:55.814 { 00:33:55.814 "name": "pt2", 00:33:55.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:55.814 "is_configured": true, 00:33:55.814 "data_offset": 256, 00:33:55.814 "data_size": 7936 00:33:55.814 } 00:33:55.814 ] 00:33:55.814 }' 00:33:55.814 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:55.814 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:56.072 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:33:56.072 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:56.072 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:56.072 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:56.072 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:56.072 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:56.072 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:56.072 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:56.072 [2024-07-25 00:17:51.934348] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:56.331 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:56.331 "name": "raid_bdev1", 00:33:56.331 "aliases": [ 00:33:56.331 "2b81509c-d061-49bf-9b6c-33ed3112c788" 00:33:56.331 ], 00:33:56.331 "product_name": "Raid Volume", 00:33:56.331 "block_size": 4128, 00:33:56.331 "num_blocks": 7936, 00:33:56.331 "uuid": "2b81509c-d061-49bf-9b6c-33ed3112c788", 00:33:56.331 "md_size": 32, 00:33:56.331 "md_interleave": true, 00:33:56.331 "dif_type": 0, 00:33:56.331 "assigned_rate_limits": { 00:33:56.331 "rw_ios_per_sec": 0, 00:33:56.331 "rw_mbytes_per_sec": 0, 00:33:56.331 "r_mbytes_per_sec": 0, 00:33:56.331 "w_mbytes_per_sec": 0 00:33:56.331 }, 00:33:56.331 "claimed": false, 00:33:56.331 "zoned": false, 00:33:56.331 "supported_io_types": { 00:33:56.331 "read": true, 00:33:56.331 "write": true, 00:33:56.331 "unmap": false, 00:33:56.331 "flush": false, 00:33:56.331 "reset": true, 00:33:56.331 "nvme_admin": false, 00:33:56.331 "nvme_io": false, 00:33:56.331 "nvme_io_md": false, 00:33:56.331 "write_zeroes": true, 00:33:56.331 "zcopy": false, 00:33:56.331 "get_zone_info": false, 00:33:56.331 "zone_management": false, 00:33:56.331 "zone_append": false, 00:33:56.331 "compare": false, 00:33:56.331 "compare_and_write": false, 00:33:56.331 "abort": false, 00:33:56.331 "seek_hole": false, 00:33:56.331 "seek_data": false, 00:33:56.331 "copy": false, 00:33:56.331 "nvme_iov_md": false 00:33:56.331 }, 00:33:56.331 "memory_domains": [ 00:33:56.331 { 00:33:56.331 "dma_device_id": "system", 00:33:56.331 "dma_device_type": 1 00:33:56.331 }, 00:33:56.331 { 00:33:56.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:56.331 "dma_device_type": 2 00:33:56.331 }, 00:33:56.331 { 00:33:56.331 "dma_device_id": "system", 00:33:56.331 "dma_device_type": 1 00:33:56.331 }, 00:33:56.331 { 00:33:56.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:56.331 "dma_device_type": 2 00:33:56.331 } 00:33:56.331 ], 00:33:56.331 "driver_specific": { 00:33:56.331 "raid": { 00:33:56.331 "uuid": "2b81509c-d061-49bf-9b6c-33ed3112c788", 00:33:56.331 "strip_size_kb": 0, 00:33:56.331 "state": "online", 00:33:56.331 "raid_level": "raid1", 00:33:56.331 "superblock": true, 00:33:56.331 "num_base_bdevs": 2, 00:33:56.331 "num_base_bdevs_discovered": 2, 00:33:56.331 "num_base_bdevs_operational": 2, 00:33:56.331 "base_bdevs_list": [ 00:33:56.331 { 00:33:56.331 "name": "pt1", 00:33:56.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:56.331 "is_configured": true, 00:33:56.331 "data_offset": 256, 00:33:56.331 "data_size": 7936 00:33:56.331 }, 00:33:56.331 { 00:33:56.331 "name": "pt2", 00:33:56.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:56.331 "is_configured": true, 00:33:56.331 "data_offset": 256, 00:33:56.331 "data_size": 7936 00:33:56.331 } 00:33:56.331 ] 00:33:56.331 } 00:33:56.331 } 00:33:56.331 }' 00:33:56.331 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:56.331 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:56.331 pt2' 00:33:56.331 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:56.331 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:56.331 00:17:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:56.589 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:56.589 "name": "pt1", 00:33:56.589 "aliases": [ 00:33:56.589 "00000000-0000-0000-0000-000000000001" 00:33:56.589 ], 00:33:56.589 "product_name": "passthru", 00:33:56.589 "block_size": 4128, 00:33:56.589 "num_blocks": 8192, 00:33:56.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:56.589 "md_size": 32, 00:33:56.589 "md_interleave": true, 00:33:56.589 "dif_type": 0, 00:33:56.589 "assigned_rate_limits": { 00:33:56.589 "rw_ios_per_sec": 0, 00:33:56.589 "rw_mbytes_per_sec": 0, 00:33:56.589 "r_mbytes_per_sec": 0, 00:33:56.589 "w_mbytes_per_sec": 0 00:33:56.589 }, 00:33:56.589 "claimed": true, 00:33:56.589 "claim_type": "exclusive_write", 00:33:56.589 "zoned": false, 00:33:56.589 "supported_io_types": { 00:33:56.589 "read": true, 00:33:56.589 "write": true, 00:33:56.589 "unmap": true, 00:33:56.589 "flush": true, 00:33:56.589 "reset": true, 00:33:56.590 "nvme_admin": false, 00:33:56.590 "nvme_io": false, 00:33:56.590 "nvme_io_md": false, 00:33:56.590 "write_zeroes": true, 00:33:56.590 "zcopy": true, 00:33:56.590 "get_zone_info": false, 00:33:56.590 "zone_management": false, 00:33:56.590 "zone_append": false, 00:33:56.590 "compare": false, 00:33:56.590 "compare_and_write": false, 00:33:56.590 "abort": true, 00:33:56.590 "seek_hole": false, 00:33:56.590 "seek_data": false, 00:33:56.590 "copy": true, 00:33:56.590 "nvme_iov_md": false 00:33:56.590 }, 00:33:56.590 "memory_domains": [ 00:33:56.590 { 00:33:56.590 "dma_device_id": "system", 00:33:56.590 "dma_device_type": 1 00:33:56.590 }, 00:33:56.590 { 00:33:56.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:56.590 "dma_device_type": 2 00:33:56.590 } 00:33:56.590 ], 00:33:56.590 "driver_specific": { 00:33:56.590 "passthru": { 00:33:56.590 "name": "pt1", 00:33:56.590 "base_bdev_name": "malloc1" 00:33:56.590 } 00:33:56.590 } 00:33:56.590 }' 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:56.590 00:17:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:56.590 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:56.849 "name": "pt2", 00:33:56.849 "aliases": [ 00:33:56.849 "00000000-0000-0000-0000-000000000002" 00:33:56.849 ], 00:33:56.849 "product_name": "passthru", 00:33:56.849 "block_size": 4128, 00:33:56.849 "num_blocks": 8192, 00:33:56.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:56.849 "md_size": 32, 00:33:56.849 "md_interleave": true, 00:33:56.849 "dif_type": 0, 00:33:56.849 "assigned_rate_limits": { 00:33:56.849 "rw_ios_per_sec": 0, 00:33:56.849 "rw_mbytes_per_sec": 0, 00:33:56.849 "r_mbytes_per_sec": 0, 00:33:56.849 "w_mbytes_per_sec": 0 00:33:56.849 }, 00:33:56.849 "claimed": true, 00:33:56.849 "claim_type": "exclusive_write", 00:33:56.849 "zoned": false, 00:33:56.849 "supported_io_types": { 00:33:56.849 "read": true, 00:33:56.849 "write": true, 00:33:56.849 "unmap": true, 00:33:56.849 "flush": true, 00:33:56.849 "reset": true, 00:33:56.849 "nvme_admin": false, 00:33:56.849 "nvme_io": false, 00:33:56.849 "nvme_io_md": false, 00:33:56.849 "write_zeroes": true, 00:33:56.849 "zcopy": true, 00:33:56.849 "get_zone_info": false, 00:33:56.849 "zone_management": false, 00:33:56.849 "zone_append": false, 00:33:56.849 "compare": false, 00:33:56.849 "compare_and_write": false, 00:33:56.849 "abort": true, 00:33:56.849 "seek_hole": false, 00:33:56.849 "seek_data": false, 00:33:56.849 "copy": true, 00:33:56.849 "nvme_iov_md": false 00:33:56.849 }, 00:33:56.849 "memory_domains": [ 00:33:56.849 { 00:33:56.849 "dma_device_id": "system", 00:33:56.849 "dma_device_type": 1 00:33:56.849 }, 00:33:56.849 { 00:33:56.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:56.849 "dma_device_type": 2 00:33:56.849 } 00:33:56.849 ], 00:33:56.849 "driver_specific": { 00:33:56.849 "passthru": { 00:33:56.849 "name": "pt2", 00:33:56.849 "base_bdev_name": "malloc2" 00:33:56.849 } 00:33:56.849 } 00:33:56.849 }' 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:33:56.849 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:57.108 [2024-07-25 00:17:52.806637] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:57.108 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=2b81509c-d061-49bf-9b6c-33ed3112c788 00:33:57.108 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' -z 2b81509c-d061-49bf-9b6c-33ed3112c788 ']' 00:33:57.108 00:17:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:57.367 [2024-07-25 00:17:53.074422] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:57.367 [2024-07-25 00:17:53.074449] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:57.367 [2024-07-25 00:17:53.074519] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:57.367 [2024-07-25 00:17:53.074579] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:57.367 [2024-07-25 00:17:53.074599] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:33:57.367 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:33:57.367 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.625 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:33:57.625 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:33:57.625 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:33:57.625 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:57.884 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:33:57.884 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:58.143 00:17:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:58.402 [2024-07-25 00:17:54.222706] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:58.402 [2024-07-25 00:17:54.224722] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:58.402 [2024-07-25 00:17:54.224808] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:58.402 [2024-07-25 00:17:54.225054] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:58.402 [2024-07-25 00:17:54.225091] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:58.402 [2024-07-25 00:17:54.225107] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state configuring 00:33:58.402 request: 00:33:58.402 { 00:33:58.402 "name": "raid_bdev1", 00:33:58.402 "raid_level": "raid1", 00:33:58.402 "base_bdevs": [ 00:33:58.402 "malloc1", 00:33:58.402 "malloc2" 00:33:58.402 ], 00:33:58.402 "superblock": false, 00:33:58.402 "method": "bdev_raid_create", 00:33:58.402 "req_id": 1 00:33:58.402 } 00:33:58.402 Got JSON-RPC error response 00:33:58.402 response: 00:33:58.402 { 00:33:58.402 "code": -17, 00:33:58.402 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:58.402 } 00:33:58.402 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:33:58.402 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:58.402 00:17:54 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:58.402 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:58.402 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.402 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:33:58.661 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:33:58.661 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:33:58.661 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:58.920 [2024-07-25 00:17:54.614755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:58.920 [2024-07-25 00:17:54.614876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:58.920 [2024-07-25 00:17:54.614900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:33:58.920 [2024-07-25 00:17:54.614915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:58.920 [2024-07-25 00:17:54.616970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:58.920 [2024-07-25 00:17:54.617014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:58.920 [2024-07-25 00:17:54.617074] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:58.920 [2024-07-25 00:17:54.617142] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:58.920 pt1 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.920 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.178 00:17:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:59.179 "name": "raid_bdev1", 00:33:59.179 "uuid": "2b81509c-d061-49bf-9b6c-33ed3112c788", 00:33:59.179 "strip_size_kb": 0, 00:33:59.179 "state": "configuring", 00:33:59.179 "raid_level": "raid1", 00:33:59.179 "superblock": true, 00:33:59.179 "num_base_bdevs": 2, 00:33:59.179 "num_base_bdevs_discovered": 1, 00:33:59.179 "num_base_bdevs_operational": 2, 00:33:59.179 "base_bdevs_list": [ 00:33:59.179 { 00:33:59.179 "name": "pt1", 00:33:59.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:59.179 "is_configured": true, 00:33:59.179 "data_offset": 256, 00:33:59.179 "data_size": 7936 00:33:59.179 }, 00:33:59.179 { 00:33:59.179 "name": null, 00:33:59.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:59.179 "is_configured": false, 00:33:59.179 "data_offset": 256, 00:33:59.179 "data_size": 7936 00:33:59.179 } 00:33:59.179 ] 00:33:59.179 }' 00:33:59.179 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:59.179 00:17:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:59.438 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:33:59.438 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:33:59.438 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:33:59.438 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:59.696 [2024-07-25 00:17:55.362951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:59.696 [2024-07-25 00:17:55.363198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:59.696 [2024-07-25 00:17:55.363333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:33:59.696 [2024-07-25 00:17:55.363474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:59.696 [2024-07-25 00:17:55.363730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:59.696 [2024-07-25 00:17:55.363882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:59.696 [2024-07-25 00:17:55.364034] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:59.696 [2024-07-25 00:17:55.364154] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:59.696 [2024-07-25 00:17:55.364367] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:33:59.696 [2024-07-25 00:17:55.364393] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:59.696 [2024-07-25 00:17:55.364502] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:33:59.696 [2024-07-25 00:17:55.364585] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:33:59.696 [2024-07-25 00:17:55.364598] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:33:59.696 [2024-07-25 00:17:55.364667] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:59.696 pt2 00:33:59.696 
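# --- annotation (not part of the captured log) ----------------------------------
# Hedged sketch of the RPC sequence this superblock test drives, assembled only
# from commands visible in the log above; the socket path, sizes, UUIDs, and jq
# filter are copied verbatim from the run. The trailing -s writes a raid
# superblock, which is what let raid_bdev1 reassemble just now after pt1/pt2
# were deleted and re-created.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# 32 MB malloc base bdevs (8192 blocks of 4096 B, matching num_blocks in the
# dumps above) with interleaved 32-byte metadata (-m 32 -i)
$rpc -s $sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1
$rpc -s $sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2
# passthru wrappers pin a stable UUID on each base bdev for the superblock
$rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
# the state checks in the log use this exact filter
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
# ---------------------------------------------------------------------------------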
00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:59.696 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.955 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:59.955 "name": "raid_bdev1", 00:33:59.955 "uuid": "2b81509c-d061-49bf-9b6c-33ed3112c788", 00:33:59.955 "strip_size_kb": 0, 00:33:59.955 "state": "online", 00:33:59.955 "raid_level": "raid1", 00:33:59.955 "superblock": true, 00:33:59.955 "num_base_bdevs": 2, 00:33:59.955 "num_base_bdevs_discovered": 2, 00:33:59.955 "num_base_bdevs_operational": 2, 00:33:59.955 "base_bdevs_list": [ 00:33:59.955 { 00:33:59.955 "name": "pt1", 00:33:59.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:59.955 "is_configured": true, 00:33:59.955 "data_offset": 256, 00:33:59.955 "data_size": 7936 00:33:59.955 }, 00:33:59.955 { 00:33:59.955 "name": "pt2", 00:33:59.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:59.955 "is_configured": true, 00:33:59.955 "data_offset": 256, 00:33:59.955 "data_size": 7936 00:33:59.955 } 00:33:59.955 ] 00:33:59.956 }' 00:33:59.956 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:59.956 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:00.215 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:34:00.215 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:34:00.215 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:00.215 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:00.215 00:17:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:00.215 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:34:00.215 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:00.215 00:17:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:00.473 [2024-07-25 00:17:56.103321] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:00.473 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:00.473 "name": "raid_bdev1", 00:34:00.473 "aliases": [ 00:34:00.473 "2b81509c-d061-49bf-9b6c-33ed3112c788" 00:34:00.473 ], 00:34:00.473 "product_name": "Raid Volume", 00:34:00.473 "block_size": 4128, 00:34:00.473 "num_blocks": 7936, 00:34:00.473 "uuid": "2b81509c-d061-49bf-9b6c-33ed3112c788", 00:34:00.473 "md_size": 32, 00:34:00.473 "md_interleave": true, 00:34:00.473 "dif_type": 0, 00:34:00.473 "assigned_rate_limits": { 00:34:00.473 "rw_ios_per_sec": 0, 00:34:00.473 "rw_mbytes_per_sec": 0, 00:34:00.473 "r_mbytes_per_sec": 0, 00:34:00.473 "w_mbytes_per_sec": 0 00:34:00.473 }, 00:34:00.474 "claimed": false, 00:34:00.474 "zoned": false, 00:34:00.474 "supported_io_types": { 00:34:00.474 "read": true, 00:34:00.474 "write": true, 00:34:00.474 "unmap": false, 00:34:00.474 "flush": false, 00:34:00.474 "reset": true, 00:34:00.474 "nvme_admin": false, 00:34:00.474 "nvme_io": false, 00:34:00.474 "nvme_io_md": false, 00:34:00.474 "write_zeroes": true, 00:34:00.474 "zcopy": false, 00:34:00.474 "get_zone_info": false, 00:34:00.474 "zone_management": false, 00:34:00.474 "zone_append": false, 00:34:00.474 "compare": false, 00:34:00.474 "compare_and_write": false, 00:34:00.474 "abort": false, 00:34:00.474 "seek_hole": false, 00:34:00.474 "seek_data": false, 00:34:00.474 "copy": false, 00:34:00.474 "nvme_iov_md": false 00:34:00.474 }, 00:34:00.474 "memory_domains": [ 00:34:00.474 { 00:34:00.474 "dma_device_id": "system", 00:34:00.474 "dma_device_type": 1 00:34:00.474 }, 00:34:00.474 { 00:34:00.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:00.474 "dma_device_type": 2 00:34:00.474 }, 00:34:00.474 { 00:34:00.474 "dma_device_id": "system", 00:34:00.474 "dma_device_type": 1 00:34:00.474 }, 00:34:00.474 { 00:34:00.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:00.474 "dma_device_type": 2 00:34:00.474 } 00:34:00.474 ], 00:34:00.474 "driver_specific": { 00:34:00.474 "raid": { 00:34:00.474 "uuid": "2b81509c-d061-49bf-9b6c-33ed3112c788", 00:34:00.474 "strip_size_kb": 0, 00:34:00.474 "state": "online", 00:34:00.474 "raid_level": "raid1", 00:34:00.474 "superblock": true, 00:34:00.474 "num_base_bdevs": 2, 00:34:00.474 "num_base_bdevs_discovered": 2, 00:34:00.474 "num_base_bdevs_operational": 2, 00:34:00.474 "base_bdevs_list": [ 00:34:00.474 { 00:34:00.474 "name": "pt1", 00:34:00.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:00.474 "is_configured": true, 00:34:00.474 "data_offset": 256, 00:34:00.474 "data_size": 7936 00:34:00.474 }, 00:34:00.474 { 00:34:00.474 "name": "pt2", 00:34:00.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:00.474 "is_configured": true, 00:34:00.474 "data_offset": 256, 00:34:00.474 "data_size": 7936 00:34:00.474 } 00:34:00.474 ] 00:34:00.474 } 00:34:00.474 } 00:34:00.474 }' 00:34:00.474 00:17:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:00.474 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:34:00.474 pt2' 00:34:00.474 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:00.474 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:00.474 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:00.733 "name": "pt1", 00:34:00.733 "aliases": [ 00:34:00.733 "00000000-0000-0000-0000-000000000001" 00:34:00.733 ], 00:34:00.733 "product_name": "passthru", 00:34:00.733 "block_size": 4128, 00:34:00.733 "num_blocks": 8192, 00:34:00.733 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:00.733 "md_size": 32, 00:34:00.733 "md_interleave": true, 00:34:00.733 "dif_type": 0, 00:34:00.733 "assigned_rate_limits": { 00:34:00.733 "rw_ios_per_sec": 0, 00:34:00.733 "rw_mbytes_per_sec": 0, 00:34:00.733 "r_mbytes_per_sec": 0, 00:34:00.733 "w_mbytes_per_sec": 0 00:34:00.733 }, 00:34:00.733 "claimed": true, 00:34:00.733 "claim_type": "exclusive_write", 00:34:00.733 "zoned": false, 00:34:00.733 "supported_io_types": { 00:34:00.733 "read": true, 00:34:00.733 "write": true, 00:34:00.733 "unmap": true, 00:34:00.733 "flush": true, 00:34:00.733 "reset": true, 00:34:00.733 "nvme_admin": false, 00:34:00.733 "nvme_io": false, 00:34:00.733 "nvme_io_md": false, 00:34:00.733 "write_zeroes": true, 00:34:00.733 "zcopy": true, 00:34:00.733 "get_zone_info": false, 00:34:00.733 "zone_management": false, 00:34:00.733 "zone_append": false, 00:34:00.733 "compare": false, 00:34:00.733 "compare_and_write": false, 00:34:00.733 "abort": true, 00:34:00.733 "seek_hole": false, 00:34:00.733 "seek_data": false, 00:34:00.733 "copy": true, 00:34:00.733 "nvme_iov_md": false 00:34:00.733 }, 00:34:00.733 "memory_domains": [ 00:34:00.733 { 00:34:00.733 "dma_device_id": "system", 00:34:00.733 "dma_device_type": 1 00:34:00.733 }, 00:34:00.733 { 00:34:00.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:00.733 "dma_device_type": 2 00:34:00.733 } 00:34:00.733 ], 00:34:00.733 "driver_specific": { 00:34:00.733 "passthru": { 00:34:00.733 "name": "pt1", 00:34:00.733 "base_bdev_name": "malloc1" 00:34:00.733 } 00:34:00.733 } 00:34:00.733 }' 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:00.733 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:00.993 "name": "pt2", 00:34:00.993 "aliases": [ 00:34:00.993 "00000000-0000-0000-0000-000000000002" 00:34:00.993 ], 00:34:00.993 "product_name": "passthru", 00:34:00.993 "block_size": 4128, 00:34:00.993 "num_blocks": 8192, 00:34:00.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:00.993 "md_size": 32, 00:34:00.993 "md_interleave": true, 00:34:00.993 "dif_type": 0, 00:34:00.993 "assigned_rate_limits": { 00:34:00.993 "rw_ios_per_sec": 0, 00:34:00.993 "rw_mbytes_per_sec": 0, 00:34:00.993 "r_mbytes_per_sec": 0, 00:34:00.993 "w_mbytes_per_sec": 0 00:34:00.993 }, 00:34:00.993 "claimed": true, 00:34:00.993 "claim_type": "exclusive_write", 00:34:00.993 "zoned": false, 00:34:00.993 "supported_io_types": { 00:34:00.993 "read": true, 00:34:00.993 "write": true, 00:34:00.993 "unmap": true, 00:34:00.993 "flush": true, 00:34:00.993 "reset": true, 00:34:00.993 "nvme_admin": false, 00:34:00.993 "nvme_io": false, 00:34:00.993 "nvme_io_md": false, 00:34:00.993 "write_zeroes": true, 00:34:00.993 "zcopy": true, 00:34:00.993 "get_zone_info": false, 00:34:00.993 "zone_management": false, 00:34:00.993 "zone_append": false, 00:34:00.993 "compare": false, 00:34:00.993 "compare_and_write": false, 00:34:00.993 "abort": true, 00:34:00.993 "seek_hole": false, 00:34:00.993 "seek_data": false, 00:34:00.993 "copy": true, 00:34:00.993 "nvme_iov_md": false 00:34:00.993 }, 00:34:00.993 "memory_domains": [ 00:34:00.993 { 00:34:00.993 "dma_device_id": "system", 00:34:00.993 "dma_device_type": 1 00:34:00.993 }, 00:34:00.993 { 00:34:00.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:00.993 "dma_device_type": 2 00:34:00.993 } 00:34:00.993 ], 00:34:00.993 "driver_specific": { 00:34:00.993 "passthru": { 00:34:00.993 "name": "pt2", 00:34:00.993 "base_bdev_name": "malloc2" 00:34:00.993 } 00:34:00.993 } 00:34:00.993 }' 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:34:00.993 00:17:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:00.993 00:17:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:34:01.253 [2024-07-25 00:17:56.987621] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:01.253 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # '[' 2b81509c-d061-49bf-9b6c-33ed3112c788 '!=' 2b81509c-d061-49bf-9b6c-33ed3112c788 ']' 00:34:01.253 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:34:01.253 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:01.253 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:34:01.253 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:01.512 [2024-07-25 00:17:57.183480] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:01.512 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.771 00:17:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:01.771 "name": "raid_bdev1", 00:34:01.771 "uuid": "2b81509c-d061-49bf-9b6c-33ed3112c788", 00:34:01.771 "strip_size_kb": 0, 00:34:01.771 "state": "online", 00:34:01.771 "raid_level": "raid1", 00:34:01.771 "superblock": true, 00:34:01.771 "num_base_bdevs": 2, 00:34:01.771 "num_base_bdevs_discovered": 1, 00:34:01.771 "num_base_bdevs_operational": 1, 00:34:01.771 "base_bdevs_list": [ 00:34:01.771 { 00:34:01.771 "name": null, 00:34:01.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:01.771 "is_configured": false, 00:34:01.771 "data_offset": 256, 00:34:01.771 "data_size": 7936 00:34:01.771 }, 00:34:01.771 { 00:34:01.771 "name": "pt2", 00:34:01.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:01.771 "is_configured": true, 00:34:01.771 "data_offset": 256, 00:34:01.771 "data_size": 7936 00:34:01.771 } 00:34:01.771 ] 00:34:01.771 }' 00:34:01.771 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:01.771 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:02.030 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:02.030 [2024-07-25 00:17:57.879575] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:02.030 [2024-07-25 00:17:57.879728] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:02.030 [2024-07-25 00:17:57.879852] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:02.030 [2024-07-25 00:17:57.879913] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:02.030 [2024-07-25 00:17:57.879932] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:34:02.030 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.030 00:17:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:34:02.290 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:34:02.290 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:34:02.290 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:34:02.290 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:34:02.290 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:02.548 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:34:02.548 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:34:02.548 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:34:02.548 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:34:02.548 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@534 -- # i=1 00:34:02.548 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:02.806 [2024-07-25 00:17:58.523694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:02.806 [2024-07-25 00:17:58.523759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:02.806 [2024-07-25 00:17:58.523780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:34:02.806 [2024-07-25 00:17:58.523794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:02.806 [2024-07-25 00:17:58.525760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:02.806 [2024-07-25 00:17:58.525810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:02.806 [2024-07-25 00:17:58.525870] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:02.806 [2024-07-25 00:17:58.525936] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:02.806 [2024-07-25 00:17:58.526016] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80 00:34:02.806 [2024-07-25 00:17:58.526034] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:02.806 [2024-07-25 00:17:58.526121] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:34:02.806 [2024-07-25 00:17:58.526226] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80 00:34:02.806 [2024-07-25 00:17:58.526239] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80 00:34:02.806 [2024-07-25 00:17:58.526302] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:02.806 pt2 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.806 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.063 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:03.064 "name": "raid_bdev1", 00:34:03.064 "uuid": "2b81509c-d061-49bf-9b6c-33ed3112c788", 00:34:03.064 "strip_size_kb": 0, 00:34:03.064 "state": "online", 00:34:03.064 "raid_level": "raid1", 00:34:03.064 "superblock": true, 00:34:03.064 "num_base_bdevs": 2, 00:34:03.064 "num_base_bdevs_discovered": 1, 00:34:03.064 "num_base_bdevs_operational": 1, 00:34:03.064 "base_bdevs_list": [ 00:34:03.064 { 00:34:03.064 "name": null, 00:34:03.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:03.064 "is_configured": false, 00:34:03.064 "data_offset": 256, 00:34:03.064 "data_size": 7936 00:34:03.064 }, 00:34:03.064 { 00:34:03.064 "name": "pt2", 00:34:03.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:03.064 "is_configured": true, 00:34:03.064 "data_offset": 256, 00:34:03.064 "data_size": 7936 00:34:03.064 } 00:34:03.064 ] 00:34:03.064 }' 00:34:03.064 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:03.064 00:17:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:03.333 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:03.606 [2024-07-25 00:17:59.315860] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:03.606 [2024-07-25 00:17:59.315892] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:03.606 [2024-07-25 00:17:59.315957] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:03.606 [2024-07-25 00:17:59.316015] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:03.606 [2024-07-25 00:17:59.316028] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline 00:34:03.606 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:34:03.606 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:03.865 [2024-07-25 00:17:59.703918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:03.865 [2024-07-25 00:17:59.704141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:03.865 [2024-07-25 00:17:59.704178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:34:03.865 [2024-07-25 00:17:59.704202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:03.865 
[2024-07-25 00:17:59.706460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:03.865 [2024-07-25 00:17:59.706514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:03.865 [2024-07-25 00:17:59.706607] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:03.865 [2024-07-25 00:17:59.706663] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:03.865 [2024-07-25 00:17:59.706810] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:03.865 [2024-07-25 00:17:59.706826] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:03.865 [2024-07-25 00:17:59.706847] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state configuring 00:34:03.865 [2024-07-25 00:17:59.706923] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:03.865 [2024-07-25 00:17:59.707054] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:34:03.865 [2024-07-25 00:17:59.707069] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:03.865 [2024-07-25 00:17:59.707132] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:34:03.865 [2024-07-25 00:17:59.707205] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:34:03.865 [2024-07-25 00:17:59.707221] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:34:03.865 [2024-07-25 00:17:59.707302] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:03.865 pt1 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.865 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.124 
00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:04.124 "name": "raid_bdev1", 00:34:04.124 "uuid": "2b81509c-d061-49bf-9b6c-33ed3112c788", 00:34:04.124 "strip_size_kb": 0, 00:34:04.124 "state": "online", 00:34:04.124 "raid_level": "raid1", 00:34:04.124 "superblock": true, 00:34:04.124 "num_base_bdevs": 2, 00:34:04.124 "num_base_bdevs_discovered": 1, 00:34:04.124 "num_base_bdevs_operational": 1, 00:34:04.124 "base_bdevs_list": [ 00:34:04.124 { 00:34:04.124 "name": null, 00:34:04.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:04.124 "is_configured": false, 00:34:04.124 "data_offset": 256, 00:34:04.124 "data_size": 7936 00:34:04.124 }, 00:34:04.124 { 00:34:04.124 "name": "pt2", 00:34:04.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:04.124 "is_configured": true, 00:34:04.124 "data_offset": 256, 00:34:04.124 "data_size": 7936 00:34:04.124 } 00:34:04.124 ] 00:34:04.124 }' 00:34:04.124 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:04.124 00:17:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:04.383 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:34:04.383 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:04.641 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:34:04.641 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:04.641 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:34:04.900 [2024-07-25 00:18:00.624373] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # '[' 2b81509c-d061-49bf-9b6c-33ed3112c788 '!=' 2b81509c-d061-49bf-9b6c-33ed3112c788 ']' 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@578 -- # killprocess 113594 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 113594 ']' 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 113594 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113594 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:04.900 killing process with pid 113594 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113594' 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@969 -- # kill 113594 00:34:04.900 [2024-07-25 00:18:00.673449] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:04.900 [2024-07-25 00:18:00.673531] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:04.900 00:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 113594 00:34:04.901 [2024-07-25 00:18:00.673583] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:04.901 [2024-07-25 00:18:00.673602] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:34:05.158 [2024-07-25 00:18:00.809087] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:06.093 00:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@580 -- # return 0 00:34:06.093 ************************************ 00:34:06.093 END TEST raid_superblock_test_md_interleaved 00:34:06.093 ************************************ 00:34:06.093 00:34:06.093 real 0m12.576s 00:34:06.093 user 0m21.440s 00:34:06.093 sys 0m1.967s 00:34:06.093 00:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:06.093 00:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:06.093 00:18:01 bdev_raid -- bdev/bdev_raid.sh@994 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:34:06.093 00:18:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:34:06.093 00:18:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:06.093 00:18:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:06.093 ************************************ 00:34:06.093 START TEST raid_rebuild_test_sb_md_interleaved 00:34:06.093 ************************************ 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # local verify=false 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:34:06.093 00:18:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # local strip_size 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # local create_arg 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # local data_offset 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # raid_pid=114057 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # waitforlisten 114057 /var/tmp/spdk-raid.sock 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 114057 ']' 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:06.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:06.093 00:18:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:06.093 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:06.093 Zero copy mechanism will not be used. 00:34:06.093 [2024-07-25 00:18:01.889827] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:34:06.093 [2024-07-25 00:18:01.890038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114057 ] 00:34:06.352 [2024-07-25 00:18:02.056970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.352 [2024-07-25 00:18:02.215860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.610 [2024-07-25 00:18:02.377221] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:07.177 00:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:07.177 00:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:34:07.177 00:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:34:07.177 00:18:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:34:07.177 BaseBdev1_malloc 00:34:07.177 00:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:07.435 [2024-07-25 00:18:03.214911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:07.435 [2024-07-25 00:18:03.214995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:07.435 [2024-07-25 00:18:03.215023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:34:07.435 [2024-07-25 00:18:03.215039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:07.435 [2024-07-25 00:18:03.217083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:07.435 [2024-07-25 00:18:03.217124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:07.435 BaseBdev1 00:34:07.436 00:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:34:07.436 00:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:34:07.699 BaseBdev2_malloc 00:34:07.699 00:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:07.962 [2024-07-25 00:18:03.654373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:07.962 [2024-07-25 00:18:03.654458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:07.962 [2024-07-25 00:18:03.654487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:34:07.962 [2024-07-25 00:18:03.654507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:07.962 [2024-07-25 00:18:03.656660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:07.962 [2024-07-25 00:18:03.656705] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:34:07.962 BaseBdev2 00:34:07.962 00:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:34:08.220 spare_malloc 00:34:08.220 00:18:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:08.220 spare_delay 00:34:08.479 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:08.479 [2024-07-25 00:18:04.265051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:08.479 [2024-07-25 00:18:04.265102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:08.479 [2024-07-25 00:18:04.265125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:34:08.479 [2024-07-25 00:18:04.265139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:08.479 [2024-07-25 00:18:04.266969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:08.479 [2024-07-25 00:18:04.267005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:08.479 spare 00:34:08.479 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:34:08.738 [2024-07-25 00:18:04.457148] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:08.738 [2024-07-25 00:18:04.458895] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:08.738 [2024-07-25 00:18:04.459134] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:34:08.738 [2024-07-25 00:18:04.459163] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:08.738 [2024-07-25 00:18:04.459251] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:34:08.738 [2024-07-25 00:18:04.459331] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:34:08.738 [2024-07-25 00:18:04.459344] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:34:08.738 [2024-07-25 00:18:04.459411] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=2 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.738 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:08.997 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:08.997 "name": "raid_bdev1", 00:34:08.998 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:08.998 "strip_size_kb": 0, 00:34:08.998 "state": "online", 00:34:08.998 "raid_level": "raid1", 00:34:08.998 "superblock": true, 00:34:08.998 "num_base_bdevs": 2, 00:34:08.998 "num_base_bdevs_discovered": 2, 00:34:08.998 "num_base_bdevs_operational": 2, 00:34:08.998 "base_bdevs_list": [ 00:34:08.998 { 00:34:08.998 "name": "BaseBdev1", 00:34:08.998 "uuid": "b3f61130-f1b8-5210-b8cf-0f2b8601b913", 00:34:08.998 "is_configured": true, 00:34:08.998 "data_offset": 256, 00:34:08.998 "data_size": 7936 00:34:08.998 }, 00:34:08.998 { 00:34:08.998 "name": "BaseBdev2", 00:34:08.998 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:08.998 "is_configured": true, 00:34:08.998 "data_offset": 256, 00:34:08.998 "data_size": 7936 00:34:08.998 } 00:34:08.998 ] 00:34:08.998 }' 00:34:08.998 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:08.998 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:09.256 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:09.256 00:18:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:34:09.515 [2024-07-25 00:18:05.217540] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:09.515 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:34:09.515 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.515 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # '[' false = true ']' 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:34:09.774 [2024-07-25 00:18:05.605388] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.774 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:10.032 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:10.032 "name": "raid_bdev1", 00:34:10.032 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:10.032 "strip_size_kb": 0, 00:34:10.032 "state": "online", 00:34:10.032 "raid_level": "raid1", 00:34:10.032 "superblock": true, 00:34:10.032 "num_base_bdevs": 2, 00:34:10.032 "num_base_bdevs_discovered": 1, 00:34:10.032 "num_base_bdevs_operational": 1, 00:34:10.032 "base_bdevs_list": [ 00:34:10.032 { 00:34:10.032 "name": null, 00:34:10.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:10.032 "is_configured": false, 00:34:10.032 "data_offset": 256, 00:34:10.032 "data_size": 7936 00:34:10.032 }, 00:34:10.032 { 00:34:10.032 "name": "BaseBdev2", 00:34:10.032 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:10.032 "is_configured": true, 00:34:10.032 "data_offset": 256, 00:34:10.032 "data_size": 7936 00:34:10.032 } 00:34:10.032 ] 00:34:10.032 }' 00:34:10.032 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:10.032 00:18:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:10.290 00:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:10.549 [2024-07-25 00:18:06.297558] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:10.549 [2024-07-25 00:18:06.309154] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:34:10.549 [2024-07-25 00:18:06.310891] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:10.549 00:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # 
sleep 1 00:34:11.486 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:11.486 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:11.486 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:11.486 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:11.486 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:11.486 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:11.486 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.745 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:11.745 "name": "raid_bdev1", 00:34:11.745 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:11.745 "strip_size_kb": 0, 00:34:11.745 "state": "online", 00:34:11.745 "raid_level": "raid1", 00:34:11.745 "superblock": true, 00:34:11.745 "num_base_bdevs": 2, 00:34:11.745 "num_base_bdevs_discovered": 2, 00:34:11.745 "num_base_bdevs_operational": 2, 00:34:11.745 "process": { 00:34:11.745 "type": "rebuild", 00:34:11.745 "target": "spare", 00:34:11.745 "progress": { 00:34:11.745 "blocks": 3072, 00:34:11.745 "percent": 38 00:34:11.745 } 00:34:11.746 }, 00:34:11.746 "base_bdevs_list": [ 00:34:11.746 { 00:34:11.746 "name": "spare", 00:34:11.746 "uuid": "95046304-1bf8-5859-9394-153bf797010c", 00:34:11.746 "is_configured": true, 00:34:11.746 "data_offset": 256, 00:34:11.746 "data_size": 7936 00:34:11.746 }, 00:34:11.746 { 00:34:11.746 "name": "BaseBdev2", 00:34:11.746 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:11.746 "is_configured": true, 00:34:11.746 "data_offset": 256, 00:34:11.746 "data_size": 7936 00:34:11.746 } 00:34:11.746 ] 00:34:11.746 }' 00:34:11.746 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:11.746 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:11.746 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:11.746 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:11.746 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:12.005 [2024-07-25 00:18:07.821017] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:12.263 [2024-07-25 00:18:07.917711] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:12.263 [2024-07-25 00:18:07.917789] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:12.263 [2024-07-25 00:18:07.917824] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:12.263 [2024-07-25 00:18:07.917850] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:12.263 00:18:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:12.263 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:12.263 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:12.263 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:12.263 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:12.263 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:12.263 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:12.263 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:12.263 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:12.263 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:12.263 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:12.263 00:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:12.521 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:12.521 "name": "raid_bdev1", 00:34:12.521 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:12.521 "strip_size_kb": 0, 00:34:12.521 "state": "online", 00:34:12.521 "raid_level": "raid1", 00:34:12.521 "superblock": true, 00:34:12.521 "num_base_bdevs": 2, 00:34:12.521 "num_base_bdevs_discovered": 1, 00:34:12.521 "num_base_bdevs_operational": 1, 00:34:12.521 "base_bdevs_list": [ 00:34:12.521 { 00:34:12.521 "name": null, 00:34:12.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:12.521 "is_configured": false, 00:34:12.521 "data_offset": 256, 00:34:12.521 "data_size": 7936 00:34:12.521 }, 00:34:12.521 { 00:34:12.521 "name": "BaseBdev2", 00:34:12.521 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:12.521 "is_configured": true, 00:34:12.521 "data_offset": 256, 00:34:12.521 "data_size": 7936 00:34:12.521 } 00:34:12.521 ] 00:34:12.521 }' 00:34:12.521 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:12.521 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:12.779 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:12.779 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:12.779 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:12.779 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:12.779 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:12.779 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:12.779 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.037 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:13.037 "name": "raid_bdev1", 00:34:13.037 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:13.037 "strip_size_kb": 0, 00:34:13.037 "state": "online", 00:34:13.037 "raid_level": "raid1", 00:34:13.037 "superblock": true, 00:34:13.037 "num_base_bdevs": 2, 00:34:13.038 "num_base_bdevs_discovered": 1, 00:34:13.038 "num_base_bdevs_operational": 1, 00:34:13.038 "base_bdevs_list": [ 00:34:13.038 { 00:34:13.038 "name": null, 00:34:13.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:13.038 "is_configured": false, 00:34:13.038 "data_offset": 256, 00:34:13.038 "data_size": 7936 00:34:13.038 }, 00:34:13.038 { 00:34:13.038 "name": "BaseBdev2", 00:34:13.038 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:13.038 "is_configured": true, 00:34:13.038 "data_offset": 256, 00:34:13.038 "data_size": 7936 00:34:13.038 } 00:34:13.038 ] 00:34:13.038 }' 00:34:13.038 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:13.038 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:13.038 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:13.038 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:13.038 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:13.296 [2024-07-25 00:18:08.946530] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:13.296 [2024-07-25 00:18:08.957412] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:34:13.296 [2024-07-25 00:18:08.959247] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:13.296 00:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@678 -- # sleep 1 00:34:14.233 00:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:14.233 00:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:14.233 00:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:14.233 00:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:14.233 00:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:14.233 00:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:14.233 00:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:14.492 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:14.492 "name": "raid_bdev1", 00:34:14.492 "uuid": 
"7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:14.492 "strip_size_kb": 0, 00:34:14.492 "state": "online", 00:34:14.492 "raid_level": "raid1", 00:34:14.492 "superblock": true, 00:34:14.492 "num_base_bdevs": 2, 00:34:14.492 "num_base_bdevs_discovered": 2, 00:34:14.492 "num_base_bdevs_operational": 2, 00:34:14.492 "process": { 00:34:14.492 "type": "rebuild", 00:34:14.492 "target": "spare", 00:34:14.492 "progress": { 00:34:14.492 "blocks": 3072, 00:34:14.492 "percent": 38 00:34:14.492 } 00:34:14.492 }, 00:34:14.492 "base_bdevs_list": [ 00:34:14.492 { 00:34:14.492 "name": "spare", 00:34:14.492 "uuid": "95046304-1bf8-5859-9394-153bf797010c", 00:34:14.492 "is_configured": true, 00:34:14.492 "data_offset": 256, 00:34:14.492 "data_size": 7936 00:34:14.492 }, 00:34:14.492 { 00:34:14.492 "name": "BaseBdev2", 00:34:14.492 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:14.492 "is_configured": true, 00:34:14.492 "data_offset": 256, 00:34:14.493 "data_size": 7936 00:34:14.493 } 00:34:14.493 ] 00:34:14.493 }' 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:34:14.493 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # local timeout=1280 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:14.493 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:14.752 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:34:14.752 "name": "raid_bdev1", 00:34:14.752 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:14.752 "strip_size_kb": 0, 00:34:14.752 "state": "online", 00:34:14.752 "raid_level": "raid1", 00:34:14.752 "superblock": true, 00:34:14.752 "num_base_bdevs": 2, 00:34:14.752 "num_base_bdevs_discovered": 2, 00:34:14.752 "num_base_bdevs_operational": 2, 00:34:14.752 "process": { 00:34:14.752 "type": "rebuild", 00:34:14.752 "target": "spare", 00:34:14.752 "progress": { 00:34:14.752 "blocks": 3840, 00:34:14.752 "percent": 48 00:34:14.752 } 00:34:14.752 }, 00:34:14.752 "base_bdevs_list": [ 00:34:14.752 { 00:34:14.752 "name": "spare", 00:34:14.752 "uuid": "95046304-1bf8-5859-9394-153bf797010c", 00:34:14.752 "is_configured": true, 00:34:14.752 "data_offset": 256, 00:34:14.752 "data_size": 7936 00:34:14.752 }, 00:34:14.752 { 00:34:14.752 "name": "BaseBdev2", 00:34:14.752 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:14.752 "is_configured": true, 00:34:14.752 "data_offset": 256, 00:34:14.752 "data_size": 7936 00:34:14.752 } 00:34:14.752 ] 00:34:14.752 }' 00:34:14.752 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:14.752 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:14.752 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:14.752 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:14.752 00:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:34:15.688 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:34:15.688 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:15.688 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:15.688 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:15.688 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:15.688 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:15.688 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:15.688 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:15.947 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:15.948 "name": "raid_bdev1", 00:34:15.948 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:15.948 "strip_size_kb": 0, 00:34:15.948 "state": "online", 00:34:15.948 "raid_level": "raid1", 00:34:15.948 "superblock": true, 00:34:15.948 "num_base_bdevs": 2, 00:34:15.948 "num_base_bdevs_discovered": 2, 00:34:15.948 "num_base_bdevs_operational": 2, 00:34:15.948 "process": { 00:34:15.948 "type": "rebuild", 00:34:15.948 "target": "spare", 00:34:15.948 "progress": { 00:34:15.948 "blocks": 6912, 00:34:15.948 "percent": 87 00:34:15.948 } 00:34:15.948 }, 00:34:15.948 "base_bdevs_list": [ 00:34:15.948 { 00:34:15.948 "name": 
"spare", 00:34:15.948 "uuid": "95046304-1bf8-5859-9394-153bf797010c", 00:34:15.948 "is_configured": true, 00:34:15.948 "data_offset": 256, 00:34:15.948 "data_size": 7936 00:34:15.948 }, 00:34:15.948 { 00:34:15.948 "name": "BaseBdev2", 00:34:15.948 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:15.948 "is_configured": true, 00:34:15.948 "data_offset": 256, 00:34:15.948 "data_size": 7936 00:34:15.948 } 00:34:15.948 ] 00:34:15.948 }' 00:34:15.948 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:15.948 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:15.948 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:15.948 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:15.948 00:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:34:16.207 [2024-07-25 00:18:12.072359] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:16.207 [2024-07-25 00:18:12.072450] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:16.207 [2024-07-25 00:18:12.072601] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:17.144 00:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:34:17.144 00:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:17.144 00:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:17.144 00:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:17.144 00:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:17.144 00:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:17.144 00:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.144 00:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.144 00:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:17.144 "name": "raid_bdev1", 00:34:17.144 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:17.144 "strip_size_kb": 0, 00:34:17.144 "state": "online", 00:34:17.144 "raid_level": "raid1", 00:34:17.144 "superblock": true, 00:34:17.144 "num_base_bdevs": 2, 00:34:17.144 "num_base_bdevs_discovered": 2, 00:34:17.144 "num_base_bdevs_operational": 2, 00:34:17.144 "base_bdevs_list": [ 00:34:17.144 { 00:34:17.144 "name": "spare", 00:34:17.144 "uuid": "95046304-1bf8-5859-9394-153bf797010c", 00:34:17.144 "is_configured": true, 00:34:17.144 "data_offset": 256, 00:34:17.144 "data_size": 7936 00:34:17.144 }, 00:34:17.144 { 00:34:17.144 "name": "BaseBdev2", 00:34:17.144 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:17.144 "is_configured": true, 00:34:17.144 "data_offset": 256, 00:34:17.144 "data_size": 7936 00:34:17.144 } 00:34:17.144 ] 00:34:17.144 }' 00:34:17.144 00:18:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:17.144 00:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:17.144 00:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:17.144 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:34:17.144 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # break 00:34:17.144 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:17.144 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:17.144 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:17.144 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:17.144 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:17.144 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.144 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.403 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:17.403 "name": "raid_bdev1", 00:34:17.403 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:17.403 "strip_size_kb": 0, 00:34:17.403 "state": "online", 00:34:17.403 "raid_level": "raid1", 00:34:17.403 "superblock": true, 00:34:17.403 "num_base_bdevs": 2, 00:34:17.403 "num_base_bdevs_discovered": 2, 00:34:17.403 "num_base_bdevs_operational": 2, 00:34:17.403 "base_bdevs_list": [ 00:34:17.403 { 00:34:17.403 "name": "spare", 00:34:17.403 "uuid": "95046304-1bf8-5859-9394-153bf797010c", 00:34:17.403 "is_configured": true, 00:34:17.403 "data_offset": 256, 00:34:17.403 "data_size": 7936 00:34:17.403 }, 00:34:17.403 { 00:34:17.403 "name": "BaseBdev2", 00:34:17.403 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:17.403 "is_configured": true, 00:34:17.403 "data_offset": 256, 00:34:17.403 "data_size": 7936 00:34:17.403 } 00:34:17.403 ] 00:34:17.403 }' 00:34:17.403 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:17.403 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:17.403 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:17.667 "name": "raid_bdev1", 00:34:17.667 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:17.667 "strip_size_kb": 0, 00:34:17.667 "state": "online", 00:34:17.667 "raid_level": "raid1", 00:34:17.667 "superblock": true, 00:34:17.667 "num_base_bdevs": 2, 00:34:17.667 "num_base_bdevs_discovered": 2, 00:34:17.667 "num_base_bdevs_operational": 2, 00:34:17.667 "base_bdevs_list": [ 00:34:17.667 { 00:34:17.667 "name": "spare", 00:34:17.667 "uuid": "95046304-1bf8-5859-9394-153bf797010c", 00:34:17.667 "is_configured": true, 00:34:17.667 "data_offset": 256, 00:34:17.667 "data_size": 7936 00:34:17.667 }, 00:34:17.667 { 00:34:17.667 "name": "BaseBdev2", 00:34:17.667 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:17.667 "is_configured": true, 00:34:17.667 "data_offset": 256, 00:34:17.667 "data_size": 7936 00:34:17.667 } 00:34:17.667 ] 00:34:17.667 }' 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:17.667 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:17.930 00:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:18.198 [2024-07-25 00:18:14.038691] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:18.198 [2024-07-25 00:18:14.038762] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:18.198 [2024-07-25 00:18:14.038973] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:18.198 [2024-07-25 00:18:14.039104] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:18.198 [2024-07-25 00:18:14.039122] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:34:18.198 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # jq length 00:34:18.198 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.457 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:34:18.457 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@737 -- # '[' false = true ']' 00:34:18.457 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:34:18.457 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:18.715 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:18.974 [2024-07-25 00:18:14.682773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:18.974 [2024-07-25 00:18:14.682870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:18.974 [2024-07-25 00:18:14.682903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:34:18.974 [2024-07-25 00:18:14.682916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:18.974 [2024-07-25 00:18:14.684924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:18.974 [2024-07-25 00:18:14.684970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:18.974 [2024-07-25 00:18:14.685034] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:18.974 [2024-07-25 00:18:14.685103] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:18.974 [2024-07-25 00:18:14.685244] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:18.974 spare 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.974 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:18.974 [2024-07-25 00:18:14.785360] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a280 00:34:18.974 [2024-07-25 
00:18:14.785393] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:18.974 [2024-07-25 00:18:14.785534] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ba0 00:34:18.974 [2024-07-25 00:18:14.785632] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a280 00:34:18.974 [2024-07-25 00:18:14.785646] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a280 00:34:18.974 [2024-07-25 00:18:14.785717] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:19.233 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:19.233 "name": "raid_bdev1", 00:34:19.233 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:19.233 "strip_size_kb": 0, 00:34:19.233 "state": "online", 00:34:19.233 "raid_level": "raid1", 00:34:19.233 "superblock": true, 00:34:19.233 "num_base_bdevs": 2, 00:34:19.233 "num_base_bdevs_discovered": 2, 00:34:19.233 "num_base_bdevs_operational": 2, 00:34:19.233 "base_bdevs_list": [ 00:34:19.233 { 00:34:19.233 "name": "spare", 00:34:19.233 "uuid": "95046304-1bf8-5859-9394-153bf797010c", 00:34:19.233 "is_configured": true, 00:34:19.233 "data_offset": 256, 00:34:19.233 "data_size": 7936 00:34:19.233 }, 00:34:19.233 { 00:34:19.233 "name": "BaseBdev2", 00:34:19.233 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:19.233 "is_configured": true, 00:34:19.233 "data_offset": 256, 00:34:19.233 "data_size": 7936 00:34:19.233 } 00:34:19.233 ] 00:34:19.233 }' 00:34:19.233 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:19.233 00:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:19.492 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:19.492 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:19.492 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:19.492 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:19.492 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:19.492 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.492 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.751 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:19.751 "name": "raid_bdev1", 00:34:19.751 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:19.751 "strip_size_kb": 0, 00:34:19.751 "state": "online", 00:34:19.751 "raid_level": "raid1", 00:34:19.751 "superblock": true, 00:34:19.751 "num_base_bdevs": 2, 00:34:19.751 "num_base_bdevs_discovered": 2, 00:34:19.751 "num_base_bdevs_operational": 2, 00:34:19.751 "base_bdevs_list": [ 00:34:19.751 { 00:34:19.751 "name": "spare", 00:34:19.751 "uuid": "95046304-1bf8-5859-9394-153bf797010c", 00:34:19.751 "is_configured": true, 00:34:19.751 "data_offset": 256, 00:34:19.751 "data_size": 7936 00:34:19.751 }, 00:34:19.751 { 00:34:19.751 
"name": "BaseBdev2", 00:34:19.751 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:19.751 "is_configured": true, 00:34:19.751 "data_offset": 256, 00:34:19.751 "data_size": 7936 00:34:19.751 } 00:34:19.751 ] 00:34:19.751 }' 00:34:19.751 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:19.751 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:19.751 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:19.751 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:19.751 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.751 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:20.009 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:34:20.009 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:20.269 [2024-07-25 00:18:15.923138] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:20.269 00:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:20.528 00:18:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:20.528 "name": "raid_bdev1", 00:34:20.528 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:20.528 "strip_size_kb": 0, 00:34:20.528 "state": "online", 00:34:20.528 "raid_level": "raid1", 00:34:20.528 "superblock": true, 00:34:20.528 "num_base_bdevs": 2, 00:34:20.528 "num_base_bdevs_discovered": 1, 00:34:20.528 "num_base_bdevs_operational": 1, 
00:34:20.528 "base_bdevs_list": [ 00:34:20.528 { 00:34:20.528 "name": null, 00:34:20.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:20.528 "is_configured": false, 00:34:20.528 "data_offset": 256, 00:34:20.528 "data_size": 7936 00:34:20.528 }, 00:34:20.528 { 00:34:20.528 "name": "BaseBdev2", 00:34:20.528 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:20.528 "is_configured": true, 00:34:20.528 "data_offset": 256, 00:34:20.528 "data_size": 7936 00:34:20.528 } 00:34:20.528 ] 00:34:20.528 }' 00:34:20.528 00:18:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:20.528 00:18:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:20.787 00:18:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:21.046 [2024-07-25 00:18:16.699416] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:21.046 [2024-07-25 00:18:16.699659] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:21.046 [2024-07-25 00:18:16.699682] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:21.046 [2024-07-25 00:18:16.699769] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:21.046 [2024-07-25 00:18:16.710461] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005c70 00:34:21.046 [2024-07-25 00:18:16.712260] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:21.046 00:18:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # sleep 1 00:34:21.981 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:21.981 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:21.981 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:21.981 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:21.981 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:21.981 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:21.981 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.240 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:22.240 "name": "raid_bdev1", 00:34:22.240 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:22.240 "strip_size_kb": 0, 00:34:22.240 "state": "online", 00:34:22.240 "raid_level": "raid1", 00:34:22.240 "superblock": true, 00:34:22.240 "num_base_bdevs": 2, 00:34:22.240 "num_base_bdevs_discovered": 2, 00:34:22.240 "num_base_bdevs_operational": 2, 00:34:22.240 "process": { 00:34:22.240 "type": "rebuild", 00:34:22.240 "target": "spare", 00:34:22.240 "progress": { 00:34:22.240 "blocks": 3072, 00:34:22.240 "percent": 38 00:34:22.240 } 00:34:22.240 }, 00:34:22.240 
"base_bdevs_list": [ 00:34:22.240 { 00:34:22.240 "name": "spare", 00:34:22.240 "uuid": "95046304-1bf8-5859-9394-153bf797010c", 00:34:22.240 "is_configured": true, 00:34:22.240 "data_offset": 256, 00:34:22.240 "data_size": 7936 00:34:22.240 }, 00:34:22.240 { 00:34:22.240 "name": "BaseBdev2", 00:34:22.240 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:22.240 "is_configured": true, 00:34:22.240 "data_offset": 256, 00:34:22.240 "data_size": 7936 00:34:22.240 } 00:34:22.240 ] 00:34:22.240 }' 00:34:22.240 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:22.240 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:22.240 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:22.240 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:22.240 00:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:22.499 [2024-07-25 00:18:18.222509] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:22.499 [2024-07-25 00:18:18.319143] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:22.499 [2024-07-25 00:18:18.319229] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:22.499 [2024-07-25 00:18:18.319250] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:22.499 [2024-07-25 00:18:18.319264] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.499 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.757 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:34:22.757 "name": "raid_bdev1", 00:34:22.757 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:22.757 "strip_size_kb": 0, 00:34:22.757 "state": "online", 00:34:22.757 "raid_level": "raid1", 00:34:22.757 "superblock": true, 00:34:22.757 "num_base_bdevs": 2, 00:34:22.757 "num_base_bdevs_discovered": 1, 00:34:22.757 "num_base_bdevs_operational": 1, 00:34:22.757 "base_bdevs_list": [ 00:34:22.757 { 00:34:22.757 "name": null, 00:34:22.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.757 "is_configured": false, 00:34:22.757 "data_offset": 256, 00:34:22.757 "data_size": 7936 00:34:22.757 }, 00:34:22.757 { 00:34:22.757 "name": "BaseBdev2", 00:34:22.757 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:22.757 "is_configured": true, 00:34:22.757 "data_offset": 256, 00:34:22.757 "data_size": 7936 00:34:22.757 } 00:34:22.757 ] 00:34:22.757 }' 00:34:22.757 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:22.757 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:23.324 00:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:23.324 [2024-07-25 00:18:19.126561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:23.324 [2024-07-25 00:18:19.126643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:23.324 [2024-07-25 00:18:19.126673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:34:23.324 [2024-07-25 00:18:19.126689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:23.324 [2024-07-25 00:18:19.126956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:23.324 [2024-07-25 00:18:19.126991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:23.324 [2024-07-25 00:18:19.127053] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:23.324 [2024-07-25 00:18:19.127083] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:23.324 [2024-07-25 00:18:19.127098] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:23.324 [2024-07-25 00:18:19.127125] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:23.324 [2024-07-25 00:18:19.139294] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005d40 00:34:23.324 spare 00:34:23.324 [2024-07-25 00:18:19.141543] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:23.324 00:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:24.699 "name": "raid_bdev1", 00:34:24.699 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:24.699 "strip_size_kb": 0, 00:34:24.699 "state": "online", 00:34:24.699 "raid_level": "raid1", 00:34:24.699 "superblock": true, 00:34:24.699 "num_base_bdevs": 2, 00:34:24.699 "num_base_bdevs_discovered": 2, 00:34:24.699 "num_base_bdevs_operational": 2, 00:34:24.699 "process": { 00:34:24.699 "type": "rebuild", 00:34:24.699 "target": "spare", 00:34:24.699 "progress": { 00:34:24.699 "blocks": 3072, 00:34:24.699 "percent": 38 00:34:24.699 } 00:34:24.699 }, 00:34:24.699 "base_bdevs_list": [ 00:34:24.699 { 00:34:24.699 "name": "spare", 00:34:24.699 "uuid": "95046304-1bf8-5859-9394-153bf797010c", 00:34:24.699 "is_configured": true, 00:34:24.699 "data_offset": 256, 00:34:24.699 "data_size": 7936 00:34:24.699 }, 00:34:24.699 { 00:34:24.699 "name": "BaseBdev2", 00:34:24.699 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:24.699 "is_configured": true, 00:34:24.699 "data_offset": 256, 00:34:24.699 "data_size": 7936 00:34:24.699 } 00:34:24.699 ] 00:34:24.699 }' 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:24.699 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:24.958 [2024-07-25 00:18:20.655221] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:24.958 [2024-07-25 00:18:20.748316] 
bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:24.958 [2024-07-25 00:18:20.748375] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:24.958 [2024-07-25 00:18:20.748400] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:24.958 [2024-07-25 00:18:20.748410] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.958 00:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:25.217 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:25.217 "name": "raid_bdev1", 00:34:25.217 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:25.217 "strip_size_kb": 0, 00:34:25.217 "state": "online", 00:34:25.217 "raid_level": "raid1", 00:34:25.217 "superblock": true, 00:34:25.217 "num_base_bdevs": 2, 00:34:25.217 "num_base_bdevs_discovered": 1, 00:34:25.217 "num_base_bdevs_operational": 1, 00:34:25.217 "base_bdevs_list": [ 00:34:25.217 { 00:34:25.217 "name": null, 00:34:25.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:25.217 "is_configured": false, 00:34:25.217 "data_offset": 256, 00:34:25.217 "data_size": 7936 00:34:25.217 }, 00:34:25.217 { 00:34:25.217 "name": "BaseBdev2", 00:34:25.217 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:25.217 "is_configured": true, 00:34:25.217 "data_offset": 256, 00:34:25.217 "data_size": 7936 00:34:25.217 } 00:34:25.217 ] 00:34:25.217 }' 00:34:25.217 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:25.217 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
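The verify_raid_bdev_state check just completed reduces to one RPC dump plus jq assertions; a condensed sketch against the same socket:

    tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$tmp") == online ]]                 # expected_state
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$tmp") == 1 ]]  # after removing spare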
00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:25.784 "name": "raid_bdev1", 00:34:25.784 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:25.784 "strip_size_kb": 0, 00:34:25.784 "state": "online", 00:34:25.784 "raid_level": "raid1", 00:34:25.784 "superblock": true, 00:34:25.784 "num_base_bdevs": 2, 00:34:25.784 "num_base_bdevs_discovered": 1, 00:34:25.784 "num_base_bdevs_operational": 1, 00:34:25.784 "base_bdevs_list": [ 00:34:25.784 { 00:34:25.784 "name": null, 00:34:25.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:25.784 "is_configured": false, 00:34:25.784 "data_offset": 256, 00:34:25.784 "data_size": 7936 00:34:25.784 }, 00:34:25.784 { 00:34:25.784 "name": "BaseBdev2", 00:34:25.784 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:25.784 "is_configured": true, 00:34:25.784 "data_offset": 256, 00:34:25.784 "data_size": 7936 00:34:25.784 } 00:34:25.784 ] 00:34:25.784 }' 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:25.784 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:34:26.042 00:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:26.300 [2024-07-25 00:18:22.139574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:26.300 [2024-07-25 00:18:22.139770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:26.300 [2024-07-25 00:18:22.139849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:34:26.300 [2024-07-25 00:18:22.139867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:26.300 [2024-07-25 00:18:22.140031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:26.300 [2024-07-25 00:18:22.140051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:26.300 [2024-07-25 00:18:22.140122] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:26.300 [2024-07-25 00:18:22.140139] bdev_raid.c:3654:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:26.300 [2024-07-25 00:18:22.140150] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:26.300 BaseBdev1 00:34:26.300 00:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # sleep 1 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:27.674 "name": "raid_bdev1", 00:34:27.674 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:27.674 "strip_size_kb": 0, 00:34:27.674 "state": "online", 00:34:27.674 "raid_level": "raid1", 00:34:27.674 "superblock": true, 00:34:27.674 "num_base_bdevs": 2, 00:34:27.674 "num_base_bdevs_discovered": 1, 00:34:27.674 "num_base_bdevs_operational": 1, 00:34:27.674 "base_bdevs_list": [ 00:34:27.674 { 00:34:27.674 "name": null, 00:34:27.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.674 "is_configured": false, 00:34:27.674 "data_offset": 256, 00:34:27.674 "data_size": 7936 00:34:27.674 }, 00:34:27.674 { 00:34:27.674 "name": "BaseBdev2", 00:34:27.674 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:27.674 "is_configured": true, 00:34:27.674 "data_offset": 256, 00:34:27.674 "data_size": 7936 00:34:27.674 } 00:34:27.674 ] 00:34:27.674 }' 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:27.674 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:27.931 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:27.931 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:27.931 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:34:27.931 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:27.931 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:27.931 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:27.931 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:28.188 "name": "raid_bdev1", 00:34:28.188 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:28.188 "strip_size_kb": 0, 00:34:28.188 "state": "online", 00:34:28.188 "raid_level": "raid1", 00:34:28.188 "superblock": true, 00:34:28.188 "num_base_bdevs": 2, 00:34:28.188 "num_base_bdevs_discovered": 1, 00:34:28.188 "num_base_bdevs_operational": 1, 00:34:28.188 "base_bdevs_list": [ 00:34:28.188 { 00:34:28.188 "name": null, 00:34:28.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.188 "is_configured": false, 00:34:28.188 "data_offset": 256, 00:34:28.188 "data_size": 7936 00:34:28.188 }, 00:34:28.188 { 00:34:28.188 "name": "BaseBdev2", 00:34:28.188 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:28.188 "is_configured": true, 00:34:28.188 "data_offset": 256, 00:34:28.188 "data_size": 7936 00:34:28.188 } 00:34:28.188 ] 00:34:28.188 }' 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:28.188 00:18:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:28.188 00:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:28.447 [2024-07-25 00:18:24.204100] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:28.447 [2024-07-25 00:18:24.204418] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:28.447 [2024-07-25 00:18:24.204443] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:28.447 request: 00:34:28.447 { 00:34:28.447 "base_bdev": "BaseBdev1", 00:34:28.447 "raid_bdev": "raid_bdev1", 00:34:28.447 "method": "bdev_raid_add_base_bdev", 00:34:28.447 "req_id": 1 00:34:28.447 } 00:34:28.447 Got JSON-RPC error response 00:34:28.447 response: 00:34:28.447 { 00:34:28.447 "code": -22, 00:34:28.447 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:28.447 } 00:34:28.447 00:18:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:34:28.447 00:18:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:28.447 00:18:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:28.447 00:18:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:28.447 00:18:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@793 -- # sleep 1 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.381 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:29.640 
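The NOT wrapper above simply inverts the exit status, so the negative check amounts to this sketch (names from the trace):

    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
        echo 'expected failure (-22 Invalid argument), got success' >&2
        exit 1
    fi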
00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:29.640 "name": "raid_bdev1", 00:34:29.640 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:29.640 "strip_size_kb": 0, 00:34:29.640 "state": "online", 00:34:29.640 "raid_level": "raid1", 00:34:29.640 "superblock": true, 00:34:29.640 "num_base_bdevs": 2, 00:34:29.640 "num_base_bdevs_discovered": 1, 00:34:29.640 "num_base_bdevs_operational": 1, 00:34:29.640 "base_bdevs_list": [ 00:34:29.640 { 00:34:29.640 "name": null, 00:34:29.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.640 "is_configured": false, 00:34:29.640 "data_offset": 256, 00:34:29.640 "data_size": 7936 00:34:29.640 }, 00:34:29.640 { 00:34:29.640 "name": "BaseBdev2", 00:34:29.640 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:29.640 "is_configured": true, 00:34:29.640 "data_offset": 256, 00:34:29.640 "data_size": 7936 00:34:29.640 } 00:34:29.640 ] 00:34:29.640 }' 00:34:29.640 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:29.640 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:30.206 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:30.206 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:30.206 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:30.206 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:30.206 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:30.206 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:30.206 00:18:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:30.206 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:30.206 "name": "raid_bdev1", 00:34:30.206 "uuid": "7ca020a6-1c7a-4562-b5d8-6081f9c17673", 00:34:30.206 "strip_size_kb": 0, 00:34:30.206 "state": "online", 00:34:30.206 "raid_level": "raid1", 00:34:30.206 "superblock": true, 00:34:30.206 "num_base_bdevs": 2, 00:34:30.206 "num_base_bdevs_discovered": 1, 00:34:30.206 "num_base_bdevs_operational": 1, 00:34:30.206 "base_bdevs_list": [ 00:34:30.206 { 00:34:30.206 "name": null, 00:34:30.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:30.206 "is_configured": false, 00:34:30.206 "data_offset": 256, 00:34:30.206 "data_size": 7936 00:34:30.206 }, 00:34:30.206 { 00:34:30.206 "name": "BaseBdev2", 00:34:30.206 "uuid": "97561791-8de6-5bab-b4da-8b915daa8f2b", 00:34:30.206 "is_configured": true, 00:34:30.206 "data_offset": 256, 00:34:30.206 "data_size": 7936 00:34:30.206 } 00:34:30.206 ] 00:34:30.206 }' 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:30.465 00:18:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@798 -- # killprocess 114057 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 114057 ']' 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 114057 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114057 00:34:30.465 killing process with pid 114057 00:34:30.465 Received shutdown signal, test time was about 60.000000 seconds 00:34:30.465 00:34:30.465 Latency(us) 00:34:30.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:30.465 =================================================================================================================== 00:34:30.465 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114057' 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 114057 00:34:30.465 00:18:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 114057 00:34:30.465 [2024-07-25 00:18:26.129439] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:30.465 [2024-07-25 00:18:26.129543] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:30.465 [2024-07-25 00:18:26.129648] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:30.465 [2024-07-25 00:18:26.129710] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state offline 00:34:30.465 [2024-07-25 00:18:26.322106] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:31.402 00:18:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@800 -- # return 0 00:34:31.402 00:34:31.402 real 0m25.405s 00:34:31.402 user 0m38.023s 00:34:31.402 sys 0m2.688s 00:34:31.402 00:18:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:31.402 00:18:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:31.402 ************************************ 00:34:31.402 END TEST raid_rebuild_test_sb_md_interleaved 00:34:31.402 ************************************ 00:34:31.660 00:18:27 bdev_raid -- bdev/bdev_raid.sh@996 -- # trap - EXIT 00:34:31.661 00:18:27 bdev_raid -- bdev/bdev_raid.sh@997 -- # cleanup 00:34:31.661 00:18:27 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 114057 ']' 00:34:31.661 00:18:27 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 114057 00:34:31.661 00:18:27 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:34:31.661 00:34:31.661 
real 21m7.149s 00:34:31.661 user 33m35.418s 00:34:31.661 sys 3m6.857s 00:34:31.661 00:18:27 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:31.661 00:18:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:31.661 ************************************ 00:34:31.661 END TEST bdev_raid 00:34:31.661 ************************************ 00:34:31.661 00:18:27 -- spdk/autotest.sh@195 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:34:31.661 00:18:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:31.661 00:18:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:31.661 00:18:27 -- common/autotest_common.sh@10 -- # set +x 00:34:31.661 ************************************ 00:34:31.661 START TEST bdevperf_config 00:34:31.661 ************************************ 00:34:31.661 00:18:27 bdevperf_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:34:31.661 * Looking for test storage... 00:34:31.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:34:31.661 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:31.661 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:31.661 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:31.661 00:18:27 bdevperf_config -- 
bdevperf/common.sh@10 -- # local filename= 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:31.661 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:31.661 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:31.661 00:18:27 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:35.859 00:18:31 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-25 00:18:27.519148] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:34:35.859 [2024-07-25 00:18:27.519322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114812 ] 00:34:35.859 Using job config with 4 jobs 00:34:35.859 [2024-07-25 00:18:27.689952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.859 [2024-07-25 00:18:27.853657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.859 cpumask for '\''job0'\'' is too big 00:34:35.859 cpumask for '\''job1'\'' is too big 00:34:35.859 cpumask for '\''job2'\'' is too big 00:34:35.859 cpumask for '\''job3'\'' is too big 00:34:35.859 Running I/O for 2 seconds... 
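The launch below follows the pattern used for every run in this suite; a sketch with the flags spelled out (paths from the trace; the role of conf.json is inferred from the Malloc0 bdevs the jobs target):

    # -t 2       : run I/O for 2 seconds
    # --json ... : bdev configuration (sets up the Malloc bdevs)
    # -j ...     : job file assembled by the create_job calls above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json \
        -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf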
00:34:35.859 00:34:35.859 Latency(us) 00:34:35.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.01 29856.81 29.16 0.00 0.00 8563.00 1556.48 13285.93 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.02 29836.55 29.14 0.00 0.00 8554.05 1467.11 11796.48 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.02 29816.93 29.12 0.00 0.00 8544.47 1675.64 11081.54 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.02 29796.13 29.10 0.00 0.00 8533.70 1601.16 11141.12 00:34:35.859 =================================================================================================================== 00:34:35.859 Total : 119306.41 116.51 0.00 0.00 8548.81 1467.11 13285.93' 00:34:35.859 00:18:31 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-25 00:18:27.519148] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:34:35.859 [2024-07-25 00:18:27.519322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114812 ] 00:34:35.859 Using job config with 4 jobs 00:34:35.859 [2024-07-25 00:18:27.689952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.859 [2024-07-25 00:18:27.853657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.859 cpumask for '\''job0'\'' is too big 00:34:35.859 cpumask for '\''job1'\'' is too big 00:34:35.859 cpumask for '\''job2'\'' is too big 00:34:35.859 cpumask for '\''job3'\'' is too big 00:34:35.859 Running I/O for 2 seconds... 00:34:35.859 00:34:35.859 Latency(us) 00:34:35.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.01 29856.81 29.16 0.00 0.00 8563.00 1556.48 13285.93 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.02 29836.55 29.14 0.00 0.00 8554.05 1467.11 11796.48 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.02 29816.93 29.12 0.00 0.00 8544.47 1675.64 11081.54 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.02 29796.13 29.10 0.00 0.00 8533.70 1601.16 11141.12 00:34:35.859 =================================================================================================================== 00:34:35.859 Total : 119306.41 116.51 0.00 0.00 8548.81 1467.11 13285.93' 00:34:35.859 00:18:31 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 00:18:27.519148] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:34:35.859 [2024-07-25 00:18:27.519322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114812 ] 00:34:35.859 Using job config with 4 jobs 00:34:35.859 [2024-07-25 00:18:27.689952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.859 [2024-07-25 00:18:27.853657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.859 cpumask for '\''job0'\'' is too big 00:34:35.859 cpumask for '\''job1'\'' is too big 00:34:35.859 cpumask for '\''job2'\'' is too big 00:34:35.859 cpumask for '\''job3'\'' is too big 00:34:35.859 Running I/O for 2 seconds... 00:34:35.859 00:34:35.859 Latency(us) 00:34:35.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.01 29856.81 29.16 0.00 0.00 8563.00 1556.48 13285.93 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.02 29836.55 29.14 0.00 0.00 8554.05 1467.11 11796.48 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.02 29816.93 29.12 0.00 0.00 8544.47 1675.64 11081.54 00:34:35.859 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:35.859 Malloc0 : 2.02 29796.13 29.10 0.00 0.00 8533.70 1601.16 11141.12 00:34:35.859 =================================================================================================================== 00:34:35.859 Total : 119306.41 116.51 0.00 0.00 8548.81 1467.11 13285.93' 00:34:35.859 00:18:31 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:34:35.859 00:18:31 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:34:35.859 00:18:31 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:34:35.859 00:18:31 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:35.859 [2024-07-25 00:18:31.382592] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:34:35.859 [2024-07-25 00:18:31.382988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114853 ] 00:34:35.859 [2024-07-25 00:18:31.554150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.859 [2024-07-25 00:18:31.711239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.424 cpumask for 'job0' is too big 00:34:36.424 cpumask for 'job1' is too big 00:34:36.424 cpumask for 'job2' is too big 00:34:36.424 cpumask for 'job3' is too big 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:34:39.703 Running I/O for 2 seconds... 
00:34:39.703 00:34:39.703 Latency(us) 00:34:39.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:39.703 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:39.703 Malloc0 : 2.01 30018.51 29.31 0.00 0.00 8524.42 1549.03 13047.62 00:34:39.703 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:39.703 Malloc0 : 2.01 29995.60 29.29 0.00 0.00 8515.30 1452.22 12392.26 00:34:39.703 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:39.703 Malloc0 : 2.02 30035.97 29.33 0.00 0.00 8488.23 1482.01 11856.06 00:34:39.703 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:39.703 Malloc0 : 2.02 30015.73 29.31 0.00 0.00 8479.23 1474.56 11915.64 00:34:39.703 =================================================================================================================== 00:34:39.703 Total : 120065.81 117.25 0.00 0.00 8501.76 1452.22 13047.62' 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:39.703 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:39.703 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:39.703 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:39.703 00:18:35 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
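Based on the three create_job calls just above, the generated test.conf should have roughly this INI shape (the section names come from the trace's job='[...]' assignments; the key names are an assumption, since the real ones are emitted by bdevperf/common.sh):

    [job0]
    rw=write
    filename=Malloc0
    [job1]
    rw=write
    filename=Malloc0
    [job2]
    rw=write
    filename=Malloc0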
00:34:43.892 00:18:39 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-25 00:18:35.251199] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:34:43.892 [2024-07-25 00:18:35.251374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114896 ] 00:34:43.892 Using job config with 3 jobs 00:34:43.892 [2024-07-25 00:18:35.422690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.892 [2024-07-25 00:18:35.589559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.892 cpumask for '\''job0'\'' is too big 00:34:43.892 cpumask for '\''job1'\'' is too big 00:34:43.892 cpumask for '\''job2'\'' is too big 00:34:43.892 Running I/O for 2 seconds... 00:34:43.892 00:34:43.892 Latency(us) 00:34:43.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:43.892 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:43.892 Malloc0 : 2.01 40785.82 39.83 0.00 0.00 6270.89 1467.11 9353.77 00:34:43.892 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:43.892 Malloc0 : 2.01 40758.98 39.80 0.00 0.00 6263.40 1504.35 8162.21 00:34:43.892 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:43.892 Malloc0 : 2.01 40817.06 39.86 0.00 0.00 6243.52 688.87 8162.21 00:34:43.892 =================================================================================================================== 00:34:43.892 Total : 122361.86 119.49 0.00 0.00 6259.25 688.87 9353.77' 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-25 00:18:35.251199] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:34:43.892 [2024-07-25 00:18:35.251374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114896 ] 00:34:43.892 Using job config with 3 jobs 00:34:43.892 [2024-07-25 00:18:35.422690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.892 [2024-07-25 00:18:35.589559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.892 cpumask for '\''job0'\'' is too big 00:34:43.892 cpumask for '\''job1'\'' is too big 00:34:43.892 cpumask for '\''job2'\'' is too big 00:34:43.892 Running I/O for 2 seconds... 
00:34:43.892 00:34:43.892 Latency(us) 00:34:43.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:43.892 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:43.892 Malloc0 : 2.01 40785.82 39.83 0.00 0.00 6270.89 1467.11 9353.77 00:34:43.892 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:43.892 Malloc0 : 2.01 40758.98 39.80 0.00 0.00 6263.40 1504.35 8162.21 00:34:43.892 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:43.892 Malloc0 : 2.01 40817.06 39.86 0.00 0.00 6243.52 688.87 8162.21 00:34:43.892 =================================================================================================================== 00:34:43.892 Total : 122361.86 119.49 0.00 0.00 6259.25 688.87 9353.77' 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 00:18:35.251199] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:34:43.892 [2024-07-25 00:18:35.251374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114896 ] 00:34:43.892 Using job config with 3 jobs 00:34:43.892 [2024-07-25 00:18:35.422690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.892 [2024-07-25 00:18:35.589559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.892 cpumask for '\''job0'\'' is too big 00:34:43.892 cpumask for '\''job1'\'' is too big 00:34:43.892 cpumask for '\''job2'\'' is too big 00:34:43.892 Running I/O for 2 seconds... 00:34:43.892 00:34:43.892 Latency(us) 00:34:43.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:43.892 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:43.892 Malloc0 : 2.01 40785.82 39.83 0.00 0.00 6270.89 1467.11 9353.77 00:34:43.892 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:43.892 Malloc0 : 2.01 40758.98 39.80 0.00 0.00 6263.40 1504.35 8162.21 00:34:43.892 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:43.892 Malloc0 : 2.01 40817.06 39.86 0.00 0.00 6243.52 688.87 8162.21 00:34:43.892 =================================================================================================================== 00:34:43.892 Total : 122361.86 119.49 0.00 0.00 6259.25 688.87 9353.77' 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:34:43.892 
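The [[ 3 == \3 ]] comparison above is fed by a two-stage grep over the captured bdevperf output. A sketch of the helper, inferred from the pipeline traced at common.sh@32 (the name get_num_jobs is taken from the trace; the body is a reconstruction, not a verbatim quote of common.sh):

    # Extract N from the "Using job config with N jobs" notice in bdevperf output.
    get_num_jobs() {
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }

    # Typical use, matching the check traced in test_config.sh:
    #   [[ $(get_num_jobs "$bdevperf_output") == "3" ]]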
00:18:39 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:34:43.892 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:43.892 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:43.892 00:18:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:43.893 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:43.893 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:43.893 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:43.893 00:18:39 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:47.177 00:18:42 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-25 00:18:39.144900] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:34:47.177 [2024-07-25 00:18:39.145073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114950 ] 00:34:47.177 Using job config with 4 jobs 00:34:47.177 [2024-07-25 00:18:39.314385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.177 [2024-07-25 00:18:39.474931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.177 cpumask for '\''job0'\'' is too big 00:34:47.177 cpumask for '\''job1'\'' is too big 00:34:47.177 cpumask for '\''job2'\'' is too big 00:34:47.177 cpumask for '\''job3'\'' is too big 00:34:47.177 Running I/O for 2 seconds... 00:34:47.177 00:34:47.177 Latency(us) 00:34:47.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.177 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.177 Malloc0 : 2.03 14849.41 14.50 0.00 0.00 17225.62 3425.75 27763.43 00:34:47.177 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.177 Malloc1 : 2.04 14839.19 14.49 0.00 0.00 17222.18 4021.53 27763.43 00:34:47.177 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.177 Malloc0 : 2.04 14829.61 14.48 0.00 0.00 17180.70 3261.91 24546.21 00:34:47.177 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.177 Malloc1 : 2.04 14819.60 14.47 0.00 0.00 17176.20 3902.37 24427.05 00:34:47.177 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.177 Malloc0 : 2.04 14810.02 14.46 0.00 0.00 17134.31 3187.43 21328.99 00:34:47.177 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.177 Malloc1 : 2.04 14799.99 14.45 0.00 0.00 17135.90 3872.58 21209.83 00:34:47.177 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.177 Malloc0 : 2.04 14790.51 14.44 0.00 0.00 17100.13 3172.54 20852.36 00:34:47.177 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.177 Malloc1 : 2.04 14780.50 14.43 0.00 0.00 17096.65 3842.79 20852.36 00:34:47.177 =================================================================================================================== 00:34:47.177 Total : 118518.83 115.74 0.00 0.00 17158.96 3172.54 27763.43' 00:34:47.177 00:18:42 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-25 00:18:39.144900] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:34:47.177 [2024-07-25 00:18:39.145073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114950 ] 00:34:47.177 Using job config with 4 jobs 00:34:47.177 [2024-07-25 00:18:39.314385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.177 [2024-07-25 00:18:39.474931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.177 cpumask for '\''job0'\'' is too big 00:34:47.177 cpumask for '\''job1'\'' is too big 00:34:47.177 cpumask for '\''job2'\'' is too big 00:34:47.177 cpumask for '\''job3'\'' is too big 00:34:47.177 Running I/O for 2 seconds... 00:34:47.177 00:34:47.177 Latency(us) 00:34:47.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.178 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc0 : 2.03 14849.41 14.50 0.00 0.00 17225.62 3425.75 27763.43 00:34:47.178 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc1 : 2.04 14839.19 14.49 0.00 0.00 17222.18 4021.53 27763.43 00:34:47.178 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc0 : 2.04 14829.61 14.48 0.00 0.00 17180.70 3261.91 24546.21 00:34:47.178 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc1 : 2.04 14819.60 14.47 0.00 0.00 17176.20 3902.37 24427.05 00:34:47.178 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc0 : 2.04 14810.02 14.46 0.00 0.00 17134.31 3187.43 21328.99 00:34:47.178 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc1 : 2.04 14799.99 14.45 0.00 0.00 17135.90 3872.58 21209.83 00:34:47.178 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc0 : 2.04 14790.51 14.44 0.00 0.00 17100.13 3172.54 20852.36 00:34:47.178 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc1 : 2.04 14780.50 14.43 0.00 0.00 17096.65 3842.79 20852.36 00:34:47.178 =================================================================================================================== 00:34:47.178 Total : 118518.83 115.74 0.00 0.00 17158.96 3172.54 27763.43' 00:34:47.178 00:18:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:34:47.178 00:18:42 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 00:18:39.144900] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:34:47.178 [2024-07-25 00:18:39.145073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114950 ] 00:34:47.178 Using job config with 4 jobs 00:34:47.178 [2024-07-25 00:18:39.314385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.178 [2024-07-25 00:18:39.474931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.178 cpumask for '\''job0'\'' is too big 00:34:47.178 cpumask for '\''job1'\'' is too big 00:34:47.178 cpumask for '\''job2'\'' is too big 00:34:47.178 cpumask for '\''job3'\'' is too big 00:34:47.178 Running I/O for 2 seconds... 00:34:47.178 00:34:47.178 Latency(us) 00:34:47.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.178 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc0 : 2.03 14849.41 14.50 0.00 0.00 17225.62 3425.75 27763.43 00:34:47.178 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc1 : 2.04 14839.19 14.49 0.00 0.00 17222.18 4021.53 27763.43 00:34:47.178 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc0 : 2.04 14829.61 14.48 0.00 0.00 17180.70 3261.91 24546.21 00:34:47.178 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc1 : 2.04 14819.60 14.47 0.00 0.00 17176.20 3902.37 24427.05 00:34:47.178 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc0 : 2.04 14810.02 14.46 0.00 0.00 17134.31 3187.43 21328.99 00:34:47.178 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc1 : 2.04 14799.99 14.45 0.00 0.00 17135.90 3872.58 21209.83 00:34:47.178 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc0 : 2.04 14790.51 14.44 0.00 0.00 17100.13 3172.54 20852.36 00:34:47.178 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:47.178 Malloc1 : 2.04 14780.50 14.43 0.00 0.00 17096.65 3842.79 20852.36 00:34:47.178 =================================================================================================================== 00:34:47.178 Total : 118518.83 115.74 0.00 0.00 17158.96 3172.54 27763.43' 00:34:47.178 00:18:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:34:47.178 00:18:42 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:34:47.178 00:18:42 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:34:47.178 00:18:42 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:47.178 00:18:42 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:34:47.178 ************************************ 00:34:47.178 END TEST bdevperf_config 00:34:47.178 ************************************ 00:34:47.178 00:34:47.178 real 0m15.644s 00:34:47.178 user 0m14.107s 00:34:47.178 sys 0m1.045s 00:34:47.178 00:18:42 bdevperf_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:47.178 00:18:42 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:34:47.178 00:18:43 -- spdk/autotest.sh@196 -- # uname -s 00:34:47.178 00:18:43 -- 
spdk/autotest.sh@196 -- # [[ Linux == Linux ]] 00:34:47.178 00:18:43 -- spdk/autotest.sh@197 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:47.178 00:18:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:47.178 00:18:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:47.178 00:18:43 -- common/autotest_common.sh@10 -- # set +x 00:34:47.437 ************************************ 00:34:47.437 START TEST reactor_set_interrupt 00:34:47.437 ************************************ 00:34:47.437 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:47.437 * Looking for test storage... 00:34:47.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:34:47.437 00:18:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:34:47.437 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:47.437 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:34:47.437 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:34:47.437 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:34:47.437 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:47.437 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:34:47.437 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:34:47.437 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:34:47.437 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:34:47.437 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:34:47.437 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:34:47.437 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:34:47.437 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:34:47.437 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:34:47.437 00:18:43 
reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 
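Each CONFIG_* flag sourced from build_config.sh in this stretch has a mechanically generated SPDK_CONFIG_* counterpart in include/spdk/config.h, which applications.sh dumps just below. Illustrative pairs, with both sides taken verbatim from this run:

    CONFIG_ASAN=y            ->  #define SPDK_CONFIG_ASAN 1
    CONFIG_RBD=n             ->  #undef SPDK_CONFIG_RBD
    CONFIG_MAX_LCORES=128    ->  #define SPDK_CONFIG_MAX_LCORES 128

The glob test against *#define SPDK_CONFIG_DEBUG* further down only gates the SPDK_AUTOTEST_DEBUG_APPS check.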
00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:34:47.437 00:18:43 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@81 -- # 
CONFIG_DPDK_COMPRESSDEV=n 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:34:47.438 00:18:43 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:34:47.438 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:34:47.438 #define SPDK_CONFIG_H 00:34:47.438 #define SPDK_CONFIG_APPS 1 00:34:47.438 #define SPDK_CONFIG_ARCH native 00:34:47.438 #define SPDK_CONFIG_ASAN 1 00:34:47.438 #undef SPDK_CONFIG_AVAHI 00:34:47.438 #undef SPDK_CONFIG_CET 00:34:47.438 #define SPDK_CONFIG_COVERAGE 1 00:34:47.438 #define SPDK_CONFIG_CROSS_PREFIX 00:34:47.438 #undef SPDK_CONFIG_CRYPTO 00:34:47.438 #undef SPDK_CONFIG_CRYPTO_MLX5 00:34:47.438 #undef SPDK_CONFIG_CUSTOMOCF 00:34:47.438 #undef SPDK_CONFIG_DAOS 00:34:47.438 #define SPDK_CONFIG_DAOS_DIR 00:34:47.438 #define SPDK_CONFIG_DEBUG 1 00:34:47.438 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:34:47.438 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:34:47.438 #define SPDK_CONFIG_DPDK_INC_DIR 00:34:47.438 #define SPDK_CONFIG_DPDK_LIB_DIR 00:34:47.438 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:34:47.438 #undef SPDK_CONFIG_DPDK_UADK 00:34:47.438 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:34:47.438 #define SPDK_CONFIG_EXAMPLES 1 00:34:47.438 #undef SPDK_CONFIG_FC 00:34:47.438 #define SPDK_CONFIG_FC_PATH 00:34:47.438 #define SPDK_CONFIG_FIO_PLUGIN 1 00:34:47.438 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:34:47.438 #undef SPDK_CONFIG_FUSE 00:34:47.438 #undef SPDK_CONFIG_FUZZER 00:34:47.438 #define SPDK_CONFIG_FUZZER_LIB 00:34:47.438 #undef SPDK_CONFIG_GOLANG 00:34:47.438 
#define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:34:47.438 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:34:47.438 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:34:47.438 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:34:47.438 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:34:47.438 #undef SPDK_CONFIG_HAVE_LIBBSD 00:34:47.438 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:34:47.438 #define SPDK_CONFIG_IDXD 1 00:34:47.438 #define SPDK_CONFIG_IDXD_KERNEL 1 00:34:47.438 #undef SPDK_CONFIG_IPSEC_MB 00:34:47.438 #define SPDK_CONFIG_IPSEC_MB_DIR 00:34:47.438 #define SPDK_CONFIG_ISAL 1 00:34:47.438 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:34:47.438 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:34:47.438 #define SPDK_CONFIG_LIBDIR 00:34:47.438 #undef SPDK_CONFIG_LTO 00:34:47.438 #define SPDK_CONFIG_MAX_LCORES 128 00:34:47.438 #define SPDK_CONFIG_NVME_CUSE 1 00:34:47.438 #undef SPDK_CONFIG_OCF 00:34:47.438 #define SPDK_CONFIG_OCF_PATH 00:34:47.438 #define SPDK_CONFIG_OPENSSL_PATH 00:34:47.438 #undef SPDK_CONFIG_PGO_CAPTURE 00:34:47.438 #define SPDK_CONFIG_PGO_DIR 00:34:47.438 #undef SPDK_CONFIG_PGO_USE 00:34:47.438 #define SPDK_CONFIG_PREFIX /usr/local 00:34:47.438 #define SPDK_CONFIG_RAID5F 1 00:34:47.438 #undef SPDK_CONFIG_RBD 00:34:47.438 #define SPDK_CONFIG_RDMA 1 00:34:47.438 #define SPDK_CONFIG_RDMA_PROV verbs 00:34:47.438 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:34:47.438 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:34:47.438 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:34:47.438 #undef SPDK_CONFIG_SHARED 00:34:47.438 #undef SPDK_CONFIG_SMA 00:34:47.438 #define SPDK_CONFIG_TESTS 1 00:34:47.438 #undef SPDK_CONFIG_TSAN 00:34:47.438 #define SPDK_CONFIG_UBLK 1 00:34:47.438 #define SPDK_CONFIG_UBSAN 1 00:34:47.438 #define SPDK_CONFIG_UNIT_TESTS 1 00:34:47.438 #undef SPDK_CONFIG_URING 00:34:47.438 #define SPDK_CONFIG_URING_PATH 00:34:47.438 #undef SPDK_CONFIG_URING_ZNS 00:34:47.438 #undef SPDK_CONFIG_USDT 00:34:47.438 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:34:47.438 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:34:47.438 #undef SPDK_CONFIG_VFIO_USER 00:34:47.438 #define SPDK_CONFIG_VFIO_USER_DIR 00:34:47.438 #define SPDK_CONFIG_VHOST 1 00:34:47.438 #define SPDK_CONFIG_VIRTIO 1 00:34:47.438 #undef SPDK_CONFIG_VTUNE 00:34:47.438 #define SPDK_CONFIG_VTUNE_DIR 00:34:47.438 #define SPDK_CONFIG_WERROR 1 00:34:47.438 #define SPDK_CONFIG_WPDK_DIR 00:34:47.438 #undef SPDK_CONFIG_XNVME 00:34:47.438 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:34:47.438 00:18:43 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:34:47.438 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:47.438 00:18:43 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.438 00:18:43 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.438 00:18:43 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.438 00:18:43 reactor_set_interrupt -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:47.438 00:18:43 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:47.438 00:18:43 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:47.438 00:18:43 reactor_set_interrupt -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:47.438 00:18:43 reactor_set_interrupt -- paths/export.sh@6 -- # export PATH 00:34:47.438 00:18:43 reactor_set_interrupt -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:47.438 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:34:47.438 00:18:43 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:34:47.438 00:18:43 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:34:47.438 00:18:43 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:34:47.438 00:18:43 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:34:47.438 00:18:43 reactor_set_interrupt -- pm/common@7 -- # 
_pmrootdir=/home/vagrant/spdk_repo/spdk 00:34:47.438 00:18:43 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:34:47.438 00:18:43 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:34:47.438 00:18:43 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:34:47.439 00:18:43 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:34:47.439 00:18:43 
reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:34:47.439 
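The alternating ': 0' / 'export SPDK_TEST_*' pairs traced through autotest_common.sh here are the bash default-assignment idiom: each flag keeps whatever value autorun-spdk.conf injected and falls back to a default otherwise. Per flag it amounts to roughly this (pattern inferred from the trace, not quoted from the script):

    : "${SPDK_TEST_NVME:=0}"   # keep 1 if autorun-spdk.conf set it, else default to 0
    export SPDK_TEST_NVME

which is why SPDK_TEST_UNITTEST, SPDK_TEST_NVME, SPDK_RUN_ASAN and friends trace as ': 1' while most other flags trace as ': 0'.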
00:18:43 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 1 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export 
SPDK_TEST_SCHEDULER 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:34:47.439 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@202 -- # cat 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:47.440 00:18:43 reactor_set_interrupt -- 
common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export QEMU_BIN= 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@255 -- # QEMU_BIN= 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@258 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export valgrind= 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@265 -- # valgrind= 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@271 -- # uname -s 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@281 -- # MAKE=make 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j10 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@301 -- # TEST_MODE= 
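The sanitizer environment assembled around autotest_common.sh@195-240 condenses to the sketch below; the option strings are verbatim from the trace, while the redirection into the suppression file is an assumption (the cat at @202 suggests the file may also be seeded from a caller-provided list):

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"   # suppress known libfuse3 leak reports
    export LSAN_OPTIONS=suppressions=$asan_suppression_file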
00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@320 -- # [[ -z 115030 ]] 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@320 -- # kill -0 115030 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local mount target_dir 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.K5XkSj 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@357 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.K5XkSj/tests/interrupt /tmp/spdk.K5XkSj 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@329 -- # df -T 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=1249312768 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=1254027264 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4714496 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda1 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=9870053376 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=19681529856 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=9794699264 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.440 00:18:43 reactor_set_interrupt -- 
common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=6266740736 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=6270115840 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=3375104 00:34:47.440 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=5242880 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=5242880 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda16 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=777306112 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=923156480 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=81207296 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda15 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=vfat 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=103000064 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=109395968 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=6395904 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=1254010880 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=1254023168 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=12288 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=fuse.sshfs 00:34:47.441 00:18:43 reactor_set_interrupt -- 
common/autotest_common.sh@364 -- # avails["$mount"]=98557538304 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=105088212992 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=1145241600 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:34:47.441 * Looking for test storage... 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@370 -- # local target_space new_size 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@374 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mount=/ 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@376 -- # target_space=9870053376 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ ext4 == tmpfs ]] 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ ext4 == ramfs ]] 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@383 -- # new_size=12009291776 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:34:47.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@391 -- # return 0 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:34:47.441 
00:18:43 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=115071 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:47.441 00:18:43 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 115071 /var/tmp/spdk.sock 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 115071 ']' 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:47.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:47.441 00:18:43 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:47.699 [2024-07-25 00:18:43.335025] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:34:47.699 [2024-07-25 00:18:43.335370] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115071 ] 00:34:47.699 [2024-07-25 00:18:43.503674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:47.957 [2024-07-25 00:18:43.658921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.957 [2024-07-25 00:18:43.659017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.957 [2024-07-25 00:18:43.659035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:48.215 [2024-07-25 00:18:43.880165] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:48.472 00:18:44 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:48.472 00:18:44 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:48.472 00:18:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:34:48.472 00:18:44 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:48.730 Malloc0 00:34:48.730 Malloc1 00:34:48.730 Malloc2 00:34:48.730 00:18:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:34:48.730 00:18:44 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:34:48.730 00:18:44 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:48.730 00:18:44 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:34:48.988 5000+0 records in 00:34:48.988 5000+0 records out 00:34:48.988 10240000 bytes (10 MB, 9.8 MiB) copied, 0.019361 s, 529 MB/s 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:34:48.988 AIO0 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 115071 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 115071 without_thd 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=115071 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:48.988 00:18:44 
reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:34:48.988 00:18:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:49.247 00:18:45 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:34:49.247 00:18:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:34:49.247 00:18:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:34:49.247 00:18:45 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:34:49.247 00:18:45 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:49.247 00:18:45 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:34:49.247 00:18:45 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:49.247 00:18:45 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:34:49.247 00:18:45 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:34:49.505 spdk_thread ids are 1 on reactor0. 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 
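Before the reactors are exercised, the trace above provisions block devices: three RAM-backed Malloc bdevs and an AIO bdev over a 10 MB file written with dd. A sketch of that provisioning follows; the dd and bdev_aio_create arguments are verbatim from the trace, but the Malloc sizes (64 MiB, 512 B blocks) are assumptions, since the trace only shows the resulting names.

spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py"

# RAM-backed bdevs (names match the trace; sizes assumed).
for name in Malloc0 Malloc1 Malloc2; do
    "$rpc" bdev_malloc_create -b "$name" 64 512
done

# 10 MB backing file for the AIO bdev, then register it (verbatim from the trace).
dd if=/dev/zero of="$spdk/test/interrupt/aiofile" bs=2048 count=5000
"$rpc" bdev_aio_create "$spdk/test/interrupt/aiofile" AIO0 2048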
00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 115071 0 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 115071 0 idle 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115071 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115071 -w 256 00:34:49.505 00:18:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115071 root 20 0 20.1t 153984 32896 S 0.0 1.3 0:00.60 reactor_0' 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115071 root 20 0 20.1t 153984 32896 S 0.0 1.3 0:00.60 reactor_0 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 115071 1 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 115071 1 idle 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115071 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115071 -w 256 00:34:49.764 00:18:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:34:50.022 
00:18:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115074 root 20 0 20.1t 153984 32896 S 0.0 1.3 0:00.00 reactor_1' 00:34:50.022 00:18:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115074 root 20 0 20.1t 153984 32896 S 0.0 1.3 0:00.00 reactor_1 00:34:50.022 00:18:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:50.022 00:18:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:50.022 00:18:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:50.022 00:18:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:50.022 00:18:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:50.022 00:18:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:50.022 00:18:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:50.022 00:18:45 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 115071 2 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 115071 2 idle 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115071 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115071 -w 256 00:34:50.023 00:18:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115075 root 20 0 20.1t 153984 32896 S 0.0 1.3 0:00.00 reactor_2' 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115075 root 20 0 20.1t 153984 32896 S 0.0 1.3 0:00.00 reactor_2 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:34:50.281 00:18:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:34:50.281 00:18:46 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:34:50.539 [2024-07-25 00:18:46.260718] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:50.539 00:18:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:34:50.797 [2024-07-25 00:18:46.520387] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:34:50.797 [2024-07-25 00:18:46.521467] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:50.797 00:18:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:34:51.055 [2024-07-25 00:18:46.728278] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:34:51.055 [2024-07-25 00:18:46.729413] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 115071 0 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 115071 0 busy 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115071 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115071 -w 256 00:34:51.055 00:18:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115071 root 20 0 20.1t 157312 32896 R 99.9 1.3 0:01.04 reactor_0' 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115071 root 20 0 20.1t 157312 32896 R 99.9 1.3 0:01.04 reactor_0 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:51.313 
00:18:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 115071 2 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 115071 2 busy 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115071 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115071 -w 256 00:34:51.313 00:18:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115075 root 20 0 20.1t 157312 32896 R 90.9 1.3 0:00.44 reactor_2' 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115075 root 20 0 20.1t 157312 32896 R 90.9 1.3 0:00.44 reactor_2 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=90.9 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=90 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 90 -lt 70 ]] 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:34:51.571 [2024-07-25 00:18:47.384287] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
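Every reactor_is_busy / reactor_is_idle assertion above reduces to one sample of thread-level top output: find the reactor_<idx> row, read the %CPU column, and compare against a threshold (the trace fails "busy" below 70% and "idle" above 30%). A condensed sketch of that probe; the helper name reactor_cpu_rate is mine, and the real helper retries the sample up to ten times (the j=10 loop above) before giving up.

# %CPU of the reactor_<idx> thread inside process <pid>, from one top sample.
reactor_cpu_rate() {
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" |
        sed -e 's/^\s*//g' | awk '{print $9}'
}

rate=$(reactor_cpu_rate 115071 2)   # e.g. 90.9 while reactor 2 is polling
rate=${rate%.*}                     # integer compare, as in the trace
[[ $rate -ge 70 ]] && echo "reactor_2 is busy"
[[ $rate -le 30 ]] && echo "reactor_2 is idle"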
00:34:51.571 [2024-07-25 00:18:47.385113] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 115071 2 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 115071 2 idle 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115071 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:51.571 00:18:47 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:51.572 00:18:47 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:51.572 00:18:47 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:51.572 00:18:47 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:51.572 00:18:47 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:51.572 00:18:47 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:51.572 00:18:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115071 -w 256 00:34:51.572 00:18:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:51.830 00:18:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115075 root 20 0 20.1t 157440 32896 S 0.0 1.3 0:00.64 reactor_2' 00:34:51.830 00:18:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115075 root 20 0 20.1t 157440 32896 S 0.0 1.3 0:00.64 reactor_2 00:34:51.830 00:18:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:51.830 00:18:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:51.830 00:18:47 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:51.830 00:18:47 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:51.830 00:18:47 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:51.830 00:18:47 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:51.830 00:18:47 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:51.830 00:18:47 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:51.830 00:18:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:34:52.089 [2024-07-25 00:18:47.816239] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:34:52.089 [2024-07-25 00:18:47.816947] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:52.089 00:18:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:34:52.089 00:18:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:34:52.089 00:18:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:34:52.348 [2024-07-25 00:18:48.072843] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
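All of the mode flips in this test go through a single RPC that the interrupt_tgt example registers via an rpc.py plugin: -d drops the reactor to poll mode and omitting it restores interrupt mode, exactly as the paired NOTICE lines above report. Minimal usage, with the commands copied from the trace; this assumes PYTHONPATH already contains examples/interrupt_tgt so --plugin can resolve, which the run set up earlier.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # reactor 2 -> poll mode
"$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2      # reactor 2 -> interrupt mode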
00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 115071 0 00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 115071 0 idle 00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115071 00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115071 -w 256 00:34:52.348 00:18:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:52.606 00:18:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115071 root 20 0 20.1t 157568 32896 S 0.0 1.3 0:01.91 reactor_0' 00:34:52.606 00:18:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115071 root 20 0 20.1t 157568 32896 S 0.0 1.3 0:01.91 reactor_0 00:34:52.606 00:18:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:52.606 00:18:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:52.606 00:18:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:52.606 00:18:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:52.606 00:18:48 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:52.607 00:18:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:52.607 00:18:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:52.607 00:18:48 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:52.607 00:18:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:34:52.607 00:18:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:34:52.607 00:18:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:34:52.607 00:18:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 115071 00:34:52.607 00:18:48 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 115071 ']' 00:34:52.607 00:18:48 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 115071 00:34:52.607 00:18:48 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:34:52.607 00:18:48 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:52.607 00:18:48 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115071 00:34:52.607 killing process with pid 115071 00:34:52.607 00:18:48 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:52.607 00:18:48 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:52.607 00:18:48 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115071' 00:34:52.607 00:18:48 reactor_set_interrupt -- 
common/autotest_common.sh@969 -- # kill 115071 00:34:52.607 00:18:48 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 115071 00:34:53.985 00:18:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:34:53.985 00:18:49 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:34:53.985 00:18:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:34:53.985 00:18:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.985 00:18:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:53.985 00:18:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=115213 00:34:53.985 00:18:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:53.985 00:18:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:53.985 00:18:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 115213 /var/tmp/spdk.sock 00:34:53.985 00:18:49 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 115213 ']' 00:34:53.985 00:18:49 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.985 00:18:49 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:53.985 00:18:49 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.985 00:18:49 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:53.985 00:18:49 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:53.985 [2024-07-25 00:18:49.493012] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:34:53.985 [2024-07-25 00:18:49.493366] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115213 ] 00:34:53.985 [2024-07-25 00:18:49.665011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:53.985 [2024-07-25 00:18:49.815967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.985 [2024-07-25 00:18:49.817092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:53.985 [2024-07-25 00:18:49.817094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.285 [2024-07-25 00:18:50.037543] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
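Between the two phases the target is torn down by the killprocess helper traced above (a kill -0 liveness check, a guard that the process is not sudo, then kill and wait) and the AIO backing file is removed before a fresh interrupt_tgt comes up on the same socket. A condensed sketch following the trace; the error handling is simplified, and the relaunch line copies the phase-two command verbatim.

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                         # still alive?
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                        # reap and propagate exit code
}

killprocess 115071
rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
/home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt \
    -m 0x07 -r /var/tmp/spdk.sock -E -g &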
00:34:54.875 00:18:50 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:54.875 00:18:50 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:54.875 00:18:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:34:54.875 00:18:50 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:55.134 Malloc0 00:34:55.134 Malloc1 00:34:55.134 Malloc2 00:34:55.134 00:18:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:34:55.134 00:18:50 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:34:55.134 00:18:50 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:55.134 00:18:50 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:34:55.134 5000+0 records in 00:34:55.134 5000+0 records out 00:34:55.134 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0220139 s, 465 MB/s 00:34:55.134 00:18:50 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:34:55.392 AIO0 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 115213 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 115213 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=115213 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:34:55.392 00:18:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:55.651 00:18:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:34:55.651 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:34:55.651 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:34:55.651 00:18:51 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:34:55.651 00:18:51 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:55.651 00:18:51 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:34:55.651 00:18:51 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:55.651 00:18:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:55.651 00:18:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:34:55.910 spdk_thread ids are 1 on reactor0. 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 115213 0 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 115213 0 idle 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115213 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115213 -w 256 00:34:55.910 00:18:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115213 root 20 0 20.1t 154112 32896 S 0.0 1.3 0:00.60 reactor_0' 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115213 root 20 0 20.1t 154112 32896 S 0.0 1.3 0:00.60 reactor_0 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 115213 1 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 115213 1 idle 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115213 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle 
!= \b\u\s\y ]] 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115213 -w 256 00:34:56.170 00:18:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115217 root 20 0 20.1t 154112 32896 S 0.0 1.3 0:00.00 reactor_1' 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115217 root 20 0 20.1t 154112 32896 S 0.0 1.3 0:00.00 reactor_1 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 115213 2 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 115213 2 idle 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115213 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115213 -w 256 00:34:56.170 00:18:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:56.429 00:18:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115218 root 20 0 20.1t 154112 32896 S 0.0 1.3 0:00.00 reactor_2' 00:34:56.429 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115218 root 20 0 20.1t 154112 32896 S 0.0 1.3 0:00.00 reactor_2 00:34:56.429 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:56.429 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:56.429 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:56.429 00:18:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:56.429 00:18:52 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:56.429 00:18:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:56.429 00:18:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:56.429 00:18:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:56.429 00:18:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:34:56.429 00:18:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:34:56.688 [2024-07-25 00:18:52.503503] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:34:56.688 [2024-07-25 00:18:52.503776] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:34:56.688 [2024-07-25 00:18:52.505027] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:56.688 00:18:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:34:56.946 [2024-07-25 00:18:52.759327] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:34:56.946 [2024-07-25 00:18:52.760259] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 115213 0 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 115213 0 busy 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115213 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115213 -w 256 00:34:56.946 00:18:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115213 root 20 0 20.1t 157440 32896 R 99.9 1.3 0:01.10 reactor_0' 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115213 root 20 0 20.1t 157440 32896 R 99.9 1.3 0:01.10 reactor_0 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:57.205 
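The two phases differ only in where app_thread lives while reactor 0 is flipped. Phase one resolved the thread ids scheduled on reactor 0 (thread_get_stats filtered through jq, as traced earlier) and parked them on reactor 1 first, so only the reactor changed mode; this phase leaves app_thread in place, which is why the NOTICE above shows the thread itself dropping to poll mode. A sketch of the phase-one parking step, with the jq filter and cpumask calls taken verbatim from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Ids of spdk_threads whose cpumask matches the reactor's mask
# (the trace passes the bare digit, "1" for 0x1).
reactor_get_thread_ids() {
    local mask=$1
    "$rpc" thread_get_stats |
        jq --arg reactor_cpumask "$mask" \
           '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
}

thd0_ids=($(reactor_get_thread_ids 1))              # app_thread -> id 1
for i in "${thd0_ids[@]}"; do
    "$rpc" thread_set_cpumask -i "$i" -m 0x2        # park on reactor 1
done
# ... toggle reactor 0 ...
for i in "${thd0_ids[@]}"; do
    "$rpc" thread_set_cpumask -i "$i" -m 0x1        # move back
done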
00:18:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 115213 2 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 115213 2 busy 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115213 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:57.205 00:18:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:57.205 00:18:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:57.205 00:18:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:57.205 00:18:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115213 -w 256 00:34:57.205 00:18:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:57.464 00:18:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115218 root 20 0 20.1t 157440 32896 R 90.9 1.3 0:00.45 reactor_2' 00:34:57.464 00:18:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115218 root 20 0 20.1t 157440 32896 R 90.9 1.3 0:00.45 reactor_2 00:34:57.464 00:18:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:57.464 00:18:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:57.464 00:18:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=90.9 00:34:57.464 00:18:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=90 00:34:57.464 00:18:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:57.464 00:18:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 90 -lt 70 ]] 00:34:57.464 00:18:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:57.464 00:18:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:57.464 00:18:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:34:57.721 [2024-07-25 00:18:53.455558] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
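The busy check that keeps repeating in this stretch of the trace reduces to a few lines of shell: take one batch snapshot of the target PID's threads with top, pick out the reactor thread, and read its %CPU from field 9. A minimal sketch, reassembled from the common.sh xtrace above (reactor_probe is a name invented for this condensation; the real helper also retries the sample, per the (( j = 10 )) loop, and the 70%/30% thresholds are the ones visible in the log):

    # Decide whether reactor_<idx> of <pid> is busy (>= 70% CPU) or idle
    # (<= 30% CPU) from a single "top -bHn 1" sample of its threads.
    reactor_probe() {   # usage: reactor_probe <pid> <reactor index> <busy|idle>
        local pid=$1 idx=$2 state=$3 cpu_rate
        cpu_rate=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" |
            sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}             # truncate: 99.9 -> 99, 0.0 -> 0
        if [[ $state == busy ]]; then
            [[ $cpu_rate -lt 70 ]] && return 1
        else
            [[ $cpu_rate -gt 30 ]] && return 1
        fi
        return 0
    }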
00:34:57.721 [2024-07-25 00:18:53.456002] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 115213 2 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 115213 2 idle 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115213 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115213 -w 256 00:34:57.721 00:18:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:57.979 00:18:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115218 root 20 0 20.1t 157440 32896 S 0.0 1.3 0:00.69 reactor_2' 00:34:57.979 00:18:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115218 root 20 0 20.1t 157440 32896 S 0.0 1.3 0:00.69 reactor_2 00:34:57.979 00:18:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:57.979 00:18:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:57.979 00:18:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:57.979 00:18:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:57.979 00:18:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:57.979 00:18:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:57.979 00:18:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:57.979 00:18:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:57.979 00:18:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:34:58.236 [2024-07-25 00:18:53.935616] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:34:58.236 [2024-07-25 00:18:53.936501] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
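Stripped of the probing, the toggle this test exercises is short: push reactors 0 and 2 out of interrupt mode into poll mode, confirm both spin near 100% CPU, then switch each back and confirm it goes quiet. Condensed (same rpc.py plugin and flags as in the trace; 115213 is this run's target PID, and reactor_probe is the sketch above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=115213
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d  # -d: leave intr mode, start polling
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
    reactor_probe "$pid" 0 busy   # a polling reactor spins, so ~100% CPU
    reactor_probe "$pid" 2 busy
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2     # back to interrupt mode
    reactor_probe "$pid" 2 idle
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0
    reactor_probe "$pid" 0 idle

Flipping reactor 0 is also what moves app_thread between poll and intr mode, which is why the thread.c notice rides along with that reactor's RPC output.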
00:34:58.236 [2024-07-25 00:18:53.936611] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:58.236 00:18:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:34:58.236 00:18:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 115213 0 00:34:58.236 00:18:53 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 115213 0 idle 00:34:58.237 00:18:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=115213 00:34:58.237 00:18:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:58.237 00:18:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:58.237 00:18:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:58.237 00:18:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:58.237 00:18:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:58.237 00:18:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:58.237 00:18:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:58.237 00:18:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 115213 -w 256 00:34:58.237 00:18:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 115213 root 20 0 20.1t 157440 32896 S 0.0 1.3 0:02.05 reactor_0' 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 115213 root 20 0 20.1t 157440 32896 S 0.0 1.3 0:02.05 reactor_0 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:34:58.495 00:18:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 115213 00:34:58.495 00:18:54 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 115213 ']' 00:34:58.495 00:18:54 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 115213 00:34:58.495 00:18:54 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:34:58.495 00:18:54 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:58.495 00:18:54 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115213 00:34:58.495 00:18:54 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:58.495 00:18:54 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:34:58.495 killing process with pid 115213 00:34:58.495 00:18:54 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115213' 00:34:58.495 00:18:54 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 115213 00:34:58.495 00:18:54 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 115213 00:34:59.430 00:18:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:34:59.430 00:18:55 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:34:59.690 ************************************ 00:34:59.690 END TEST reactor_set_interrupt 00:34:59.690 ************************************ 00:34:59.690 00:34:59.690 real 0m12.258s 00:34:59.690 user 0m12.105s 00:34:59.690 sys 0m1.595s 00:34:59.690 00:18:55 reactor_set_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:59.690 00:18:55 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:59.690 00:18:55 -- spdk/autotest.sh@198 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:59.690 00:18:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:59.690 00:18:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:59.690 00:18:55 -- common/autotest_common.sh@10 -- # set +x 00:34:59.690 ************************************ 00:34:59.690 START TEST reap_unregistered_poller 00:34:59.690 ************************************ 00:34:59.690 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:59.690 * Looking for test storage... 00:34:59.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:34:59.690 00:18:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:34:59.690 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:59.690 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:34:59.690 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:34:59.690 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
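The teardown just before the END TEST banner went through the killprocess helper; its traced steps reconstruct to roughly the following (a sketch of the autotest_common.sh flow visible in the log; the sudo comparison is shown failing above, so that branch is elided here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1      # refuse an empty pid
        kill -0 "$pid" || return 1     # bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then
            # identify what is about to be killed; the real helper
            # special-cases process_name = sudo here
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                    # reap it before the test exits
    }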
00:34:59.690 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:59.690 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:34:59.690 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:34:59.690 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:34:59.690 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:34:59.690 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:34:59.690 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:34:59.690 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:34:59.690 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:34:59.690 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 
00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:34:59.690 00:18:55 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:34:59.691 00:18:55 
reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:34:59.691 00:18:55 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:34:59.691 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:34:59.691 00:18:55 
reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:34:59.691 #define SPDK_CONFIG_H 00:34:59.691 #define SPDK_CONFIG_APPS 1 00:34:59.691 #define SPDK_CONFIG_ARCH native 00:34:59.691 #define SPDK_CONFIG_ASAN 1 00:34:59.691 #undef SPDK_CONFIG_AVAHI 00:34:59.691 #undef SPDK_CONFIG_CET 00:34:59.691 #define SPDK_CONFIG_COVERAGE 1 00:34:59.691 #define SPDK_CONFIG_CROSS_PREFIX 00:34:59.691 #undef SPDK_CONFIG_CRYPTO 00:34:59.691 #undef SPDK_CONFIG_CRYPTO_MLX5 00:34:59.691 #undef SPDK_CONFIG_CUSTOMOCF 00:34:59.691 #undef SPDK_CONFIG_DAOS 00:34:59.691 #define SPDK_CONFIG_DAOS_DIR 00:34:59.691 #define SPDK_CONFIG_DEBUG 1 00:34:59.691 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:34:59.691 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:34:59.691 #define SPDK_CONFIG_DPDK_INC_DIR 00:34:59.691 #define SPDK_CONFIG_DPDK_LIB_DIR 00:34:59.691 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:34:59.691 #undef SPDK_CONFIG_DPDK_UADK 00:34:59.691 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:34:59.691 #define SPDK_CONFIG_EXAMPLES 1 00:34:59.691 #undef SPDK_CONFIG_FC 00:34:59.691 #define SPDK_CONFIG_FC_PATH 00:34:59.691 #define SPDK_CONFIG_FIO_PLUGIN 1 00:34:59.691 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:34:59.691 #undef SPDK_CONFIG_FUSE 00:34:59.691 #undef SPDK_CONFIG_FUZZER 00:34:59.691 #define SPDK_CONFIG_FUZZER_LIB 00:34:59.691 #undef SPDK_CONFIG_GOLANG 00:34:59.691 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:34:59.691 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:34:59.691 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:34:59.691 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:34:59.691 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:34:59.691 #undef SPDK_CONFIG_HAVE_LIBBSD 00:34:59.691 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:34:59.691 #define SPDK_CONFIG_IDXD 1 00:34:59.691 #define SPDK_CONFIG_IDXD_KERNEL 1 00:34:59.691 #undef SPDK_CONFIG_IPSEC_MB 00:34:59.691 #define SPDK_CONFIG_IPSEC_MB_DIR 00:34:59.691 #define SPDK_CONFIG_ISAL 1 00:34:59.691 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:34:59.691 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:34:59.691 #define SPDK_CONFIG_LIBDIR 00:34:59.691 #undef SPDK_CONFIG_LTO 00:34:59.691 #define SPDK_CONFIG_MAX_LCORES 128 00:34:59.691 #define SPDK_CONFIG_NVME_CUSE 1 00:34:59.691 #undef SPDK_CONFIG_OCF 00:34:59.691 #define SPDK_CONFIG_OCF_PATH 00:34:59.691 #define SPDK_CONFIG_OPENSSL_PATH 00:34:59.691 #undef SPDK_CONFIG_PGO_CAPTURE 00:34:59.691 #define SPDK_CONFIG_PGO_DIR 00:34:59.691 #undef SPDK_CONFIG_PGO_USE 00:34:59.691 #define SPDK_CONFIG_PREFIX /usr/local 00:34:59.691 #define SPDK_CONFIG_RAID5F 1 00:34:59.691 #undef SPDK_CONFIG_RBD 00:34:59.691 #define SPDK_CONFIG_RDMA 1 00:34:59.691 #define 
SPDK_CONFIG_RDMA_PROV verbs 00:34:59.691 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:34:59.691 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:34:59.691 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:34:59.691 #undef SPDK_CONFIG_SHARED 00:34:59.691 #undef SPDK_CONFIG_SMA 00:34:59.691 #define SPDK_CONFIG_TESTS 1 00:34:59.691 #undef SPDK_CONFIG_TSAN 00:34:59.691 #define SPDK_CONFIG_UBLK 1 00:34:59.691 #define SPDK_CONFIG_UBSAN 1 00:34:59.691 #define SPDK_CONFIG_UNIT_TESTS 1 00:34:59.691 #undef SPDK_CONFIG_URING 00:34:59.691 #define SPDK_CONFIG_URING_PATH 00:34:59.691 #undef SPDK_CONFIG_URING_ZNS 00:34:59.691 #undef SPDK_CONFIG_USDT 00:34:59.691 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:34:59.691 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:34:59.691 #undef SPDK_CONFIG_VFIO_USER 00:34:59.691 #define SPDK_CONFIG_VFIO_USER_DIR 00:34:59.691 #define SPDK_CONFIG_VHOST 1 00:34:59.691 #define SPDK_CONFIG_VIRTIO 1 00:34:59.691 #undef SPDK_CONFIG_VTUNE 00:34:59.691 #define SPDK_CONFIG_VTUNE_DIR 00:34:59.691 #define SPDK_CONFIG_WERROR 1 00:34:59.691 #define SPDK_CONFIG_WPDK_DIR 00:34:59.691 #undef SPDK_CONFIG_XNVME 00:34:59.691 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:34:59.691 00:18:55 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:34:59.691 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:59.691 00:18:55 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.691 00:18:55 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.691 00:18:55 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.691 00:18:55 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:59.691 00:18:55 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:59.691 00:18:55 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:59.691 00:18:55 reap_unregistered_poller -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:59.692 00:18:55 reap_unregistered_poller -- paths/export.sh@6 -- # export PATH 00:34:59.692 00:18:55 reap_unregistered_poller -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 
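A note on the verbatim config.h listing a little further up: applications.sh reads the generated header into a [[ ]] pattern match to decide whether this is a debug build, and because the file's whole contents become an operand of [[, xtrace echoes the entire header into the log. In essence (the repo path is this run's):

    spdk_root=/home/vagrant/spdk_repo/spdk
    if [[ -e $spdk_root/include/spdk/config.h ]] &&
       [[ $(< "$spdk_root/include/spdk/config.h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build: the SPDK_AUTOTEST_DEBUG_APPS branch becomes reachable
    fi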
00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:34:59.692 00:18:55 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@93 -- # 
export SPDK_TEST_NVMF 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- 
common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 1 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:34:59.692 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:34:59.693 00:18:55 
reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@187 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@202 -- # cat 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export QEMU_BIN= 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@255 -- # QEMU_BIN= 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:34:59.693 00:18:55 reap_unregistered_poller -- 
common/autotest_common.sh@258 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@258 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export valgrind= 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@265 -- # valgrind= 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@271 -- # uname -s 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@281 -- # MAKE=make 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j10 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@301 -- # TEST_MODE= 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@320 -- # [[ -z 115381 ]] 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@320 -- # kill -0 115381 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local mount target_dir 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.CjoNl2 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:34:59.693 00:18:55 reap_unregistered_poller -- 
common/autotest_common.sh@347 -- # [[ -n '' ]] 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@357 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.CjoNl2/tests/interrupt /tmp/spdk.CjoNl2 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@329 -- # df -T 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:59.693 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=1249312768 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=1254027264 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4714496 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda1 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=9870012416 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=19681529856 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=9794740224 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=6266740736 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=6270115840 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=3375104 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=5242880 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=5242880 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # 
mounts["$mount"]=/dev/vda16 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=777306112 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=923156480 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=81207296 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda15 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=vfat 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=103000064 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=109395968 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=6395904 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=1254010880 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=1254023168 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=12288 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=fuse.sshfs 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=98557427712 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=105088212992 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=1145352192 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:34:59.694 * Looking for test storage... 
00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@370 -- # local target_space new_size 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mount=/ 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@376 -- # target_space=9870012416 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:34:59.694 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ ext4 == tmpfs ]] 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ ext4 == ramfs ]] 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@383 -- # new_size=12009332736 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:34:59.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@391 -- # return 0 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=115422 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:59.953 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 115422 /var/tmp/spdk.sock 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@831 -- # '[' -z 115422 ']' 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.953 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:59.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.954 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
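waitforlisten blocks until the freshly launched interrupt_tgt accepts RPCs on /var/tmp/spdk.sock before the test proceeds. A simplified, hypothetical version of that loop (the real helper in autotest_common.sh does more, including the max_retries=100 bookkeeping visible above and retrying an actual RPC):

wait_for_rpc_socket() {
    local pid=$1 addr=${2:-/var/tmp/spdk.sock} retries=100
    while ((retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [[ -S $addr ]] && return 0               # socket file exists; app is up
        sleep 0.1
    done
    return 1
}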
00:34:59.954 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:59.954 00:18:55 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:59.954 00:18:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:59.954 [2024-07-25 00:18:55.609852] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:34:59.954 [2024-07-25 00:18:55.610015] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115422 ] 00:34:59.954 [2024-07-25 00:18:55.779835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:00.213 [2024-07-25 00:18:55.932089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.213 [2024-07-25 00:18:55.932227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.213 [2024-07-25 00:18:55.932258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:00.471 [2024-07-25 00:18:56.146261] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:00.729 00:18:56 reap_unregistered_poller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:00.729 00:18:56 reap_unregistered_poller -- common/autotest_common.sh@864 -- # return 0 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:35:00.988 00:18:56 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.988 00:18:56 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:35:00.988 00:18:56 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:35:00.988 "name": "app_thread", 00:35:00.988 "id": 1, 00:35:00.988 "active_pollers": [], 00:35:00.988 "timed_pollers": [ 00:35:00.988 { 00:35:00.988 "name": "rpc_subsystem_poll_servers", 00:35:00.988 "id": 1, 00:35:00.988 "state": "waiting", 00:35:00.988 "run_count": 0, 00:35:00.988 "busy_count": 0, 00:35:00.988 "period_ticks": 8800000 00:35:00.988 } 00:35:00.988 ], 00:35:00.988 "paused_pollers": [] 00:35:00.988 }' 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux 
!= \F\r\e\e\B\S\D ]] 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:35:00.988 5000+0 records in 00:35:00.988 5000+0 records out 00:35:00.988 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0180443 s, 567 MB/s 00:35:00.988 00:18:56 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:35:01.246 AIO0 00:35:01.246 00:18:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:35:01.505 "name": "app_thread", 00:35:01.505 "id": 1, 00:35:01.505 "active_pollers": [], 00:35:01.505 "timed_pollers": [ 00:35:01.505 { 00:35:01.505 "name": "rpc_subsystem_poll_servers", 00:35:01.505 "id": 1, 00:35:01.505 "state": "waiting", 00:35:01.505 "run_count": 0, 00:35:01.505 "busy_count": 0, 00:35:01.505 "period_ticks": 8800000 00:35:01.505 } 00:35:01.505 ], 00:35:01.505 "paused_pollers": [] 00:35:01.505 }' 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:35:01.505 00:18:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 115422 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@950 -- # '[' -z 115422 ']' 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@954 -- # kill -0 115422 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@955 -- # uname 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115422 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:01.505 killing process with pid 115422 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115422' 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@969 -- # kill 115422 00:35:01.505 00:18:57 reap_unregistered_poller -- common/autotest_common.sh@974 -- # wait 115422 00:35:02.883 00:18:58 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:35:02.883 00:18:58 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:35:02.883 00:35:02.883 real 0m3.025s 00:35:02.883 user 0m2.396s 00:35:02.883 sys 0m0.533s 00:35:02.883 00:18:58 reap_unregistered_poller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:02.883 00:18:58 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:35:02.883 ************************************ 00:35:02.883 END TEST reap_unregistered_poller 00:35:02.883 ************************************ 00:35:02.883 00:18:58 -- spdk/autotest.sh@202 -- # uname -s 00:35:02.883 00:18:58 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:35:02.883 00:18:58 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:35:02.883 00:18:58 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]] 00:35:02.883 00:18:58 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:35:02.883 00:18:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:02.883 00:18:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:02.883 00:18:58 -- common/autotest_common.sh@10 -- # set +x 00:35:02.883 ************************************ 00:35:02.883 START TEST spdk_dd 00:35:02.883 ************************************ 00:35:02.883 00:18:58 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:35:02.883 * Looking for test storage... 
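Each suite here is launched through run_test, which brackets the command with the START/END banners seen in this log and reports wall-clock time (the real/user/sys lines above). A reduced sketch of that wrapper, assuming the banner format shown in this run:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # suite failing non-zero fails run_test too
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}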
00:35:02.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:35:02.884 00:18:58 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:02.884 00:18:58 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.884 00:18:58 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.884 00:18:58 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.884 00:18:58 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:02.884 00:18:58 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:02.884 00:18:58 spdk_dd -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:02.884 00:18:58 spdk_dd -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:02.884 00:18:58 spdk_dd -- paths/export.sh@6 -- # export PATH 00:35:02.884 00:18:58 spdk_dd -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:02.884 00:18:58 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:03.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:35:03.143 0000:00:10.0 (1b36 0010): 
Already using the uio_pci_generic driver 00:35:03.711 00:18:59 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:35:03.711 00:18:59 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@230 -- # local class 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@232 -- # local progif 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@233 -- # class=01 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@15 -- # local i 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@24 -- # return 0 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:35:03.711 00:18:59 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:35:03.711 00:18:59 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@139 -- # local lib 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@143 -- # [[ 
libasan.so.8 == liburing.so.* ]] 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:35:03.711 * spdk_dd linked to liburing 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:35:03.711 00:18:59 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:35:03.712 00:18:59 spdk_dd -- 
common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 
00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:35:03.712 00:18:59 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=n 00:35:03.712 00:18:59 spdk_dd -- dd/common.sh@149 -- # [[ n != y ]] 00:35:03.712 00:18:59 spdk_dd -- dd/common.sh@150 -- # printf '* spdk_dd built with liburing, but no liburing support requested?\n' 00:35:03.712 * spdk_dd built with liburing, but no liburing support requested? 00:35:03.712 00:18:59 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:35:03.712 00:18:59 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:35:03.712 00:18:59 spdk_dd -- dd/common.sh@153 -- # return 0 00:35:03.712 00:18:59 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:35:03.712 00:18:59 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:35:03.712 00:18:59 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:03.712 00:18:59 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:03.712 00:18:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:35:03.712 ************************************ 00:35:03.712 START TEST spdk_dd_basic_rw 00:35:03.712 ************************************ 00:35:03.712 00:18:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:35:03.712 * Looking for test storage... 
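check_liburing, traced just above, decides at runtime whether the spdk_dd binary was actually linked against liburing by scanning its ELF NEEDED entries; CONFIG_URING=n only means no uring support was requested at configure time, hence the warning before forcing liburing_in_use=1. The probe boils down to this sketch (binary path taken from this run):

liburing_in_use=0
while read -r _ lib _; do
    [[ $lib == liburing.so.* ]] && liburing_in_use=1   # e.g. liburing.so.2 above
done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)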
00:35:03.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh [paths/export.sh@2-@5: four cumulative PATH prepends, identical in form to the spdk_dd invocation above; elided]
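For reference, the nvme_in_userspace enumeration traced earlier (scripts/common.sh@230-@242) resolves to a single lspci pipeline: class 01 (mass storage), subclass 08 (non-volatile memory), progif 02 (NVMe) are formatted with printf %02x and matched against lspci's machine-readable output. Reconstructed from that trace:

# prints the BDF of every NVMe function, e.g. 0000:00:10.0
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'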
00:35:03.712 00:18:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # export PATH 00:35:03.713 00:18:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:03.971 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:35:03.972 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:35:04.233 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events 
Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands 
-------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 113 Data Units Written: 7 Host Read Commands: 2438 Host Write Commands: 109 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 
Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:35:04.233 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ [spdk_nvme_identify output identical to the controller dump above; elided] =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:35:04.234 00:18:59
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:35:04.234 ************************************ 00:35:04.234 START TEST dd_bs_lt_native_bs 00:35:04.234 ************************************ 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:04.234 { 00:35:04.234 "subsystems": [ 00:35:04.234 { 00:35:04.234 "subsystem": "bdev", 00:35:04.234 "config": [ 00:35:04.234 { 00:35:04.234 "params": { 00:35:04.234 "trtype": "pcie", 00:35:04.234 "traddr": "0000:00:10.0", 00:35:04.234 "name": "Nvme0" 00:35:04.234 }, 00:35:04.234 "method": "bdev_nvme_attach_controller" 00:35:04.234 }, 00:35:04.234 { 00:35:04.234 "method": "bdev_wait_for_examine" 00:35:04.234 } 00:35:04.234 ] 00:35:04.234 } 00:35:04.234 ] 00:35:04.234 } 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:35:04.234 00:18:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:35:04.234 [2024-07-25 00:18:59.941576] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
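[editor's note] The dd/common.sh trace above derives the device's native block size by matching the identify output against the regex shown in the log (LBA Format #04: Data Size: *([0-9]+)) and lands on native_bs=4096. A minimal sketch of that extraction in bash; parse_native_bs is a hypothetical helper name, not SPDK's:

    # Pull the data size of the current LBA format out of identify text.
    parse_native_bs() {
      local output=$1
      local re_cur='Current LBA Format: *LBA Format #0*([0-9]+)'
      [[ $output =~ $re_cur ]] || return 1
      # Same shape as the regex visible in the trace above.
      local re_fmt="LBA Format #0*${BASH_REMATCH[1]}: Data Size: *([0-9]+)"
      [[ $output =~ $re_fmt ]] || return 1
      echo "${BASH_REMATCH[1]}"   # prints 4096 for the controller above
    }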
00:35:04.234 [2024-07-25 00:18:59.941750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115693 ] 00:35:04.493 [2024-07-25 00:19:00.118145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.493 [2024-07-25 00:19:00.353071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.061 [2024-07-25 00:19:00.669107] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:35:05.061 [2024-07-25 00:19:00.669192] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:05.320 [2024-07-25 00:19:01.082065] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:35:05.579 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:35:05.579 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:05.579 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:35:05.579 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:35:05.579 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:35:05.579 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:05.579 00:35:05.579 real 0m1.555s 00:35:05.579 user 0m1.265s 00:35:05.579 sys 0m0.210s 00:35:05.579 ************************************ 00:35:05.579 END TEST dd_bs_lt_native_bs 00:35:05.579 ************************************ 00:35:05.579 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:05.579 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:35:05.837 00:19:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:35:05.837 00:19:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:05.837 00:19:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:05.837 00:19:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:35:05.837 ************************************ 00:35:05.837 START TEST dd_rw 00:35:05.837 ************************************ 00:35:05.837 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in 
{0..2} 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:05.838 00:19:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:06.405 00:19:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:35:06.405 00:19:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:06.405 00:19:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:06.405 00:19:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:06.405 { 00:35:06.405 "subsystems": [ 00:35:06.405 { 00:35:06.405 "subsystem": "bdev", 00:35:06.405 "config": [ 00:35:06.405 { 00:35:06.405 "params": { 00:35:06.405 "trtype": "pcie", 00:35:06.405 "traddr": "0000:00:10.0", 00:35:06.405 "name": "Nvme0" 00:35:06.405 }, 00:35:06.405 "method": "bdev_nvme_attach_controller" 00:35:06.405 }, 00:35:06.405 { 00:35:06.405 "method": "bdev_wait_for_examine" 00:35:06.405 } 00:35:06.405 ] 00:35:06.405 } 00:35:06.405 ] 00:35:06.405 } 00:35:06.405 [2024-07-25 00:19:02.113309] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
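[editor's note] The dd_bs_lt_native_bs case that finished above is a negative test: spdk_dd must refuse a --bs smaller than the 4096-byte native block size ("--bs value cannot be less than input (1) neither output (4096) native block size"), and the harness's NOT wrapper inverts the exit status. A rough sketch of that shape, assuming $SPDK_DD points at build/bin/spdk_dd and $conf holds the bdev JSON from the log:

    # Passes only if spdk_dd rejects the undersized block size.
    if "$SPDK_DD" --if=<(head -c 2048 /dev/urandom) --ob=Nvme0n1 --bs=2048 \
         --json <(echo "$conf"); then
      echo "ERROR: spdk_dd accepted --bs below the native block size" >&2
      exit 1
    fi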
00:35:06.405 [2024-07-25 00:19:02.113684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115735 ] 00:35:06.664 [2024-07-25 00:19:02.286449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.664 [2024-07-25 00:19:02.436655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:07.860  Copying: 60/60 [kB] (average 19 MBps) 00:35:07.860 00:35:07.860 00:19:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:35:07.860 00:19:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:07.860 00:19:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:07.860 00:19:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:07.860 { 00:35:07.860 "subsystems": [ 00:35:07.860 { 00:35:07.860 "subsystem": "bdev", 00:35:07.860 "config": [ 00:35:07.860 { 00:35:07.860 "params": { 00:35:07.860 "trtype": "pcie", 00:35:07.860 "traddr": "0000:00:10.0", 00:35:07.860 "name": "Nvme0" 00:35:07.860 }, 00:35:07.860 "method": "bdev_nvme_attach_controller" 00:35:07.860 }, 00:35:07.860 { 00:35:07.860 "method": "bdev_wait_for_examine" 00:35:07.860 } 00:35:07.860 ] 00:35:07.860 } 00:35:07.860 ] 00:35:07.860 } 00:35:07.860 [2024-07-25 00:19:03.717778] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:07.860 [2024-07-25 00:19:03.717978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115760 ] 00:35:08.118 [2024-07-25 00:19:03.887209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.376 [2024-07-25 00:19:04.035144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.229  Copying: 60/60 [kB] (average 19 MBps) 00:35:09.229 00:35:09.486 00:19:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:09.486 00:19:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:35:09.486 00:19:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:09.486 00:19:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:09.486 00:19:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:35:09.486 00:19:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:09.486 00:19:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:09.486 00:19:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:09.486 00:19:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:09.486 00:19:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:09.486 00:19:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:09.486 { 00:35:09.486 "subsystems": [ 
00:35:09.486 { 00:35:09.486 "subsystem": "bdev", 00:35:09.486 "config": [ 00:35:09.486 { 00:35:09.486 "params": { 00:35:09.486 "trtype": "pcie", 00:35:09.486 "traddr": "0000:00:10.0", 00:35:09.486 "name": "Nvme0" 00:35:09.486 }, 00:35:09.486 "method": "bdev_nvme_attach_controller" 00:35:09.486 }, 00:35:09.486 { 00:35:09.486 "method": "bdev_wait_for_examine" 00:35:09.486 } 00:35:09.486 ] 00:35:09.486 } 00:35:09.486 ] 00:35:09.486 } 00:35:09.486 [2024-07-25 00:19:05.167273] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:09.486 [2024-07-25 00:19:05.167602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115780 ] 00:35:09.486 [2024-07-25 00:19:05.338860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.744 [2024-07-25 00:19:05.488930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.936  Copying: 1024/1024 [kB] (average 1000 MBps) 00:35:10.936 00:35:10.936 00:19:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:10.936 00:19:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:35:10.936 00:19:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:35:10.936 00:19:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:35:10.936 00:19:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:35:10.936 00:19:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:10.936 00:19:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:11.502 00:19:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:35:11.502 00:19:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:11.502 00:19:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:11.502 00:19:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:11.502 { 00:35:11.502 "subsystems": [ 00:35:11.502 { 00:35:11.502 "subsystem": "bdev", 00:35:11.502 "config": [ 00:35:11.502 { 00:35:11.502 "params": { 00:35:11.502 "trtype": "pcie", 00:35:11.502 "traddr": "0000:00:10.0", 00:35:11.502 "name": "Nvme0" 00:35:11.502 }, 00:35:11.502 "method": "bdev_nvme_attach_controller" 00:35:11.502 }, 00:35:11.502 { 00:35:11.502 "method": "bdev_wait_for_examine" 00:35:11.502 } 00:35:11.502 ] 00:35:11.502 } 00:35:11.502 ] 00:35:11.502 } 00:35:11.502 [2024-07-25 00:19:07.291062] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
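[editor's note] Each dd_rw iteration repeats the round trip visible above: write dd.dump0 to the bdev at a given block size and queue depth, read the same byte count back into dd.dump1, and compare. A condensed sketch with the exact flags from the log ($SPDK_DD and $conf as before, paths shortened):

    bs=4096 qd=1 count=15   # first iteration; later passes vary these
    "$SPDK_DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" \
        --json <(echo "$conf")
    "$SPDK_DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs="$bs" --qd="$qd" \
        --count="$count" --json <(echo "$conf")
    diff -q test/dd/dd.dump0 test/dd/dd.dump1   # silent when identical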
00:35:11.502 [2024-07-25 00:19:07.291235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115812 ] 00:35:11.760 [2024-07-25 00:19:07.462714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.760 [2024-07-25 00:19:07.618637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.891  Copying: 60/60 [kB] (average 58 MBps) 00:35:12.891 00:35:12.891 00:19:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:35:12.891 00:19:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:12.891 00:19:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:12.891 00:19:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:12.891 { 00:35:12.891 "subsystems": [ 00:35:12.891 { 00:35:12.891 "subsystem": "bdev", 00:35:12.891 "config": [ 00:35:12.891 { 00:35:12.891 "params": { 00:35:12.891 "trtype": "pcie", 00:35:12.891 "traddr": "0000:00:10.0", 00:35:12.891 "name": "Nvme0" 00:35:12.891 }, 00:35:12.891 "method": "bdev_nvme_attach_controller" 00:35:12.891 }, 00:35:12.891 { 00:35:12.891 "method": "bdev_wait_for_examine" 00:35:12.891 } 00:35:12.891 ] 00:35:12.891 } 00:35:12.891 ] 00:35:12.891 } 00:35:12.891 [2024-07-25 00:19:08.718187] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:12.891 [2024-07-25 00:19:08.718478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115831 ] 00:35:13.148 [2024-07-25 00:19:08.870886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.406 [2024-07-25 00:19:09.025948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.597  Copying: 60/60 [kB] (average 58 MBps) 00:35:14.597 00:35:14.597 00:19:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:14.597 00:19:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:35:14.597 00:19:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:14.597 00:19:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:14.597 00:19:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:35:14.597 00:19:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:14.597 00:19:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:14.597 00:19:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:14.597 00:19:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:14.597 00:19:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:14.597 00:19:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:14.597 { 00:35:14.597 "subsystems": [ 
00:35:14.597 { 00:35:14.597 "subsystem": "bdev", 00:35:14.597 "config": [ 00:35:14.597 { 00:35:14.597 "params": { 00:35:14.597 "trtype": "pcie", 00:35:14.597 "traddr": "0000:00:10.0", 00:35:14.597 "name": "Nvme0" 00:35:14.597 }, 00:35:14.597 "method": "bdev_nvme_attach_controller" 00:35:14.597 }, 00:35:14.597 { 00:35:14.597 "method": "bdev_wait_for_examine" 00:35:14.597 } 00:35:14.597 ] 00:35:14.597 } 00:35:14.597 ] 00:35:14.597 } 00:35:14.597 [2024-07-25 00:19:10.303387] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:14.597 [2024-07-25 00:19:10.303565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115851 ] 00:35:14.855 [2024-07-25 00:19:10.475434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.855 [2024-07-25 00:19:10.628954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.046  Copying: 1024/1024 [kB] (average 1000 MBps) 00:35:16.046 00:35:16.046 00:19:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:35:16.046 00:19:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:16.046 00:19:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:35:16.046 00:19:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:35:16.046 00:19:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:35:16.046 00:19:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:35:16.046 00:19:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:16.046 00:19:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:16.304 00:19:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:35:16.304 00:19:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:16.304 00:19:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:16.304 00:19:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:16.304 { 00:35:16.304 "subsystems": [ 00:35:16.304 { 00:35:16.304 "subsystem": "bdev", 00:35:16.304 "config": [ 00:35:16.304 { 00:35:16.304 "params": { 00:35:16.304 "trtype": "pcie", 00:35:16.304 "traddr": "0000:00:10.0", 00:35:16.304 "name": "Nvme0" 00:35:16.304 }, 00:35:16.304 "method": "bdev_nvme_attach_controller" 00:35:16.304 }, 00:35:16.304 { 00:35:16.304 "method": "bdev_wait_for_examine" 00:35:16.304 } 00:35:16.304 ] 00:35:16.304 } 00:35:16.304 ] 00:35:16.304 } 00:35:16.563 [2024-07-25 00:19:12.208749] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
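[editor's note] The block sizes being swept come from shifting the native size left by 0, 1, and 2 bits, and the count is scaled down per block size so each pass moves a comparable amount of data. Worked out from nothing beyond the trace itself:

    native_bs=4096
    qds=(1 64)
    bss=()
    for bs in {0..2}; do bss+=($((native_bs << bs))); done   # 4096 8192 16384
    # counts used above: 15*4096=61440, 7*8192=57344, 3*16384=49152 bytes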
00:35:16.563 [2024-07-25 00:19:12.208932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115881 ] 00:35:16.563 [2024-07-25 00:19:12.379216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.821 [2024-07-25 00:19:12.549476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:18.015  Copying: 56/56 [kB] (average 27 MBps) 00:35:18.015 00:35:18.015 00:19:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:35:18.015 00:19:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:18.015 00:19:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:18.015 00:19:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:18.015 { 00:35:18.015 "subsystems": [ 00:35:18.015 { 00:35:18.015 "subsystem": "bdev", 00:35:18.015 "config": [ 00:35:18.015 { 00:35:18.015 "params": { 00:35:18.015 "trtype": "pcie", 00:35:18.015 "traddr": "0000:00:10.0", 00:35:18.015 "name": "Nvme0" 00:35:18.015 }, 00:35:18.015 "method": "bdev_nvme_attach_controller" 00:35:18.015 }, 00:35:18.015 { 00:35:18.015 "method": "bdev_wait_for_examine" 00:35:18.015 } 00:35:18.015 ] 00:35:18.015 } 00:35:18.015 ] 00:35:18.015 } 00:35:18.015 [2024-07-25 00:19:13.822272] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:18.015 [2024-07-25 00:19:13.822442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115906 ] 00:35:18.274 [2024-07-25 00:19:13.992838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.274 [2024-07-25 00:19:14.142259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.409  Copying: 56/56 [kB] (average 27 MBps) 00:35:19.409 00:35:19.409 00:19:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:19.409 00:19:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:35:19.409 00:19:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:19.409 00:19:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:19.409 00:19:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:35:19.409 00:19:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:19.409 00:19:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:19.409 00:19:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:19.409 00:19:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:19.409 00:19:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:19.409 00:19:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:19.409 { 00:35:19.409 "subsystems": [ 
00:35:19.409 { 00:35:19.409 "subsystem": "bdev", 00:35:19.409 "config": [ 00:35:19.409 { 00:35:19.409 "params": { 00:35:19.409 "trtype": "pcie", 00:35:19.409 "traddr": "0000:00:10.0", 00:35:19.409 "name": "Nvme0" 00:35:19.409 }, 00:35:19.409 "method": "bdev_nvme_attach_controller" 00:35:19.409 }, 00:35:19.409 { 00:35:19.409 "method": "bdev_wait_for_examine" 00:35:19.409 } 00:35:19.409 ] 00:35:19.409 } 00:35:19.409 ] 00:35:19.409 } 00:35:19.409 [2024-07-25 00:19:15.244996] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:19.409 [2024-07-25 00:19:15.245125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115926 ] 00:35:19.668 [2024-07-25 00:19:15.399352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.927 [2024-07-25 00:19:15.547406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.121  Copying: 1024/1024 [kB] (average 1000 MBps) 00:35:21.121 00:35:21.121 00:19:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:21.121 00:19:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:35:21.121 00:19:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:35:21.121 00:19:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:35:21.121 00:19:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:35:21.121 00:19:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:21.121 00:19:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:21.380 00:19:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:35:21.380 00:19:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:21.380 00:19:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:21.380 00:19:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:21.639 { 00:35:21.639 "subsystems": [ 00:35:21.639 { 00:35:21.639 "subsystem": "bdev", 00:35:21.639 "config": [ 00:35:21.639 { 00:35:21.639 "params": { 00:35:21.639 "trtype": "pcie", 00:35:21.639 "traddr": "0000:00:10.0", 00:35:21.639 "name": "Nvme0" 00:35:21.639 }, 00:35:21.639 "method": "bdev_nvme_attach_controller" 00:35:21.639 }, 00:35:21.639 { 00:35:21.639 "method": "bdev_wait_for_examine" 00:35:21.639 } 00:35:21.639 ] 00:35:21.639 } 00:35:21.639 ] 00:35:21.639 } 00:35:21.639 [2024-07-25 00:19:17.299650] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
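[editor's note] Between iterations, clear_nvme resets the written region by pushing a single 1 MiB block of zeroes through the same bdev; that is the "Copying: 1024/1024 [kB]" line after each diff. A minimal sketch under the same $SPDK_DD/$conf assumptions:

    # Overwrite the first 1 MiB of the bdev with zeroes.
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --count=1 --ob=Nvme0n1 \
        --json <(echo "$conf")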
00:35:21.639 [2024-07-25 00:19:17.300092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115951 ] 00:35:21.639 [2024-07-25 00:19:17.471023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.897 [2024-07-25 00:19:17.622472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.091  Copying: 56/56 [kB] (average 54 MBps) 00:35:23.091 00:35:23.091 00:19:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:23.091 00:19:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:35:23.091 00:19:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:23.091 00:19:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:23.091 { 00:35:23.091 "subsystems": [ 00:35:23.091 { 00:35:23.091 "subsystem": "bdev", 00:35:23.091 "config": [ 00:35:23.091 { 00:35:23.091 "params": { 00:35:23.091 "trtype": "pcie", 00:35:23.091 "traddr": "0000:00:10.0", 00:35:23.091 "name": "Nvme0" 00:35:23.091 }, 00:35:23.091 "method": "bdev_nvme_attach_controller" 00:35:23.091 }, 00:35:23.091 { 00:35:23.091 "method": "bdev_wait_for_examine" 00:35:23.091 } 00:35:23.091 ] 00:35:23.091 } 00:35:23.091 ] 00:35:23.091 } 00:35:23.091 [2024-07-25 00:19:18.719662] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:23.091 [2024-07-25 00:19:18.720073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115980 ] 00:35:23.091 [2024-07-25 00:19:18.890588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.349 [2024-07-25 00:19:19.043473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.574  Copying: 56/56 [kB] (average 54 MBps) 00:35:24.574 00:35:24.574 00:19:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:24.574 00:19:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:35:24.574 00:19:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:24.574 00:19:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:24.574 00:19:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:35:24.575 00:19:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:24.575 00:19:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:24.575 00:19:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:24.575 00:19:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:24.575 00:19:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:24.575 00:19:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:24.575 { 00:35:24.575 "subsystems": [ 
00:35:24.575 { 00:35:24.575 "subsystem": "bdev", 00:35:24.575 "config": [ 00:35:24.575 { 00:35:24.575 "params": { 00:35:24.575 "trtype": "pcie", 00:35:24.575 "traddr": "0000:00:10.0", 00:35:24.575 "name": "Nvme0" 00:35:24.575 }, 00:35:24.575 "method": "bdev_nvme_attach_controller" 00:35:24.575 }, 00:35:24.575 { 00:35:24.575 "method": "bdev_wait_for_examine" 00:35:24.575 } 00:35:24.575 ] 00:35:24.575 } 00:35:24.575 ] 00:35:24.575 } 00:35:24.575 [2024-07-25 00:19:20.333057] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:24.575 [2024-07-25 00:19:20.333431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116000 ] 00:35:24.833 [2024-07-25 00:19:20.496648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.833 [2024-07-25 00:19:20.643603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.027  Copying: 1024/1024 [kB] (average 1000 MBps) 00:35:26.027 00:35:26.027 00:19:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:35:26.027 00:19:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:26.027 00:19:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:35:26.027 00:19:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:35:26.027 00:19:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:35:26.027 00:19:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:35:26.027 00:19:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:26.027 00:19:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:26.285 00:19:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:35:26.285 00:19:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:26.285 00:19:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:26.285 00:19:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:26.285 { 00:35:26.285 "subsystems": [ 00:35:26.285 { 00:35:26.285 "subsystem": "bdev", 00:35:26.285 "config": [ 00:35:26.285 { 00:35:26.285 "params": { 00:35:26.285 "trtype": "pcie", 00:35:26.285 "traddr": "0000:00:10.0", 00:35:26.285 "name": "Nvme0" 00:35:26.285 }, 00:35:26.285 "method": "bdev_nvme_attach_controller" 00:35:26.285 }, 00:35:26.285 { 00:35:26.285 "method": "bdev_wait_for_examine" 00:35:26.285 } 00:35:26.285 ] 00:35:26.285 } 00:35:26.285 ] 00:35:26.285 } 00:35:26.544 [2024-07-25 00:19:22.191400] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
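[editor's note] Every spdk_dd invocation above receives the same generated configuration over an anonymous file descriptor: a bdev subsystem that attaches the PCIe controller at 0000:00:10.0 as Nvme0 and then waits for bdev examine to finish. The JSON, reproduced from the log without the timestamp prefixes:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "trtype": "pcie",
                "traddr": "0000:00:10.0",
                "name": "Nvme0"
              },
              "method": "bdev_nvme_attach_controller"
            },
            {
              "method": "bdev_wait_for_examine"
            }
          ]
        }
      ]
    }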
00:35:26.544 [2024-07-25 00:19:22.191759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116026 ] 00:35:26.544 [2024-07-25 00:19:22.361147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.802 [2024-07-25 00:19:22.513006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.997  Copying: 48/48 [kB] (average 46 MBps) 00:35:27.997 00:35:27.997 00:19:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:35:27.997 00:19:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:27.997 00:19:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:27.997 00:19:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:27.997 { 00:35:27.997 "subsystems": [ 00:35:27.997 { 00:35:27.997 "subsystem": "bdev", 00:35:27.997 "config": [ 00:35:27.997 { 00:35:27.997 "params": { 00:35:27.997 "trtype": "pcie", 00:35:27.997 "traddr": "0000:00:10.0", 00:35:27.997 "name": "Nvme0" 00:35:27.997 }, 00:35:27.997 "method": "bdev_nvme_attach_controller" 00:35:27.997 }, 00:35:27.997 { 00:35:27.997 "method": "bdev_wait_for_examine" 00:35:27.997 } 00:35:27.997 ] 00:35:27.997 } 00:35:27.997 ] 00:35:27.997 } 00:35:27.997 [2024-07-25 00:19:23.775047] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:27.997 [2024-07-25 00:19:23.775845] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116051 ] 00:35:28.255 [2024-07-25 00:19:23.944253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.255 [2024-07-25 00:19:24.093219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.387  Copying: 48/48 [kB] (average 46 MBps) 00:35:29.387 00:35:29.387 00:19:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:29.387 00:19:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:35:29.387 00:19:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:29.387 00:19:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:29.387 00:19:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:35:29.387 00:19:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:29.387 00:19:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:29.387 00:19:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:29.387 00:19:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:29.387 00:19:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:29.387 00:19:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:29.387 { 00:35:29.387 "subsystems": [ 
00:35:29.387 { 00:35:29.387 "subsystem": "bdev", 00:35:29.387 "config": [ 00:35:29.387 { 00:35:29.387 "params": { 00:35:29.387 "trtype": "pcie", 00:35:29.387 "traddr": "0000:00:10.0", 00:35:29.387 "name": "Nvme0" 00:35:29.387 }, 00:35:29.387 "method": "bdev_nvme_attach_controller" 00:35:29.387 }, 00:35:29.387 { 00:35:29.387 "method": "bdev_wait_for_examine" 00:35:29.387 } 00:35:29.387 ] 00:35:29.387 } 00:35:29.387 ] 00:35:29.387 } 00:35:29.387 [2024-07-25 00:19:25.198586] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:29.387 [2024-07-25 00:19:25.198707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116071 ] 00:35:29.646 [2024-07-25 00:19:25.350615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.646 [2024-07-25 00:19:25.497330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.146  Copying: 1024/1024 [kB] (average 1000 MBps) 00:35:31.146 00:35:31.146 00:19:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:31.147 00:19:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:35:31.147 00:19:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:35:31.147 00:19:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:35:31.147 00:19:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:35:31.147 00:19:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:31.147 00:19:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:31.405 00:19:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:35:31.405 00:19:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:31.405 00:19:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:31.405 00:19:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:31.405 { 00:35:31.405 "subsystems": [ 00:35:31.405 { 00:35:31.405 "subsystem": "bdev", 00:35:31.405 "config": [ 00:35:31.405 { 00:35:31.405 "params": { 00:35:31.405 "trtype": "pcie", 00:35:31.405 "traddr": "0000:00:10.0", 00:35:31.405 "name": "Nvme0" 00:35:31.405 }, 00:35:31.405 "method": "bdev_nvme_attach_controller" 00:35:31.405 }, 00:35:31.405 { 00:35:31.405 "method": "bdev_wait_for_examine" 00:35:31.405 } 00:35:31.405 ] 00:35:31.405 } 00:35:31.405 ] 00:35:31.405 } 00:35:31.405 [2024-07-25 00:19:27.202174] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
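[editor's note] The START TEST / END TEST banners and the real/user/sys timings in this log come from the harness's run_test wrapper in autotest_common.sh. A loose reduction of its shape only; the real helper also manages xtrace and prints the asterisk banners:

    run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"          # bash's time builtin produces the real/user/sys lines
      local rc=$?
      echo "END TEST $name"
      return $rc
    }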
00:35:31.405 [2024-07-25 00:19:27.202527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116100 ] 00:35:31.663 [2024-07-25 00:19:27.374471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.663 [2024-07-25 00:19:27.522998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.794  Copying: 48/48 [kB] (average 46 MBps) 00:35:32.794 00:35:32.794 00:19:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:35:32.794 00:19:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:32.794 00:19:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:32.794 00:19:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:32.794 { 00:35:32.794 "subsystems": [ 00:35:32.794 { 00:35:32.794 "subsystem": "bdev", 00:35:32.794 "config": [ 00:35:32.794 { 00:35:32.794 "params": { 00:35:32.794 "trtype": "pcie", 00:35:32.794 "traddr": "0000:00:10.0", 00:35:32.794 "name": "Nvme0" 00:35:32.794 }, 00:35:32.794 "method": "bdev_nvme_attach_controller" 00:35:32.794 }, 00:35:32.794 { 00:35:32.794 "method": "bdev_wait_for_examine" 00:35:32.794 } 00:35:32.794 ] 00:35:32.794 } 00:35:32.794 ] 00:35:32.794 } 00:35:32.794 [2024-07-25 00:19:28.616779] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:32.794 [2024-07-25 00:19:28.617197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116119 ] 00:35:33.052 [2024-07-25 00:19:28.790037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.310 [2024-07-25 00:19:28.938788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.505  Copying: 48/48 [kB] (average 46 MBps) 00:35:34.505 00:35:34.505 00:19:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:34.505 00:19:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:35:34.505 00:19:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:34.505 00:19:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:34.505 00:19:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:35:34.505 00:19:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:34.505 00:19:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:34.505 00:19:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:34.505 00:19:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:34.505 00:19:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:34.505 00:19:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:34.505 { 00:35:34.505 "subsystems": [ 
00:35:34.505 { 00:35:34.505 "subsystem": "bdev", 00:35:34.505 "config": [ 00:35:34.505 { 00:35:34.505 "params": { 00:35:34.505 "trtype": "pcie", 00:35:34.505 "traddr": "0000:00:10.0", 00:35:34.505 "name": "Nvme0" 00:35:34.505 }, 00:35:34.505 "method": "bdev_nvme_attach_controller" 00:35:34.505 }, 00:35:34.505 { 00:35:34.505 "method": "bdev_wait_for_examine" 00:35:34.505 } 00:35:34.505 ] 00:35:34.505 } 00:35:34.505 ] 00:35:34.505 } 00:35:34.505 [2024-07-25 00:19:30.232216] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:34.505 [2024-07-25 00:19:30.232561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116145 ] 00:35:34.764 [2024-07-25 00:19:30.403139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.764 [2024-07-25 00:19:30.553720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.962  Copying: 1024/1024 [kB] (average 1000 MBps) 00:35:35.962 00:35:35.962 ************************************ 00:35:35.962 END TEST dd_rw 00:35:35.962 ************************************ 00:35:35.962 00:35:35.962 real 0m30.118s 00:35:35.962 user 0m24.599s 00:35:35.962 sys 0m3.838s 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:35:35.962 ************************************ 00:35:35.962 START TEST dd_rw_offset 00:35:35.962 ************************************ 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=4hwcp1s9t9bouyac068cy1db7bykzi2x9c7m8b2p7x5t17wdqjsa05pbq4lo4y6cdh3qmh65rfo0np4z2um008fsldcssk26jnn9pkmzex5ulbo07gqpom4f5cirlov8lvmnx19ojvmlswa5wn93eh5pjb37rdflikunvvou8awipseti5dqstxcgkk1zwz0b5wnujb9u436mz0xtjj30azwdbjx64a5fal5nf5shv7qw5y5qu2isr4ry17omipkbu77yw3890ioxhwzhycs332dp7cehkle9slsp6khit02ovahki6zqrw54fh5pd9ciehuas0ksbw9jz5s0043ljpx0eyvxe7qzqc8vzzdoqppf8h9sdew3h66z65gktosmvi2exh9evurstlqzqeido3m4a3plvr4m0lt9qnvo6yk83a2dtbdyxsw5tlevvy9z2xb9qwxx0730zd3dujkf8dw9j9fap1bp4v11njcxdue8ez0yxejq5g2fa2c102jqhxdeh2q9iffu32fi1d4ufoa561l7oj6dqtpo1umowk04se13bsmhyre6mdrij1kewdzn12e8eufp5saxejzthu46efoym3diy7qnkexy0kxel9327nqdt7eri47ewpooyhqgl01ptpn9srz21cg0se8u72i8rxz058mo9t6u6qnl11j1uhtbhjkfponqygbftwruqr3k9ulq5wb3ekrh0tcnarx9s12clbrawgfiltal069m2wxibmb10kt3wb7m8j7xq791krlkr0taovw0urd7i8n2ufrbvgyk8b5r9u70gevsyuj6pl45f2pig5467dxlytyw3gcmhjydyw5dwgwkw1citr2udd88xteugldbrh3uesugkidh8c027nks4mrzzlak7qildxdhg3d3zjkh50pcprtz6luk7r1nld2tgag1hpk3odb182ck68oumorbso3zjrjq6lhmpiosm1f1pnrztl3rdr452md8gofy0jjvd761dnqrfmuzqq9ijsn847ute51e3ax3wmmbbj1t6pdyvnqsfh8rr05ws7va6bkh7tosc2k45zscyv69dshb5sbpn8vmklxgw07np1liixj77kggw8qam532b7gwr8ss4jw7y3jvleize49q2aacfs43woyrjgcy837rq9c1pctjnynmgwvcwt9byleenu1kn3f5op6tdqkeonrgxytd99y7o5vqkq68bv1jo3sxa6f46fn4var4t8tp4xj5qclcbac3vqk52210v7h003ajwnc93by1ya9czbmbklr4pgv0ounh20cph5bfktz1qlbpw9m92bop0rjpbu0ttvnhrkwnwpwmbcppgude6fn09nvufoajmnew2rrplfos41975gnnm4t72oyfu1n0zwdyyirb2bmbtfpd7rczpmhu652lvg0ldyt0j7r6mlsec1mdh0dfcvk6p974j0a1x6qsfev9q0k7v6k79tanmj0d6zqzpwp1ugkn7oznnd4ajbomc4pwlfe2c4iope3cc92gpr0osamnhq0nuubnylk7dcwb4h2jhpxtte4uybix1hecc7z3kdauc8ik8i8ia2mojj0hc20lov3lq89ow30q8jh5plib14hvoj53rb0un00fv6hg0740l2xvcwuaahrer28pomtv28alu6y0sxvfn071ub0k9zhz5xm063srfezadatuekjj108n5kuqihp91kdznbsvuphovlyfhkw1sdcs8nylbklddlry7sardj5gxn4gbenacibqrc6nx5rmu66yhyu9y61n2vyvpavuuyxu07zk3s3irest4g58lvczhgdtesfushqg6ew4z6vbmxxg1n5lxulelwmwtx9nvpk57vbsr1hp4bc1g1molpbwe6e81kgrx0u7spyd52molosas8potivimh27y5ibggopihm72btxenciwomsvfbtb5d3py0khj9lml7wmi6u1f7cua0lr5b3cv11moq3uj71e13kubj6lo1exf8z9fniy1nucarlwtrofonw1edi52wvh2dokodky9wynlh8z2huhtulx2t79gbmct2jo7azsm3bilmf8dxl9s0lvkk4m2v1yba8xa4xxxbngeyke96hpfw8u87wwbm7yzsfw9s01p0jx3vf78q7qsumj68do28kx28nka7a2bdtifzeai2yv134h9ifmh0h5t0rn6hz3arl4crvhm32014obpmp8lsrd1xzcoiy4n54d2rjguudzdt5mtae7gjygjmj7eaqeg71z11366zw9y148zyd4s2oqifddr0d4q3bsemrw3zik6xs6w8lmiskwjd6s5dqhnp9wr0x0c9sr1hrqvkl1cnnjinpsoieoivpuqgy294w5xmgpuejsj9fx19we6pysh82k832oyap8vqz6h0rfh5ii3h55gwud97brhrowmt7p33s05f5itx4c048i8q12htvwo3t5krr3cmhe2o5watjz48p7svwaykvhyra7x80pwzsa2tbc4c4ex6aefwgwggmbu8hrp86570jc1mjbv6wxen8xjw9m9notszd38o3c0xmp7v3atnks0kk87x4hjat2oks9v6fjz7k55zqjr7wr136pkl8kiiv8wenzt7kf77eqjow327shdzsmzq51b9yaxn9sgxait9caublddpztzoaonqh5lfbp43gi3kvaeqbqw89yhx9mdwucz8dr304tp5rmf2gnhuko847vlaq3poyq20t4judtgtsv8xo78wmym5qxme2x5sd03ih90gagxb98irdm0qcuz46ng2lxyfc87gnxb1uyegsjofupbzbfe4e2vejqqel6eruhmvxgv7divd9cawne5opm29hqmiesiequ3ihjz6twni1xj0f2mmmuo0ukn6ty8xbdvt35mwcrcaj7tuyjpq6745pzwpwj5h64xue8gtf1ohc2a6bnzuumqcimnnj82ban1cihzgcq5xak2eaqc2p3dw6jl2dmv67pd4enzing3fbdfx8nhsnp4cro46xgxhbxfe198btw1jimsenit7qanznzfh0rasntjd2lngxdd6dmjcjsbgr9p4jlknzgot7g78r4dw01vmya4u4vl4m8vdsq2otd0lyyzgztnsc8qb7opxvsc6ddlw4dk0g153c2hmmpby1t8vxqg6z1vo4h1jfjm3pjfm44pjwf7zlezk7u2voryzvn5fygo68nweaubra59v139gs57ojuwmb8jf39jxg53xoxd901r8kkro3kfu6zhnzou2p5hg7ey1kz7sibk8u92fp1cnmuam810exunlea2x2ncvg5opok3onipwj7795ejka7bph0ce2tek28q2x6grln4if0dyiwkic88asctkncx25mrk067iqqrmoyvuzhl4aylrkhps5hccpk2wey2d9g2uaoegl4mx676smhj844438p7k98qaprsb3xn47of7nkxxox34c46ckwljphyn4yrwpxnsikvqpb6geq6cu5frbt6eh1qr34
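[editor's note] dd_rw_offset writes one native block of the generated data at an offset: --seek=1 places the 4096-byte payload one block into the bdev, which is the "Copying: 4096/4096 [B]" transfer that follows. A sketch, assuming $data holds the 4096 generated characters shown above:

    printf '%s' "$data" > test/dd/dd.dump0
    "$SPDK_DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 \
        --json <(echo "$conf")   # lands at byte offset 1 * native_bs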
pg58ssaoyrx90be3nn39hif86bhsa0c43iiiqjj422sxcmm8ucm98ifgqoxr3xxv37y29lasf6j0v48hbld6eo049d8qeyrq6co30oq96cstmo4ffib8m7wq5ieeyhrk3ths8fdvvcmkx0lnmm83vv8on2hualsgw58br8dhh0l44evyv0jldtsytgsauv2kbgacbuxrttus45rc0n9qz2upl495sjswhkuk7xpt9jpiusig4abagvx1kepqzxjbusgtom4gd7uwa1jqkfjvh78og0jk51i3mryj7ecjukwbwcv1nxpbl722wh2v11od9rzc4wyiimg7ra6x610nl0n0iq6scu97kf558omvnhpb5ygi2a3dzoxx1xwe1qrpdizpmovqtgijuae45f2dqrz4be6fk52yxn4vcjxrhxay5t62yk12fbjr5c5hfh465uqxox4r2aqkte1fsswivz4yml8e3k2oh4eef7h702ry8sbfroeb8fxwdlxp9f7ehsaiageolz6x9gp54xkl180gvosmebrh64 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:35:35.962 00:19:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:35:35.962 { 00:35:35.962 "subsystems": [ 00:35:35.962 { 00:35:35.962 "subsystem": "bdev", 00:35:35.962 "config": [ 00:35:35.962 { 00:35:35.962 "params": { 00:35:35.962 "trtype": "pcie", 00:35:35.962 "traddr": "0000:00:10.0", 00:35:35.962 "name": "Nvme0" 00:35:35.962 }, 00:35:35.962 "method": "bdev_nvme_attach_controller" 00:35:35.962 }, 00:35:35.962 { 00:35:35.962 "method": "bdev_wait_for_examine" 00:35:35.962 } 00:35:35.962 ] 00:35:35.962 } 00:35:35.962 ] 00:35:35.962 } 00:35:35.962 [2024-07-25 00:19:31.762269] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:35.962 [2024-07-25 00:19:31.762443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116182 ] 00:35:36.220 [2024-07-25 00:19:31.934291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.220 [2024-07-25 00:19:32.083807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.720  Copying: 4096/4096 [B] (average 4000 kBps) 00:35:37.720 00:35:37.720 00:19:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:35:37.720 00:19:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:35:37.720 00:19:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:35:37.720 00:19:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:35:37.721 { 00:35:37.721 "subsystems": [ 00:35:37.721 { 00:35:37.721 "subsystem": "bdev", 00:35:37.721 "config": [ 00:35:37.721 { 00:35:37.721 "params": { 00:35:37.721 "trtype": "pcie", 00:35:37.721 "traddr": "0000:00:10.0", 00:35:37.721 "name": "Nvme0" 00:35:37.721 }, 00:35:37.721 "method": "bdev_nvme_attach_controller" 00:35:37.721 }, 00:35:37.721 { 00:35:37.721 "method": "bdev_wait_for_examine" 00:35:37.721 } 00:35:37.721 ] 00:35:37.721 } 00:35:37.721 ] 00:35:37.721 } 00:35:37.721 [2024-07-25 00:19:33.350249] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
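[editor's note] The read side skips that same single block and pulls exactly one block back, then the script compares the round-tripped bytes in-shell; that comparison is what produces the long escaped pattern match below. Sketch under the same assumptions:

    "$SPDK_DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip=1 --count=1 \
        --json <(echo "$conf")
    read -rn4096 data_check < test/dd/dd.dump1   # as in the trace above
    [[ $data == "$data_check" ]]                 # any mismatch fails the test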
00:35:37.721 [2024-07-25 00:19:33.350421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116207 ] 00:35:37.721 [2024-07-25 00:19:33.517658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.979 [2024-07-25 00:19:33.669731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.188  Copying: 4096/4096 [B] (average 4000 kBps) 00:35:39.188 00:35:39.188 00:19:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:19:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 4hwcp1s9t9bouyac068cy1db7bykzi2x9c7m8b2p7x5t17wdqjsa05pbq4lo4y6... == \4\h\w\c\p\1\s\9\t\9\b\o\u\y\a\c... ]] 00:35:39.188 00:35:39.188 real 0m3.068s 00:35:39.188 user 0m2.488s 00:35:39.188 sys 0m0.392s 00:35:39.188 00:19:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:39.188 00:19:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:35:39.188 ************************************ 00:35:39.188 END TEST dd_rw_offset 00:35:39.188 ************************************ 00:35:39.188 00:19:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:19:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:19:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:19:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:19:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:19:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:19:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:39.189 00:19:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:35:39.189 00:19:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:39.189 00:19:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:35:39.189 { 00:35:39.189 "subsystems": [ 00:35:39.189 { 00:35:39.189 "subsystem": "bdev", 00:35:39.189 "config": [ 00:35:39.189 { 00:35:39.189 "params": { 00:35:39.189 "trtype": "pcie", 00:35:39.189 "traddr": "0000:00:10.0", 00:35:39.189 "name": "Nvme0" 00:35:39.189 }, 00:35:39.189 "method": "bdev_nvme_attach_controller" 00:35:39.189 }, 00:35:39.189 { 00:35:39.189 "method": "bdev_wait_for_examine" 00:35:39.189 } 00:35:39.189 ] 00:35:39.189 } 00:35:39.189 ] 00:35:39.189 } 00:35:39.189 [2024-07-25 00:19:34.822043] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:39.189 [2024-07-25 00:19:34.822213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116241 ] 00:35:39.189 [2024-07-25 00:19:34.994015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.458 [2024-07-25 00:19:35.149390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.650  Copying: 1024/1024 [kB] (average 500 MBps) 00:35:40.650 00:35:40.650 00:19:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:40.650 00:35:40.650 real 0m36.873s 00:35:40.650 user 0m29.838s 00:35:40.650 sys 0m4.906s 00:35:40.650 00:19:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:40.650 00:19:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:35:40.650 ************************************ 00:35:40.650 END TEST spdk_dd_basic_rw 00:35:40.650 ************************************ 00:35:40.650 00:19:36 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:35:40.650 00:19:36 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:40.650 00:19:36 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:40.650 00:19:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:35:40.650 ************************************ 00:35:40.650 START TEST spdk_dd_posix 00:35:40.650 ************************************ 00:35:40.650 00:19:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:35:40.650 * Looking for test storage... 
00:35:40.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:36 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:36 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:...:/usr/local/bin:/usr/local/sbin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:36 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:...:/snap/bin 00:19:36 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:...:/snap/bin 00:19:36 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:...:/snap/bin 00:19:36 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # export PATH 00:19:36 spdk_dd.spdk_dd_posix -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:...:/snap/bin 00:19:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:19:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:19:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:19:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:19:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:19:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:35:40.651 * First test run, liburing in use 00:19:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:19:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:35:40.651 ************************************ 00:35:40.651 START TEST dd_flag_append 00:35:40.651 ************************************ 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=2do6jdqyq3yfnen7e72jvv2vpc4kj31j 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:35:40.909 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=o1llnvy6ki1s8ug4vblw4m8wei43wgk4 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 2do6jdqyq3yfnen7e72jvv2vpc4kj31j 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- 
dd/posix.sh@23 -- # printf %s o1llnvy6ki1s8ug4vblw4m8wei43wgk4 00:35:40.909 00:19:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:35:40.909 [2024-07-25 00:19:36.563736] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:40.909 [2024-07-25 00:19:36.563910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116311 ] 00:35:40.909 [2024-07-25 00:19:36.709732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.168 [2024-07-25 00:19:36.869271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.363  Copying: 32/32 [B] (average 31 kBps) 00:35:42.363 00:35:42.363 00:19:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ o1llnvy6ki1s8ug4vblw4m8wei43wgk42do6jdqyq3yfnen7e72jvv2vpc4kj31j == \o\1\l\l\n\v\y\6\k\i\1\s\8\u\g\4\v\b\l\w\4\m\8\w\e\i\4\3\w\g\k\4\2\d\o\6\j\d\q\y\q\3\y\f\n\e\n\7\e\7\2\j\v\v\2\v\p\c\4\k\j\3\1\j ]] 00:35:42.363 00:35:42.363 real 0m1.484s 00:35:42.363 user 0m1.188s 00:35:42.363 sys 0m0.185s 00:35:42.363 ************************************ 00:35:42.363 END TEST dd_flag_append 00:35:42.363 ************************************ 00:35:42.363 00:19:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:42.363 00:19:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:35:42.363 ************************************ 00:35:42.363 START TEST dd_flag_directory 00:35:42.363 ************************************ 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:35:42.363 00:19:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:35:42.363 [2024-07-25 00:19:38.112045] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:42.363 [2024-07-25 00:19:38.112221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116350 ] 00:35:42.622 [2024-07-25 00:19:38.283065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.622 [2024-07-25 00:19:38.442368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.882 [2024-07-25 00:19:38.660636] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:35:42.882 [2024-07-25 00:19:38.660726] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:35:42.882 [2024-07-25 00:19:38.660747] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:43.450 [2024-07-25 00:19:39.209399] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:35:43.709 00:19:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:35:43.968 [2024-07-25 00:19:39.610968] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:43.968 [2024-07-25 00:19:39.611148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116366 ] 00:35:43.968 [2024-07-25 00:19:39.782441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.227 [2024-07-25 00:19:39.941579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.486 [2024-07-25 00:19:40.162915] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:35:44.486 [2024-07-25 00:19:40.162988] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:35:44.486 [2024-07-25 00:19:40.163008] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:45.055 [2024-07-25 00:19:40.705008] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:45.314 00:35:45.314 real 0m3.001s 00:35:45.314 user 0m2.412s 00:35:45.314 sys 0m0.387s 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:45.314 ************************************ 00:35:45.314 END TEST dd_flag_directory 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:35:45.314 ************************************ 
00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:35:45.314 ************************************ 00:35:45.314 START TEST dd_flag_nofollow 00:35:45.314 ************************************ 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:35:45.314 00:19:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:45.314 [2024-07-25 00:19:41.165219] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:35:45.314 [2024-07-25 00:19:41.165405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116407 ] 00:35:45.573 [2024-07-25 00:19:41.338172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.833 [2024-07-25 00:19:41.490478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.833 [2024-07-25 00:19:41.702575] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:35:45.833 [2024-07-25 00:19:41.702687] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:35:45.833 [2024-07-25 00:19:41.702712] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:46.401 [2024-07-25 00:19:42.249039] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:35:46.969 00:19:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:35:46.969 [2024-07-25 00:19:42.657396] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:46.969 [2024-07-25 00:19:42.657589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116427 ] 00:35:46.969 [2024-07-25 00:19:42.827177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.229 [2024-07-25 00:19:42.981372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.488 [2024-07-25 00:19:43.193753] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:35:47.488 [2024-07-25 00:19:43.193838] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:35:47.488 [2024-07-25 00:19:43.193861] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:48.054 [2024-07-25 00:19:43.735809] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:35:48.312 00:19:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:35:48.312 00:19:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:48.312 00:19:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:35:48.312 00:19:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:35:48.312 00:19:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:35:48.312 00:19:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:48.312 00:19:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:35:48.312 00:19:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:35:48.312 00:19:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:35:48.312 00:19:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:48.312 [2024-07-25 00:19:44.161734] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:35:48.312 [2024-07-25 00:19:44.161955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116447 ] 00:35:48.571 [2024-07-25 00:19:44.328839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.830 [2024-07-25 00:19:44.475769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:49.768  Copying: 512/512 [B] (average 500 kBps) 00:35:49.768 00:35:49.768 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ t8v270irzf6fe7xuze9gt8c254ynfjdpdpsglkq6ylcoeh12mnoe409lxwawsoke5mw830q75s2qffaz7... == \t\8\v\2\7\0\i\r\z\f\6\f\e\7... ]] 00:35:49.768 00:35:49.768 real 0m4.504s 00:35:49.768 user 0m3.604s 00:35:49.768 sys 0m0.586s 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:49.768 ************************************ 00:35:49.768 END TEST dd_flag_nofollow 00:35:49.768 ************************************ 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:35:50.027 00:19:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:19:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:35:50.027 ************************************ 00:35:50.027 START TEST dd_flag_noatime 00:35:50.027 ************************************ 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:19:45 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:35:50.027 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:35:50.027 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:35:50.027 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:35:50.027 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721866784 00:35:50.027 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:50.027 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721866785 00:35:50.027 00:19:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:35:50.964 00:19:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:50.964 [2024-07-25 00:19:46.743259] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:50.964 [2024-07-25 00:19:46.743443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116490 ] 00:35:51.223 [2024-07-25 00:19:46.914983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.223 [2024-07-25 00:19:47.068309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:52.420  Copying: 512/512 [B] (average 500 kBps) 00:35:52.420 00:35:52.420 00:19:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:35:52.420 00:19:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721866784 )) 00:35:52.420 00:19:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:52.420 00:19:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721866785 )) 00:35:52.420 00:19:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:52.420 [2024-07-25 00:19:48.263637] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:35:52.420 [2024-07-25 00:19:48.263842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116518 ] 00:35:52.679 [2024-07-25 00:19:48.434487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.938 [2024-07-25 00:19:48.586428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.875  Copying: 512/512 [B] (average 500 kBps) 00:35:53.875 00:35:53.875 00:19:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:35:53.875 00:19:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721866788 )) 00:35:53.875 00:35:53.875 real 0m4.065s 00:35:53.875 user 0m2.444s 00:35:53.875 sys 0m0.398s 00:35:53.875 00:19:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:53.875 00:19:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:35:53.875 ************************************ 00:35:53.875 END TEST dd_flag_noatime 00:35:53.875 ************************************ 00:35:54.134 00:19:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:35:54.135 ************************************ 00:35:54.135 START TEST dd_flags_misc 00:35:54.135 ************************************ 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:35:54.135 00:19:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:35:54.135 [2024-07-25 00:19:49.833671] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:35:54.135 [2024-07-25 00:19:49.833907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116551 ] 00:35:54.429 [2024-07-25 00:19:50.007366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.429 [2024-07-25 00:19:50.167891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:55.625  Copying: 512/512 [B] (average 500 kBps) 00:35:55.625 00:35:55.625 00:19:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e6m04mvddgkwtiprs0g04hxdn9z21mx2gwbnb73k9b2flcis9ufi7uwcmxniuatw5x84si63j8nx4... == \e\6\m\0\4\m\v\d\d\g\k\w... ]] 00:19:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:35:55.625 [2024-07-25 00:19:51.357262] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:55.625 [2024-07-25 00:19:51.357432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116571 ] 00:35:55.885 [2024-07-25 00:19:51.525784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.885 [2024-07-25 00:19:51.677210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.081  Copying: 512/512 [B] (average 500 kBps) 00:35:57.081 00:35:57.081 00:19:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e6m04mvddgkwtiprs0g04hxdn9z21mx2... == \e\6\m\0\4\m\v\d\d\g\k\w... ]] 00:19:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:35:57.081 [2024-07-25 00:19:52.869138] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:57.081 [2024-07-25 00:19:52.869317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116586 ] 00:35:57.340 [2024-07-25 00:19:53.038861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.340 [2024-07-25 00:19:53.188427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.535  Copying: 512/512 [B] (average 71 kBps) 00:35:58.535 00:35:58.535 00:19:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e6m04mvddgkwtiprs0g04hxdn9z21mx2... == \e\6\m\0\4\m\v\d\d\g\k\w... ]] 00:19:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:35:58.535 [2024-07-25 00:19:54.391300] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:35:58.535 [2024-07-25 00:19:54.391476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116606 ] 00:35:58.793 [2024-07-25 00:19:54.560676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.052 [2024-07-25 00:19:54.710735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.246  Copying: 512/512 [B] (average 125 kBps) 00:36:00.246 00:36:00.246 00:19:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e6m04mvddgkwtiprs0g04hxdn9z21mx2... == \e\6\m\0\4\m\v\d\d\g\k\w... ]] 00:19:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:19:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:19:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:19:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:36:00.246 00:19:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:36:00.246 [2024-07-25 00:19:55.915355] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:00.246 [2024-07-25 00:19:55.915531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116620 ] 00:36:00.246 [2024-07-25 00:19:56.086638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.505 [2024-07-25 00:19:56.236722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.698  Copying: 512/512 [B] (average 500 kBps) 00:36:01.698 00:36:01.699 00:19:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b5u0jirxzkyl32jzkgqa723emmoijb27xldzo4rb5iimca9gimqf8iex3ih7c0xjv3xyxuarqtblej7i1288qon0kd5hux879fogqu5w6u8puk6lc052yh8jihwh8tq132kqltkg1t5kasp5ga3grl97dsmqhjdvsw3a96bby6srspbgtsuj0mn41ewkl19q8jfz7wpzkc4wcb94k27moa87a067rp49ypkrx5238q263hkgu6h0v8ng6nqirlswvmf9snyhqg76jpbt5so5yg1w36niswk7p63zvo3j3dzrsjqiz8kyijnb0e3zzkfflesidds67wb3m8oxtmwtrg6hx9zncr2qlghe0ei65y9etlcoxz5ldj8w0tjfo4frg461pqxxh87gfenifuks0mwywvlfwowcdid3xh2tli2u4q0r9px4lk33huu1s6schbg81i092l18wllmob89hwets07r4o13a83hrodhxewer7vlrwarb91xnkrpxzvi == \b\5\u\0\j\i\r\x\z\k\y\l\3\2\j\z\k\g\q\a\7\2\3\e\m\m\o\i\j\b\2\7\x\l\d\z\o\4\r\b\5\i\i\m\c\a\9\g\i\m\q\f\8\i\e\x\3\i\h\7\c\0\x\j\v\3\x\y\x\u\a\r\q\t\b\l\e\j\7\i\1\2\8\8\q\o\n\0\k\d\5\h\u\x\8\7\9\f\o\g\q\u\5\w\6\u\8\p\u\k\6\l\c\0\5\2\y\h\8\j\i\h\w\h\8\t\q\1\3\2\k\q\l\t\k\g\1\t\5\k\a\s\p\5\g\a\3\g\r\l\9\7\d\s\m\q\h\j\d\v\s\w\3\a\9\6\b\b\y\6\s\r\s\p\b\g\t\s\u\j\0\m\n\4\1\e\w\k\l\1\9\q\8\j\f\z\7\w\p\z\k\c\4\w\c\b\9\4\k\2\7\m\o\a\8\7\a\0\6\7\r\p\4\9\y\p\k\r\x\5\2\3\8\q\2\6\3\h\k\g\u\6\h\0\v\8\n\g\6\n\q\i\r\l\s\w\v\m\f\9\s\n\y\h\q\g\7\6\j\p\b\t\5\s\o\5\y\g\1\w\3\6\n\i\s\w\k\7\p\6\3\z\v\o\3\j\3\d\z\r\s\j\q\i\z\8\k\y\i\j\n\b\0\e\3\z\z\k\f\f\l\e\s\i\d\d\s\6\7\w\b\3\m\8\o\x\t\m\w\t\r\g\6\h\x\9\z\n\c\r\2\q\l\g\h\e\0\e\i\6\5\y\9\e\t\l\c\o\x\z\5\l\d\j\8\w\0\t\j\f\o\4\f\r\g\4\6\1\p\q\x\x\h\8\7\g\f\e\n\i\f\u\k\s\0\m\w\y\w\v\l\f\w\o\w\c\d\i\d\3\x\h\2\t\l\i\2\u\4\q\0\r\9\p\x\4\l\k\3\3\h\u\u\1\s\6\s\c\h\b\g\8\1\i\0\9\2\l\1\8\w\l\l\m\o\b\8\9\h\w\e\t\s\0\7\r\4\o\1\3\a\8\3\h\r\o\d\h\x\e\w\e\r\7\v\l\r\w\a\r\b\9\1\x\n\k\r\p\x\z\v\i ]] 00:36:01.699 00:19:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:01.699 00:19:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:36:01.699 [2024-07-25 00:19:57.427848] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:01.699 [2024-07-25 00:19:57.428030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116640 ] 00:36:01.957 [2024-07-25 00:19:57.597856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.957 [2024-07-25 00:19:57.745423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.150  Copying: 512/512 [B] (average 500 kBps) 00:36:03.150 00:36:03.150 00:19:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b5u0jirxzkyl32jzkgqa723emmoijb27xldzo4rb5iimca9gimqf8iex3ih7c0xjv3xyxuarqtblej7i1288qon0kd5hux879fogqu5w6u8puk6lc052yh8jihwh8tq132kqltkg1t5kasp5ga3grl97dsmqhjdvsw3a96bby6srspbgtsuj0mn41ewkl19q8jfz7wpzkc4wcb94k27moa87a067rp49ypkrx5238q263hkgu6h0v8ng6nqirlswvmf9snyhqg76jpbt5so5yg1w36niswk7p63zvo3j3dzrsjqiz8kyijnb0e3zzkfflesidds67wb3m8oxtmwtrg6hx9zncr2qlghe0ei65y9etlcoxz5ldj8w0tjfo4frg461pqxxh87gfenifuks0mwywvlfwowcdid3xh2tli2u4q0r9px4lk33huu1s6schbg81i092l18wllmob89hwets07r4o13a83hrodhxewer7vlrwarb91xnkrpxzvi == \b\5\u\0\j\i\r\x\z\k\y\l\3\2\j\z\k\g\q\a\7\2\3\e\m\m\o\i\j\b\2\7\x\l\d\z\o\4\r\b\5\i\i\m\c\a\9\g\i\m\q\f\8\i\e\x\3\i\h\7\c\0\x\j\v\3\x\y\x\u\a\r\q\t\b\l\e\j\7\i\1\2\8\8\q\o\n\0\k\d\5\h\u\x\8\7\9\f\o\g\q\u\5\w\6\u\8\p\u\k\6\l\c\0\5\2\y\h\8\j\i\h\w\h\8\t\q\1\3\2\k\q\l\t\k\g\1\t\5\k\a\s\p\5\g\a\3\g\r\l\9\7\d\s\m\q\h\j\d\v\s\w\3\a\9\6\b\b\y\6\s\r\s\p\b\g\t\s\u\j\0\m\n\4\1\e\w\k\l\1\9\q\8\j\f\z\7\w\p\z\k\c\4\w\c\b\9\4\k\2\7\m\o\a\8\7\a\0\6\7\r\p\4\9\y\p\k\r\x\5\2\3\8\q\2\6\3\h\k\g\u\6\h\0\v\8\n\g\6\n\q\i\r\l\s\w\v\m\f\9\s\n\y\h\q\g\7\6\j\p\b\t\5\s\o\5\y\g\1\w\3\6\n\i\s\w\k\7\p\6\3\z\v\o\3\j\3\d\z\r\s\j\q\i\z\8\k\y\i\j\n\b\0\e\3\z\z\k\f\f\l\e\s\i\d\d\s\6\7\w\b\3\m\8\o\x\t\m\w\t\r\g\6\h\x\9\z\n\c\r\2\q\l\g\h\e\0\e\i\6\5\y\9\e\t\l\c\o\x\z\5\l\d\j\8\w\0\t\j\f\o\4\f\r\g\4\6\1\p\q\x\x\h\8\7\g\f\e\n\i\f\u\k\s\0\m\w\y\w\v\l\f\w\o\w\c\d\i\d\3\x\h\2\t\l\i\2\u\4\q\0\r\9\p\x\4\l\k\3\3\h\u\u\1\s\6\s\c\h\b\g\8\1\i\0\9\2\l\1\8\w\l\l\m\o\b\8\9\h\w\e\t\s\0\7\r\4\o\1\3\a\8\3\h\r\o\d\h\x\e\w\e\r\7\v\l\r\w\a\r\b\9\1\x\n\k\r\p\x\z\v\i ]] 00:36:03.150 00:19:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:03.150 00:19:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:36:03.150 [2024-07-25 00:19:58.937955] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:03.150 [2024-07-25 00:19:58.938131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116655 ] 00:36:03.408 [2024-07-25 00:19:59.109547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.408 [2024-07-25 00:19:59.259128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.603  Copying: 512/512 [B] (average 166 kBps) 00:36:04.603 00:36:04.603 00:20:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b5u0jirxzkyl32jzkgqa723emmoijb27xldzo4rb5iimca9gimqf8iex3ih7c0xjv3xyxuarqtblej7i1288qon0kd5hux879fogqu5w6u8puk6lc052yh8jihwh8tq132kqltkg1t5kasp5ga3grl97dsmqhjdvsw3a96bby6srspbgtsuj0mn41ewkl19q8jfz7wpzkc4wcb94k27moa87a067rp49ypkrx5238q263hkgu6h0v8ng6nqirlswvmf9snyhqg76jpbt5so5yg1w36niswk7p63zvo3j3dzrsjqiz8kyijnb0e3zzkfflesidds67wb3m8oxtmwtrg6hx9zncr2qlghe0ei65y9etlcoxz5ldj8w0tjfo4frg461pqxxh87gfenifuks0mwywvlfwowcdid3xh2tli2u4q0r9px4lk33huu1s6schbg81i092l18wllmob89hwets07r4o13a83hrodhxewer7vlrwarb91xnkrpxzvi == \b\5\u\0\j\i\r\x\z\k\y\l\3\2\j\z\k\g\q\a\7\2\3\e\m\m\o\i\j\b\2\7\x\l\d\z\o\4\r\b\5\i\i\m\c\a\9\g\i\m\q\f\8\i\e\x\3\i\h\7\c\0\x\j\v\3\x\y\x\u\a\r\q\t\b\l\e\j\7\i\1\2\8\8\q\o\n\0\k\d\5\h\u\x\8\7\9\f\o\g\q\u\5\w\6\u\8\p\u\k\6\l\c\0\5\2\y\h\8\j\i\h\w\h\8\t\q\1\3\2\k\q\l\t\k\g\1\t\5\k\a\s\p\5\g\a\3\g\r\l\9\7\d\s\m\q\h\j\d\v\s\w\3\a\9\6\b\b\y\6\s\r\s\p\b\g\t\s\u\j\0\m\n\4\1\e\w\k\l\1\9\q\8\j\f\z\7\w\p\z\k\c\4\w\c\b\9\4\k\2\7\m\o\a\8\7\a\0\6\7\r\p\4\9\y\p\k\r\x\5\2\3\8\q\2\6\3\h\k\g\u\6\h\0\v\8\n\g\6\n\q\i\r\l\s\w\v\m\f\9\s\n\y\h\q\g\7\6\j\p\b\t\5\s\o\5\y\g\1\w\3\6\n\i\s\w\k\7\p\6\3\z\v\o\3\j\3\d\z\r\s\j\q\i\z\8\k\y\i\j\n\b\0\e\3\z\z\k\f\f\l\e\s\i\d\d\s\6\7\w\b\3\m\8\o\x\t\m\w\t\r\g\6\h\x\9\z\n\c\r\2\q\l\g\h\e\0\e\i\6\5\y\9\e\t\l\c\o\x\z\5\l\d\j\8\w\0\t\j\f\o\4\f\r\g\4\6\1\p\q\x\x\h\8\7\g\f\e\n\i\f\u\k\s\0\m\w\y\w\v\l\f\w\o\w\c\d\i\d\3\x\h\2\t\l\i\2\u\4\q\0\r\9\p\x\4\l\k\3\3\h\u\u\1\s\6\s\c\h\b\g\8\1\i\0\9\2\l\1\8\w\l\l\m\o\b\8\9\h\w\e\t\s\0\7\r\4\o\1\3\a\8\3\h\r\o\d\h\x\e\w\e\r\7\v\l\r\w\a\r\b\9\1\x\n\k\r\p\x\z\v\i ]] 00:36:04.603 00:20:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:04.603 00:20:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:36:04.603 [2024-07-25 00:20:00.469751] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:04.603 [2024-07-25 00:20:00.469945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116675 ] 00:36:04.862 [2024-07-25 00:20:00.638914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.120 [2024-07-25 00:20:00.792693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.314  Copying: 512/512 [B] (average 125 kBps) 00:36:06.314 00:36:06.314 ************************************ 00:36:06.314 END TEST dd_flags_misc 00:36:06.314 ************************************ 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b5u0jirxzkyl32jzkgqa723emmoijb27xldzo4rb5iimca9gimqf8iex3ih7c0xjv3xyxuarqtblej7i1288qon0kd5hux879fogqu5w6u8puk6lc052yh8jihwh8tq132kqltkg1t5kasp5ga3grl97dsmqhjdvsw3a96bby6srspbgtsuj0mn41ewkl19q8jfz7wpzkc4wcb94k27moa87a067rp49ypkrx5238q263hkgu6h0v8ng6nqirlswvmf9snyhqg76jpbt5so5yg1w36niswk7p63zvo3j3dzrsjqiz8kyijnb0e3zzkfflesidds67wb3m8oxtmwtrg6hx9zncr2qlghe0ei65y9etlcoxz5ldj8w0tjfo4frg461pqxxh87gfenifuks0mwywvlfwowcdid3xh2tli2u4q0r9px4lk33huu1s6schbg81i092l18wllmob89hwets07r4o13a83hrodhxewer7vlrwarb91xnkrpxzvi == \b\5\u\0\j\i\r\x\z\k\y\l\3\2\j\z\k\g\q\a\7\2\3\e\m\m\o\i\j\b\2\7\x\l\d\z\o\4\r\b\5\i\i\m\c\a\9\g\i\m\q\f\8\i\e\x\3\i\h\7\c\0\x\j\v\3\x\y\x\u\a\r\q\t\b\l\e\j\7\i\1\2\8\8\q\o\n\0\k\d\5\h\u\x\8\7\9\f\o\g\q\u\5\w\6\u\8\p\u\k\6\l\c\0\5\2\y\h\8\j\i\h\w\h\8\t\q\1\3\2\k\q\l\t\k\g\1\t\5\k\a\s\p\5\g\a\3\g\r\l\9\7\d\s\m\q\h\j\d\v\s\w\3\a\9\6\b\b\y\6\s\r\s\p\b\g\t\s\u\j\0\m\n\4\1\e\w\k\l\1\9\q\8\j\f\z\7\w\p\z\k\c\4\w\c\b\9\4\k\2\7\m\o\a\8\7\a\0\6\7\r\p\4\9\y\p\k\r\x\5\2\3\8\q\2\6\3\h\k\g\u\6\h\0\v\8\n\g\6\n\q\i\r\l\s\w\v\m\f\9\s\n\y\h\q\g\7\6\j\p\b\t\5\s\o\5\y\g\1\w\3\6\n\i\s\w\k\7\p\6\3\z\v\o\3\j\3\d\z\r\s\j\q\i\z\8\k\y\i\j\n\b\0\e\3\z\z\k\f\f\l\e\s\i\d\d\s\6\7\w\b\3\m\8\o\x\t\m\w\t\r\g\6\h\x\9\z\n\c\r\2\q\l\g\h\e\0\e\i\6\5\y\9\e\t\l\c\o\x\z\5\l\d\j\8\w\0\t\j\f\o\4\f\r\g\4\6\1\p\q\x\x\h\8\7\g\f\e\n\i\f\u\k\s\0\m\w\y\w\v\l\f\w\o\w\c\d\i\d\3\x\h\2\t\l\i\2\u\4\q\0\r\9\p\x\4\l\k\3\3\h\u\u\1\s\6\s\c\h\b\g\8\1\i\0\9\2\l\1\8\w\l\l\m\o\b\8\9\h\w\e\t\s\0\7\r\4\o\1\3\a\8\3\h\r\o\d\h\x\e\w\e\r\7\v\l\r\w\a\r\b\9\1\x\n\k\r\p\x\z\v\i ]] 00:36:06.314 00:36:06.314 real 0m12.154s 00:36:06.314 user 0m9.724s 00:36:06.314 sys 0m1.508s 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:36:06.314 * Second test run, disabling liburing, forcing AIO 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:06.314 ************************************ 00:36:06.314 START TEST dd_flag_append_forced_aio 00:36:06.314 
************************************ 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=m49503jeqtd8zia1bsg6voz31jruv0fl 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=npv7af3ngd6by824krjj21ak8nraacmg 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s m49503jeqtd8zia1bsg6voz31jruv0fl 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s npv7af3ngd6by824krjj21ak8nraacmg 00:36:06.314 00:20:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:36:06.314 [2024-07-25 00:20:02.044869] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:06.314 [2024-07-25 00:20:02.045055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116708 ] 00:36:06.574 [2024-07-25 00:20:02.210069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.574 [2024-07-25 00:20:02.363995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:07.774  Copying: 32/32 [B] (average 31 kBps) 00:36:07.774 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ npv7af3ngd6by824krjj21ak8nraacmgm49503jeqtd8zia1bsg6voz31jruv0fl == \n\p\v\7\a\f\3\n\g\d\6\b\y\8\2\4\k\r\j\j\2\1\a\k\8\n\r\a\a\c\m\g\m\4\9\5\0\3\j\e\q\t\d\8\z\i\a\1\b\s\g\6\v\o\z\3\1\j\r\u\v\0\f\l ]] 00:36:07.774 00:36:07.774 real 0m1.520s 00:36:07.774 user 0m1.222s 00:36:07.774 sys 0m0.185s 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:07.774 ************************************ 00:36:07.774 END TEST dd_flag_append_forced_aio 00:36:07.774 ************************************ 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:07.774 ************************************ 00:36:07.774 START TEST dd_flag_directory_forced_aio 00:36:07.774 ************************************ 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:07.774 00:20:03 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:07.774 00:20:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:07.774 [2024-07-25 00:20:03.613601] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:36:07.774 [2024-07-25 00:20:03.613775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116747 ] 00:36:08.033 [2024-07-25 00:20:03.787053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.292 [2024-07-25 00:20:03.939951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.292 [2024-07-25 00:20:04.154335] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:08.292 [2024-07-25 00:20:04.154411] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:08.292 [2024-07-25 00:20:04.154431] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:08.859 [2024-07-25 00:20:04.700579] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:09.427 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:09.428 00:20:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:36:09.428 [2024-07-25 00:20:05.108946] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:36:09.428 [2024-07-25 00:20:05.109139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116764 ] 00:36:09.428 [2024-07-25 00:20:05.278781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.731 [2024-07-25 00:20:05.436698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:09.994 [2024-07-25 00:20:05.660091] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:09.994 [2024-07-25 00:20:05.660167] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:09.994 [2024-07-25 00:20:05.660191] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:10.562 [2024-07-25 00:20:06.202323] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:10.822 00:36:10.822 real 0m3.008s 00:36:10.822 user 0m2.403s 00:36:10.822 sys 0m0.403s 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:10.822 ************************************ 00:36:10.822 END TEST dd_flag_directory_forced_aio 00:36:10.822 ************************************ 
00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:10.822 ************************************ 00:36:10.822 START TEST dd_flag_nofollow_forced_aio 00:36:10.822 ************************************ 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:10.822 00:20:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:10.822 [2024-07-25 00:20:06.671689] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:36:10.822 [2024-07-25 00:20:06.671881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116808 ] 00:36:11.081 [2024-07-25 00:20:06.842454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.340 [2024-07-25 00:20:06.991776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:11.340 [2024-07-25 00:20:07.200776] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:36:11.340 [2024-07-25 00:20:07.200860] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:36:11.340 [2024-07-25 00:20:07.200882] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:11.907 [2024-07-25 00:20:07.748829] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # 
type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:12.474 00:20:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:36:12.474 [2024-07-25 00:20:08.169483] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:36:12.475 [2024-07-25 00:20:08.169657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116824 ] 00:36:12.475 [2024-07-25 00:20:08.338868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.733 [2024-07-25 00:20:08.486575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:12.990 [2024-07-25 00:20:08.704671] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:36:12.990 [2024-07-25 00:20:08.704775] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:36:12.990 [2024-07-25 00:20:08.704796] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:13.557 [2024-07-25 00:20:09.246880] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:13.816 00:20:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:36:13.816 00:20:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:13.816 00:20:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:36:13.816 00:20:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:36:13.816 00:20:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:36:13.816 00:20:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:13.816 00:20:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:36:13.816 00:20:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:13.816 00:20:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:13.816 00:20:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:13.816 [2024-07-25 00:20:09.673928] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:13.816 [2024-07-25 00:20:09.674115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116844 ] 00:36:14.075 [2024-07-25 00:20:09.846207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.333 [2024-07-25 00:20:09.993882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.527  Copying: 512/512 [B] (average 500 kBps) 00:36:15.527 00:36:15.527 ************************************ 00:36:15.527 END TEST dd_flag_nofollow_forced_aio 00:36:15.527 ************************************ 00:36:15.527 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ jn683hhb4n8tt6bgjnk6qwv12owj66p8cvvvv9of9hj0zm7trac4xsg5ojq4ra410fdyythb20nxtfq0so1l79wwxf2cuhj6j0nkk6tkdd1ah1a9fyb75v3nd4kf1hxny2u6mcvv81z8fvjcwm9iih7vvu4c4te2ns4fix7nnrydx0c1fqeyc21n89520tkg6gqqxqgy6hcdhhmiya3gawcw1tpzqozbuksncwx6iqm3colsjogcuog3o5qkfn92ntpo938r4jxxuywmnwgizchqnhv32kmzb7ij7c0cr6otxz9s86wk6j7it56jeu2g6snzjg0fzhsviwf51sj1vgtli9lj3mmexrqv21rb1okqvonth01hj6iicbmz8wrbb4g87ep0sfrypu0ok2w8a9xtqcn3od0l7cjc6m1asx62aflkw5lzaljr6rgb1h0slovynpa7dtjcz3d2jaetiv37eldfv01iw02tlwczid83fx2pwd06entxzir31mue == \j\n\6\8\3\h\h\b\4\n\8\t\t\6\b\g\j\n\k\6\q\w\v\1\2\o\w\j\6\6\p\8\c\v\v\v\v\9\o\f\9\h\j\0\z\m\7\t\r\a\c\4\x\s\g\5\o\j\q\4\r\a\4\1\0\f\d\y\y\t\h\b\2\0\n\x\t\f\q\0\s\o\1\l\7\9\w\w\x\f\2\c\u\h\j\6\j\0\n\k\k\6\t\k\d\d\1\a\h\1\a\9\f\y\b\7\5\v\3\n\d\4\k\f\1\h\x\n\y\2\u\6\m\c\v\v\8\1\z\8\f\v\j\c\w\m\9\i\i\h\7\v\v\u\4\c\4\t\e\2\n\s\4\f\i\x\7\n\n\r\y\d\x\0\c\1\f\q\e\y\c\2\1\n\8\9\5\2\0\t\k\g\6\g\q\q\x\q\g\y\6\h\c\d\h\h\m\i\y\a\3\g\a\w\c\w\1\t\p\z\q\o\z\b\u\k\s\n\c\w\x\6\i\q\m\3\c\o\l\s\j\o\g\c\u\o\g\3\o\5\q\k\f\n\9\2\n\t\p\o\9\3\8\r\4\j\x\x\u\y\w\m\n\w\g\i\z\c\h\q\n\h\v\3\2\k\m\z\b\7\i\j\7\c\0\c\r\6\o\t\x\z\9\s\8\6\w\k\6\j\7\i\t\5\6\j\e\u\2\g\6\s\n\z\j\g\0\f\z\h\s\v\i\w\f\5\1\s\j\1\v\g\t\l\i\9\l\j\3\m\m\e\x\r\q\v\2\1\r\b\1\o\k\q\v\o\n\t\h\0\1\h\j\6\i\i\c\b\m\z\8\w\r\b\b\4\g\8\7\e\p\0\s\f\r\y\p\u\0\o\k\2\w\8\a\9\x\t\q\c\n\3\o\d\0\l\7\c\j\c\6\m\1\a\s\x\6\2\a\f\l\k\w\5\l\z\a\l\j\r\6\r\g\b\1\h\0\s\l\o\v\y\n\p\a\7\d\t\j\c\z\3\d\2\j\a\e\t\i\v\3\7\e\l\d\f\v\0\1\i\w\0\2\t\l\w\c\z\i\d\8\3\f\x\2\p\w\d\0\6\e\n\t\x\z\i\r\3\1\m\u\e ]] 00:36:15.527 00:36:15.527 real 0m4.528s 00:36:15.527 user 0m3.627s 00:36:15.527 sys 0m0.582s 00:36:15.527 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:15.527 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:15.527 00:20:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:36:15.527 00:20:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:15.527 00:20:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:15.527 00:20:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:15.528 ************************************ 00:36:15.528 START TEST dd_flag_noatime_forced_aio 00:36:15.528 ************************************ 00:36:15.528 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:36:15.528 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:36:15.528 00:20:11 
spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:36:15.528 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:36:15.528 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:15.528 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:15.528 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:15.528 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721866810 00:36:15.528 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:15.528 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721866811 00:36:15.528 00:20:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:36:16.464 00:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:16.464 [2024-07-25 00:20:12.272845] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:36:16.464 [2024-07-25 00:20:12.273033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116891 ] 00:36:16.723 [2024-07-25 00:20:12.443166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.981 [2024-07-25 00:20:12.595268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.917  Copying: 512/512 [B] (average 500 kBps) 00:36:17.917 00:36:17.917 00:20:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:17.917 00:20:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721866810 )) 00:36:17.917 00:20:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:17.917 00:20:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721866811 )) 00:36:17.917 00:20:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:18.174 [2024-07-25 00:20:13.795211] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:18.174 [2024-07-25 00:20:13.795405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116915 ] 00:36:18.174 [2024-07-25 00:20:13.966491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.431 [2024-07-25 00:20:14.114633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.624  Copying: 512/512 [B] (average 500 kBps) 00:36:19.624 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721866814 )) 00:36:19.624 00:36:19.624 real 0m4.071s 00:36:19.624 user 0m2.448s 00:36:19.624 sys 0m0.400s 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:19.624 ************************************ 00:36:19.624 END TEST dd_flag_noatime_forced_aio 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:19.624 ************************************ 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:19.624 ************************************ 00:36:19.624 START TEST dd_flags_misc_forced_aio 00:36:19.624 ************************************ 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:19.624 00:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:36:19.624 [2024-07-25 00:20:15.383479] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:19.624 [2024-07-25 00:20:15.383661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116952 ] 00:36:19.883 [2024-07-25 00:20:15.544866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.883 [2024-07-25 00:20:15.693071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.077  Copying: 512/512 [B] (average 500 kBps) 00:36:21.077 00:36:21.077 00:20:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2fhnhusrb9h6ch6h5lulxb1akkjcn00jayvti1ta33hpmedk3ew48pj337vaw566ltutnwmeenxrac86xuopqp0j46ndyp9wb0xze65x0mki4ucfw8d8o3mrwd8jujzuq79937c5w13cgx7onkyxo7i81thryrjuuvakt74ext0ltk9nxot9c651e1j8v6u7q29a26296vjjmlz7vdctfb3zj2tujnkjedkgv2ynizse9cgw7qp0xlqsy4pfbqag6jle3d9uljmics2bvcwba27k30udsvrz2il1e5phg57ohmnxm00tzrc2c739vs2mn2ifqd56f2tzmhj9glpgqfpjsvcgje57hfkk5i5a7s5pjme0ehoe3xecs7zgdjfxxeel8bl2039gmtjp6lfy5edky5pwnzln7f86jd862iumovb7zoobdzs509b4hzoxc9wz76ypm6cthe9vpeoa5tegxsz4rjd7zh73zzozwrq2j88ezgtz39jv5poj8p4x == \2\f\h\n\h\u\s\r\b\9\h\6\c\h\6\h\5\l\u\l\x\b\1\a\k\k\j\c\n\0\0\j\a\y\v\t\i\1\t\a\3\3\h\p\m\e\d\k\3\e\w\4\8\p\j\3\3\7\v\a\w\5\6\6\l\t\u\t\n\w\m\e\e\n\x\r\a\c\8\6\x\u\o\p\q\p\0\j\4\6\n\d\y\p\9\w\b\0\x\z\e\6\5\x\0\m\k\i\4\u\c\f\w\8\d\8\o\3\m\r\w\d\8\j\u\j\z\u\q\7\9\9\3\7\c\5\w\1\3\c\g\x\7\o\n\k\y\x\o\7\i\8\1\t\h\r\y\r\j\u\u\v\a\k\t\7\4\e\x\t\0\l\t\k\9\n\x\o\t\9\c\6\5\1\e\1\j\8\v\6\u\7\q\2\9\a\2\6\2\9\6\v\j\j\m\l\z\7\v\d\c\t\f\b\3\z\j\2\t\u\j\n\k\j\e\d\k\g\v\2\y\n\i\z\s\e\9\c\g\w\7\q\p\0\x\l\q\s\y\4\p\f\b\q\a\g\6\j\l\e\3\d\9\u\l\j\m\i\c\s\2\b\v\c\w\b\a\2\7\k\3\0\u\d\s\v\r\z\2\i\l\1\e\5\p\h\g\5\7\o\h\m\n\x\m\0\0\t\z\r\c\2\c\7\3\9\v\s\2\m\n\2\i\f\q\d\5\6\f\2\t\z\m\h\j\9\g\l\p\g\q\f\p\j\s\v\c\g\j\e\5\7\h\f\k\k\5\i\5\a\7\s\5\p\j\m\e\0\e\h\o\e\3\x\e\c\s\7\z\g\d\j\f\x\x\e\e\l\8\b\l\2\0\3\9\g\m\t\j\p\6\l\f\y\5\e\d\k\y\5\p\w\n\z\l\n\7\f\8\6\j\d\8\6\2\i\u\m\o\v\b\7\z\o\o\b\d\z\s\5\0\9\b\4\h\z\o\x\c\9\w\z\7\6\y\p\m\6\c\t\h\e\9\v\p\e\o\a\5\t\e\g\x\s\z\4\r\j\d\7\z\h\7\3\z\z\o\z\w\r\q\2\j\8\8\e\z\g\t\z\3\9\j\v\5\p\o\j\8\p\4\x ]] 00:36:21.077 00:20:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:21.077 00:20:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:36:21.077 [2024-07-25 00:20:16.886038] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:21.077 [2024-07-25 00:20:16.886218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116973 ] 00:36:21.336 [2024-07-25 00:20:17.058563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.595 [2024-07-25 00:20:17.208434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:22.530  Copying: 512/512 [B] (average 500 kBps) 00:36:22.530 00:36:22.531 00:20:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2fhnhusrb9h6ch6h5lulxb1akkjcn00jayvti1ta33hpmedk3ew48pj337vaw566ltutnwmeenxrac86xuopqp0j46ndyp9wb0xze65x0mki4ucfw8d8o3mrwd8jujzuq79937c5w13cgx7onkyxo7i81thryrjuuvakt74ext0ltk9nxot9c651e1j8v6u7q29a26296vjjmlz7vdctfb3zj2tujnkjedkgv2ynizse9cgw7qp0xlqsy4pfbqag6jle3d9uljmics2bvcwba27k30udsvrz2il1e5phg57ohmnxm00tzrc2c739vs2mn2ifqd56f2tzmhj9glpgqfpjsvcgje57hfkk5i5a7s5pjme0ehoe3xecs7zgdjfxxeel8bl2039gmtjp6lfy5edky5pwnzln7f86jd862iumovb7zoobdzs509b4hzoxc9wz76ypm6cthe9vpeoa5tegxsz4rjd7zh73zzozwrq2j88ezgtz39jv5poj8p4x == \2\f\h\n\h\u\s\r\b\9\h\6\c\h\6\h\5\l\u\l\x\b\1\a\k\k\j\c\n\0\0\j\a\y\v\t\i\1\t\a\3\3\h\p\m\e\d\k\3\e\w\4\8\p\j\3\3\7\v\a\w\5\6\6\l\t\u\t\n\w\m\e\e\n\x\r\a\c\8\6\x\u\o\p\q\p\0\j\4\6\n\d\y\p\9\w\b\0\x\z\e\6\5\x\0\m\k\i\4\u\c\f\w\8\d\8\o\3\m\r\w\d\8\j\u\j\z\u\q\7\9\9\3\7\c\5\w\1\3\c\g\x\7\o\n\k\y\x\o\7\i\8\1\t\h\r\y\r\j\u\u\v\a\k\t\7\4\e\x\t\0\l\t\k\9\n\x\o\t\9\c\6\5\1\e\1\j\8\v\6\u\7\q\2\9\a\2\6\2\9\6\v\j\j\m\l\z\7\v\d\c\t\f\b\3\z\j\2\t\u\j\n\k\j\e\d\k\g\v\2\y\n\i\z\s\e\9\c\g\w\7\q\p\0\x\l\q\s\y\4\p\f\b\q\a\g\6\j\l\e\3\d\9\u\l\j\m\i\c\s\2\b\v\c\w\b\a\2\7\k\3\0\u\d\s\v\r\z\2\i\l\1\e\5\p\h\g\5\7\o\h\m\n\x\m\0\0\t\z\r\c\2\c\7\3\9\v\s\2\m\n\2\i\f\q\d\5\6\f\2\t\z\m\h\j\9\g\l\p\g\q\f\p\j\s\v\c\g\j\e\5\7\h\f\k\k\5\i\5\a\7\s\5\p\j\m\e\0\e\h\o\e\3\x\e\c\s\7\z\g\d\j\f\x\x\e\e\l\8\b\l\2\0\3\9\g\m\t\j\p\6\l\f\y\5\e\d\k\y\5\p\w\n\z\l\n\7\f\8\6\j\d\8\6\2\i\u\m\o\v\b\7\z\o\o\b\d\z\s\5\0\9\b\4\h\z\o\x\c\9\w\z\7\6\y\p\m\6\c\t\h\e\9\v\p\e\o\a\5\t\e\g\x\s\z\4\r\j\d\7\z\h\7\3\z\z\o\z\w\r\q\2\j\8\8\e\z\g\t\z\3\9\j\v\5\p\o\j\8\p\4\x ]] 00:36:22.531 00:20:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:22.531 00:20:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:36:22.531 [2024-07-25 00:20:18.398236] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:22.531 [2024-07-25 00:20:18.398421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116987 ] 00:36:22.789 [2024-07-25 00:20:18.568064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.047 [2024-07-25 00:20:18.719496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.258  Copying: 512/512 [B] (average 62 kBps) 00:36:24.258 00:36:24.258 00:20:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2fhnhusrb9h6ch6h5lulxb1akkjcn00jayvti1ta33hpmedk3ew48pj337vaw566ltutnwmeenxrac86xuopqp0j46ndyp9wb0xze65x0mki4ucfw8d8o3mrwd8jujzuq79937c5w13cgx7onkyxo7i81thryrjuuvakt74ext0ltk9nxot9c651e1j8v6u7q29a26296vjjmlz7vdctfb3zj2tujnkjedkgv2ynizse9cgw7qp0xlqsy4pfbqag6jle3d9uljmics2bvcwba27k30udsvrz2il1e5phg57ohmnxm00tzrc2c739vs2mn2ifqd56f2tzmhj9glpgqfpjsvcgje57hfkk5i5a7s5pjme0ehoe3xecs7zgdjfxxeel8bl2039gmtjp6lfy5edky5pwnzln7f86jd862iumovb7zoobdzs509b4hzoxc9wz76ypm6cthe9vpeoa5tegxsz4rjd7zh73zzozwrq2j88ezgtz39jv5poj8p4x == \2\f\h\n\h\u\s\r\b\9\h\6\c\h\6\h\5\l\u\l\x\b\1\a\k\k\j\c\n\0\0\j\a\y\v\t\i\1\t\a\3\3\h\p\m\e\d\k\3\e\w\4\8\p\j\3\3\7\v\a\w\5\6\6\l\t\u\t\n\w\m\e\e\n\x\r\a\c\8\6\x\u\o\p\q\p\0\j\4\6\n\d\y\p\9\w\b\0\x\z\e\6\5\x\0\m\k\i\4\u\c\f\w\8\d\8\o\3\m\r\w\d\8\j\u\j\z\u\q\7\9\9\3\7\c\5\w\1\3\c\g\x\7\o\n\k\y\x\o\7\i\8\1\t\h\r\y\r\j\u\u\v\a\k\t\7\4\e\x\t\0\l\t\k\9\n\x\o\t\9\c\6\5\1\e\1\j\8\v\6\u\7\q\2\9\a\2\6\2\9\6\v\j\j\m\l\z\7\v\d\c\t\f\b\3\z\j\2\t\u\j\n\k\j\e\d\k\g\v\2\y\n\i\z\s\e\9\c\g\w\7\q\p\0\x\l\q\s\y\4\p\f\b\q\a\g\6\j\l\e\3\d\9\u\l\j\m\i\c\s\2\b\v\c\w\b\a\2\7\k\3\0\u\d\s\v\r\z\2\i\l\1\e\5\p\h\g\5\7\o\h\m\n\x\m\0\0\t\z\r\c\2\c\7\3\9\v\s\2\m\n\2\i\f\q\d\5\6\f\2\t\z\m\h\j\9\g\l\p\g\q\f\p\j\s\v\c\g\j\e\5\7\h\f\k\k\5\i\5\a\7\s\5\p\j\m\e\0\e\h\o\e\3\x\e\c\s\7\z\g\d\j\f\x\x\e\e\l\8\b\l\2\0\3\9\g\m\t\j\p\6\l\f\y\5\e\d\k\y\5\p\w\n\z\l\n\7\f\8\6\j\d\8\6\2\i\u\m\o\v\b\7\z\o\o\b\d\z\s\5\0\9\b\4\h\z\o\x\c\9\w\z\7\6\y\p\m\6\c\t\h\e\9\v\p\e\o\a\5\t\e\g\x\s\z\4\r\j\d\7\z\h\7\3\z\z\o\z\w\r\q\2\j\8\8\e\z\g\t\z\3\9\j\v\5\p\o\j\8\p\4\x ]] 00:36:24.258 00:20:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:24.258 00:20:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:36:24.258 [2024-07-25 00:20:19.907734] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:24.258 [2024-07-25 00:20:19.907889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117007 ] 00:36:24.258 [2024-07-25 00:20:20.059173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.527 [2024-07-25 00:20:20.213006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.732  Copying: 512/512 [B] (average 100 kBps) 00:36:25.732 00:36:25.732 00:20:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2fhnhusrb9h6ch6h5lulxb1akkjcn00jayvti1ta33hpmedk3ew48pj337vaw566ltutnwmeenxrac86xuopqp0j46ndyp9wb0xze65x0mki4ucfw8d8o3mrwd8jujzuq79937c5w13cgx7onkyxo7i81thryrjuuvakt74ext0ltk9nxot9c651e1j8v6u7q29a26296vjjmlz7vdctfb3zj2tujnkjedkgv2ynizse9cgw7qp0xlqsy4pfbqag6jle3d9uljmics2bvcwba27k30udsvrz2il1e5phg57ohmnxm00tzrc2c739vs2mn2ifqd56f2tzmhj9glpgqfpjsvcgje57hfkk5i5a7s5pjme0ehoe3xecs7zgdjfxxeel8bl2039gmtjp6lfy5edky5pwnzln7f86jd862iumovb7zoobdzs509b4hzoxc9wz76ypm6cthe9vpeoa5tegxsz4rjd7zh73zzozwrq2j88ezgtz39jv5poj8p4x == \2\f\h\n\h\u\s\r\b\9\h\6\c\h\6\h\5\l\u\l\x\b\1\a\k\k\j\c\n\0\0\j\a\y\v\t\i\1\t\a\3\3\h\p\m\e\d\k\3\e\w\4\8\p\j\3\3\7\v\a\w\5\6\6\l\t\u\t\n\w\m\e\e\n\x\r\a\c\8\6\x\u\o\p\q\p\0\j\4\6\n\d\y\p\9\w\b\0\x\z\e\6\5\x\0\m\k\i\4\u\c\f\w\8\d\8\o\3\m\r\w\d\8\j\u\j\z\u\q\7\9\9\3\7\c\5\w\1\3\c\g\x\7\o\n\k\y\x\o\7\i\8\1\t\h\r\y\r\j\u\u\v\a\k\t\7\4\e\x\t\0\l\t\k\9\n\x\o\t\9\c\6\5\1\e\1\j\8\v\6\u\7\q\2\9\a\2\6\2\9\6\v\j\j\m\l\z\7\v\d\c\t\f\b\3\z\j\2\t\u\j\n\k\j\e\d\k\g\v\2\y\n\i\z\s\e\9\c\g\w\7\q\p\0\x\l\q\s\y\4\p\f\b\q\a\g\6\j\l\e\3\d\9\u\l\j\m\i\c\s\2\b\v\c\w\b\a\2\7\k\3\0\u\d\s\v\r\z\2\i\l\1\e\5\p\h\g\5\7\o\h\m\n\x\m\0\0\t\z\r\c\2\c\7\3\9\v\s\2\m\n\2\i\f\q\d\5\6\f\2\t\z\m\h\j\9\g\l\p\g\q\f\p\j\s\v\c\g\j\e\5\7\h\f\k\k\5\i\5\a\7\s\5\p\j\m\e\0\e\h\o\e\3\x\e\c\s\7\z\g\d\j\f\x\x\e\e\l\8\b\l\2\0\3\9\g\m\t\j\p\6\l\f\y\5\e\d\k\y\5\p\w\n\z\l\n\7\f\8\6\j\d\8\6\2\i\u\m\o\v\b\7\z\o\o\b\d\z\s\5\0\9\b\4\h\z\o\x\c\9\w\z\7\6\y\p\m\6\c\t\h\e\9\v\p\e\o\a\5\t\e\g\x\s\z\4\r\j\d\7\z\h\7\3\z\z\o\z\w\r\q\2\j\8\8\e\z\g\t\z\3\9\j\v\5\p\o\j\8\p\4\x ]] 00:36:25.732 00:20:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:36:25.732 00:20:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:36:25.732 00:20:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:25.732 00:20:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:25.732 00:20:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:25.732 00:20:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:36:25.732 [2024-07-25 00:20:21.421220] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:25.732 [2024-07-25 00:20:21.421402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117021 ] 00:36:25.732 [2024-07-25 00:20:21.587711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.990 [2024-07-25 00:20:21.740482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:27.180  Copying: 512/512 [B] (average 500 kBps) 00:36:27.180 00:36:27.180 00:20:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zj8a9h267ho3xz5n4atcwrr0cl5oknmr7r4kvq1o8wkw76y46r4fqmi6v0tkfut9dwsiwsj6rndxb5w5zc3kwwfhj9viu84wkuuz5axe4776foyn6q21ovv4uhyhxs9mxooq0rcol0h797mln04pg5ydsl924bsdeg45lzl48w0pa15wxprd4o2esj5lb5gs5glm7rvwwmgk3jqvhgviym1pbyk7hhcl410qs3bml27i40b7iifl754vi7sve3819yi91r8aqhk00xx0myu5b24sjqegozgn1t9gf3w6yf7b9knuxclssj12mvy82xz61cfmge3ro95boxv8uv3cpdokmbdris9aj5bh2tsocww9uj6kpj83uv8bhjxw2mxkfiim0i2a6srmwkgmgatf4fibf5w3lxpccqyl9eg3zwu428hvom115fqetftlq6k12o6iswinvzggwk33fmfujaflcuzdqf96too44j793axpnupi0azbmisz01zrg0nw == \z\j\8\a\9\h\2\6\7\h\o\3\x\z\5\n\4\a\t\c\w\r\r\0\c\l\5\o\k\n\m\r\7\r\4\k\v\q\1\o\8\w\k\w\7\6\y\4\6\r\4\f\q\m\i\6\v\0\t\k\f\u\t\9\d\w\s\i\w\s\j\6\r\n\d\x\b\5\w\5\z\c\3\k\w\w\f\h\j\9\v\i\u\8\4\w\k\u\u\z\5\a\x\e\4\7\7\6\f\o\y\n\6\q\2\1\o\v\v\4\u\h\y\h\x\s\9\m\x\o\o\q\0\r\c\o\l\0\h\7\9\7\m\l\n\0\4\p\g\5\y\d\s\l\9\2\4\b\s\d\e\g\4\5\l\z\l\4\8\w\0\p\a\1\5\w\x\p\r\d\4\o\2\e\s\j\5\l\b\5\g\s\5\g\l\m\7\r\v\w\w\m\g\k\3\j\q\v\h\g\v\i\y\m\1\p\b\y\k\7\h\h\c\l\4\1\0\q\s\3\b\m\l\2\7\i\4\0\b\7\i\i\f\l\7\5\4\v\i\7\s\v\e\3\8\1\9\y\i\9\1\r\8\a\q\h\k\0\0\x\x\0\m\y\u\5\b\2\4\s\j\q\e\g\o\z\g\n\1\t\9\g\f\3\w\6\y\f\7\b\9\k\n\u\x\c\l\s\s\j\1\2\m\v\y\8\2\x\z\6\1\c\f\m\g\e\3\r\o\9\5\b\o\x\v\8\u\v\3\c\p\d\o\k\m\b\d\r\i\s\9\a\j\5\b\h\2\t\s\o\c\w\w\9\u\j\6\k\p\j\8\3\u\v\8\b\h\j\x\w\2\m\x\k\f\i\i\m\0\i\2\a\6\s\r\m\w\k\g\m\g\a\t\f\4\f\i\b\f\5\w\3\l\x\p\c\c\q\y\l\9\e\g\3\z\w\u\4\2\8\h\v\o\m\1\1\5\f\q\e\t\f\t\l\q\6\k\1\2\o\6\i\s\w\i\n\v\z\g\g\w\k\3\3\f\m\f\u\j\a\f\l\c\u\z\d\q\f\9\6\t\o\o\4\4\j\7\9\3\a\x\p\n\u\p\i\0\a\z\b\m\i\s\z\0\1\z\r\g\0\n\w ]] 00:36:27.180 00:20:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:27.180 00:20:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:36:27.180 [2024-07-25 00:20:22.926633] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:27.180 [2024-07-25 00:20:22.926876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117036 ] 00:36:27.438 [2024-07-25 00:20:23.097885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.438 [2024-07-25 00:20:23.247589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.630  Copying: 512/512 [B] (average 500 kBps) 00:36:28.630 00:36:28.630 00:20:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zj8a9h267ho3xz5n4atcwrr0cl5oknmr7r4kvq1o8wkw76y46r4fqmi6v0tkfut9dwsiwsj6rndxb5w5zc3kwwfhj9viu84wkuuz5axe4776foyn6q21ovv4uhyhxs9mxooq0rcol0h797mln04pg5ydsl924bsdeg45lzl48w0pa15wxprd4o2esj5lb5gs5glm7rvwwmgk3jqvhgviym1pbyk7hhcl410qs3bml27i40b7iifl754vi7sve3819yi91r8aqhk00xx0myu5b24sjqegozgn1t9gf3w6yf7b9knuxclssj12mvy82xz61cfmge3ro95boxv8uv3cpdokmbdris9aj5bh2tsocww9uj6kpj83uv8bhjxw2mxkfiim0i2a6srmwkgmgatf4fibf5w3lxpccqyl9eg3zwu428hvom115fqetftlq6k12o6iswinvzggwk33fmfujaflcuzdqf96too44j793axpnupi0azbmisz01zrg0nw == \z\j\8\a\9\h\2\6\7\h\o\3\x\z\5\n\4\a\t\c\w\r\r\0\c\l\5\o\k\n\m\r\7\r\4\k\v\q\1\o\8\w\k\w\7\6\y\4\6\r\4\f\q\m\i\6\v\0\t\k\f\u\t\9\d\w\s\i\w\s\j\6\r\n\d\x\b\5\w\5\z\c\3\k\w\w\f\h\j\9\v\i\u\8\4\w\k\u\u\z\5\a\x\e\4\7\7\6\f\o\y\n\6\q\2\1\o\v\v\4\u\h\y\h\x\s\9\m\x\o\o\q\0\r\c\o\l\0\h\7\9\7\m\l\n\0\4\p\g\5\y\d\s\l\9\2\4\b\s\d\e\g\4\5\l\z\l\4\8\w\0\p\a\1\5\w\x\p\r\d\4\o\2\e\s\j\5\l\b\5\g\s\5\g\l\m\7\r\v\w\w\m\g\k\3\j\q\v\h\g\v\i\y\m\1\p\b\y\k\7\h\h\c\l\4\1\0\q\s\3\b\m\l\2\7\i\4\0\b\7\i\i\f\l\7\5\4\v\i\7\s\v\e\3\8\1\9\y\i\9\1\r\8\a\q\h\k\0\0\x\x\0\m\y\u\5\b\2\4\s\j\q\e\g\o\z\g\n\1\t\9\g\f\3\w\6\y\f\7\b\9\k\n\u\x\c\l\s\s\j\1\2\m\v\y\8\2\x\z\6\1\c\f\m\g\e\3\r\o\9\5\b\o\x\v\8\u\v\3\c\p\d\o\k\m\b\d\r\i\s\9\a\j\5\b\h\2\t\s\o\c\w\w\9\u\j\6\k\p\j\8\3\u\v\8\b\h\j\x\w\2\m\x\k\f\i\i\m\0\i\2\a\6\s\r\m\w\k\g\m\g\a\t\f\4\f\i\b\f\5\w\3\l\x\p\c\c\q\y\l\9\e\g\3\z\w\u\4\2\8\h\v\o\m\1\1\5\f\q\e\t\f\t\l\q\6\k\1\2\o\6\i\s\w\i\n\v\z\g\g\w\k\3\3\f\m\f\u\j\a\f\l\c\u\z\d\q\f\9\6\t\o\o\4\4\j\7\9\3\a\x\p\n\u\p\i\0\a\z\b\m\i\s\z\0\1\z\r\g\0\n\w ]] 00:36:28.630 00:20:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:28.630 00:20:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:36:28.630 [2024-07-25 00:20:24.437766] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:28.630 [2024-07-25 00:20:24.437961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117055 ] 00:36:28.888 [2024-07-25 00:20:24.608938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.146 [2024-07-25 00:20:24.759710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.080  Copying: 512/512 [B] (average 100 kBps) 00:36:30.080 00:36:30.080 00:20:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zj8a9h267ho3xz5n4atcwrr0cl5oknmr7r4kvq1o8wkw76y46r4fqmi6v0tkfut9dwsiwsj6rndxb5w5zc3kwwfhj9viu84wkuuz5axe4776foyn6q21ovv4uhyhxs9mxooq0rcol0h797mln04pg5ydsl924bsdeg45lzl48w0pa15wxprd4o2esj5lb5gs5glm7rvwwmgk3jqvhgviym1pbyk7hhcl410qs3bml27i40b7iifl754vi7sve3819yi91r8aqhk00xx0myu5b24sjqegozgn1t9gf3w6yf7b9knuxclssj12mvy82xz61cfmge3ro95boxv8uv3cpdokmbdris9aj5bh2tsocww9uj6kpj83uv8bhjxw2mxkfiim0i2a6srmwkgmgatf4fibf5w3lxpccqyl9eg3zwu428hvom115fqetftlq6k12o6iswinvzggwk33fmfujaflcuzdqf96too44j793axpnupi0azbmisz01zrg0nw == \z\j\8\a\9\h\2\6\7\h\o\3\x\z\5\n\4\a\t\c\w\r\r\0\c\l\5\o\k\n\m\r\7\r\4\k\v\q\1\o\8\w\k\w\7\6\y\4\6\r\4\f\q\m\i\6\v\0\t\k\f\u\t\9\d\w\s\i\w\s\j\6\r\n\d\x\b\5\w\5\z\c\3\k\w\w\f\h\j\9\v\i\u\8\4\w\k\u\u\z\5\a\x\e\4\7\7\6\f\o\y\n\6\q\2\1\o\v\v\4\u\h\y\h\x\s\9\m\x\o\o\q\0\r\c\o\l\0\h\7\9\7\m\l\n\0\4\p\g\5\y\d\s\l\9\2\4\b\s\d\e\g\4\5\l\z\l\4\8\w\0\p\a\1\5\w\x\p\r\d\4\o\2\e\s\j\5\l\b\5\g\s\5\g\l\m\7\r\v\w\w\m\g\k\3\j\q\v\h\g\v\i\y\m\1\p\b\y\k\7\h\h\c\l\4\1\0\q\s\3\b\m\l\2\7\i\4\0\b\7\i\i\f\l\7\5\4\v\i\7\s\v\e\3\8\1\9\y\i\9\1\r\8\a\q\h\k\0\0\x\x\0\m\y\u\5\b\2\4\s\j\q\e\g\o\z\g\n\1\t\9\g\f\3\w\6\y\f\7\b\9\k\n\u\x\c\l\s\s\j\1\2\m\v\y\8\2\x\z\6\1\c\f\m\g\e\3\r\o\9\5\b\o\x\v\8\u\v\3\c\p\d\o\k\m\b\d\r\i\s\9\a\j\5\b\h\2\t\s\o\c\w\w\9\u\j\6\k\p\j\8\3\u\v\8\b\h\j\x\w\2\m\x\k\f\i\i\m\0\i\2\a\6\s\r\m\w\k\g\m\g\a\t\f\4\f\i\b\f\5\w\3\l\x\p\c\c\q\y\l\9\e\g\3\z\w\u\4\2\8\h\v\o\m\1\1\5\f\q\e\t\f\t\l\q\6\k\1\2\o\6\i\s\w\i\n\v\z\g\g\w\k\3\3\f\m\f\u\j\a\f\l\c\u\z\d\q\f\9\6\t\o\o\4\4\j\7\9\3\a\x\p\n\u\p\i\0\a\z\b\m\i\s\z\0\1\z\r\g\0\n\w ]] 00:36:30.080 00:20:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:30.080 00:20:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:36:30.339 [2024-07-25 00:20:25.955003] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:30.339 [2024-07-25 00:20:25.955176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117069 ] 00:36:30.339 [2024-07-25 00:20:26.126371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.598 [2024-07-25 00:20:26.278564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:31.792  Copying: 512/512 [B] (average 125 kBps) 00:36:31.792 00:36:31.792 ************************************ 00:36:31.792 END TEST dd_flags_misc_forced_aio 00:36:31.792 ************************************ 00:36:31.792 00:20:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zj8a9h267ho3xz5n4atcwrr0cl5oknmr7r4kvq1o8wkw76y46r4fqmi6v0tkfut9dwsiwsj6rndxb5w5zc3kwwfhj9viu84wkuuz5axe4776foyn6q21ovv4uhyhxs9mxooq0rcol0h797mln04pg5ydsl924bsdeg45lzl48w0pa15wxprd4o2esj5lb5gs5glm7rvwwmgk3jqvhgviym1pbyk7hhcl410qs3bml27i40b7iifl754vi7sve3819yi91r8aqhk00xx0myu5b24sjqegozgn1t9gf3w6yf7b9knuxclssj12mvy82xz61cfmge3ro95boxv8uv3cpdokmbdris9aj5bh2tsocww9uj6kpj83uv8bhjxw2mxkfiim0i2a6srmwkgmgatf4fibf5w3lxpccqyl9eg3zwu428hvom115fqetftlq6k12o6iswinvzggwk33fmfujaflcuzdqf96too44j793axpnupi0azbmisz01zrg0nw == \z\j\8\a\9\h\2\6\7\h\o\3\x\z\5\n\4\a\t\c\w\r\r\0\c\l\5\o\k\n\m\r\7\r\4\k\v\q\1\o\8\w\k\w\7\6\y\4\6\r\4\f\q\m\i\6\v\0\t\k\f\u\t\9\d\w\s\i\w\s\j\6\r\n\d\x\b\5\w\5\z\c\3\k\w\w\f\h\j\9\v\i\u\8\4\w\k\u\u\z\5\a\x\e\4\7\7\6\f\o\y\n\6\q\2\1\o\v\v\4\u\h\y\h\x\s\9\m\x\o\o\q\0\r\c\o\l\0\h\7\9\7\m\l\n\0\4\p\g\5\y\d\s\l\9\2\4\b\s\d\e\g\4\5\l\z\l\4\8\w\0\p\a\1\5\w\x\p\r\d\4\o\2\e\s\j\5\l\b\5\g\s\5\g\l\m\7\r\v\w\w\m\g\k\3\j\q\v\h\g\v\i\y\m\1\p\b\y\k\7\h\h\c\l\4\1\0\q\s\3\b\m\l\2\7\i\4\0\b\7\i\i\f\l\7\5\4\v\i\7\s\v\e\3\8\1\9\y\i\9\1\r\8\a\q\h\k\0\0\x\x\0\m\y\u\5\b\2\4\s\j\q\e\g\o\z\g\n\1\t\9\g\f\3\w\6\y\f\7\b\9\k\n\u\x\c\l\s\s\j\1\2\m\v\y\8\2\x\z\6\1\c\f\m\g\e\3\r\o\9\5\b\o\x\v\8\u\v\3\c\p\d\o\k\m\b\d\r\i\s\9\a\j\5\b\h\2\t\s\o\c\w\w\9\u\j\6\k\p\j\8\3\u\v\8\b\h\j\x\w\2\m\x\k\f\i\i\m\0\i\2\a\6\s\r\m\w\k\g\m\g\a\t\f\4\f\i\b\f\5\w\3\l\x\p\c\c\q\y\l\9\e\g\3\z\w\u\4\2\8\h\v\o\m\1\1\5\f\q\e\t\f\t\l\q\6\k\1\2\o\6\i\s\w\i\n\v\z\g\g\w\k\3\3\f\m\f\u\j\a\f\l\c\u\z\d\q\f\9\6\t\o\o\4\4\j\7\9\3\a\x\p\n\u\p\i\0\a\z\b\m\i\s\z\0\1\z\r\g\0\n\w ]] 00:36:31.792 00:36:31.792 real 0m12.106s 00:36:31.792 user 0m9.644s 00:36:31.792 sys 0m1.537s 00:36:31.792 00:20:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:31.792 00:20:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:31.792 00:20:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:36:31.792 00:20:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:36:31.792 00:20:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:36:31.792 00:36:31.792 real 0m51.056s 00:36:31.792 user 0m38.905s 00:36:31.792 sys 0m6.557s 00:36:31.792 00:20:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:31.792 ************************************ 00:36:31.792 END TEST spdk_dd_posix 00:36:31.792 ************************************ 00:36:31.792 00:20:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:31.792 
00:20:27 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:36:31.792 00:20:27 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:31.792 00:20:27 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:31.792 00:20:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:36:31.792 ************************************ 00:36:31.792 START TEST spdk_dd_malloc 00:36:31.792 ************************************ 00:36:31.792 00:20:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:36:31.792 * Looking for test storage... 00:36:31.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:36:31.792 00:20:27 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:31.792 00:20:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:31.792 00:20:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:31.792 00:20:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:31.792 00:20:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:36:31.792 00:20:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # export PATH 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:36:31.793 ************************************ 00:36:31.793 START TEST dd_malloc_copy 00:36:31.793 ************************************ 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:36:31.793 00:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- 
common/autotest_common.sh@10 -- # set +x 00:36:31.793 { 00:36:31.793 "subsystems": [ 00:36:31.793 { 00:36:31.793 "subsystem": "bdev", 00:36:31.793 "config": [ 00:36:31.793 { 00:36:31.793 "params": { 00:36:31.793 "block_size": 512, 00:36:31.793 "num_blocks": 1048576, 00:36:31.793 "name": "malloc0" 00:36:31.793 }, 00:36:31.793 "method": "bdev_malloc_create" 00:36:31.793 }, 00:36:31.793 { 00:36:31.793 "params": { 00:36:31.793 "block_size": 512, 00:36:31.793 "num_blocks": 1048576, 00:36:31.793 "name": "malloc1" 00:36:31.793 }, 00:36:31.793 "method": "bdev_malloc_create" 00:36:31.793 }, 00:36:31.793 { 00:36:31.793 "method": "bdev_wait_for_examine" 00:36:31.793 } 00:36:31.793 ] 00:36:31.793 } 00:36:31.793 ] 00:36:31.793 } 00:36:32.051 [2024-07-25 00:20:27.671376] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:36:32.051 [2024-07-25 00:20:27.671513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117150 ] 00:36:32.051 [2024-07-25 00:20:27.827442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.310 [2024-07-25 00:20:27.983780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.622  Copying: 192/512 [MB] (192 MBps) Copying: 383/512 [MB] (191 MBps) Copying: 512/512 [MB] (average 192 MBps) 00:36:38.622 00:36:38.622 00:20:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:36:38.622 00:20:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:36:38.622 00:20:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:36:38.622 00:20:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:36:38.622 { 00:36:38.622 "subsystems": [ 00:36:38.622 { 00:36:38.622 "subsystem": "bdev", 00:36:38.622 "config": [ 00:36:38.622 { 00:36:38.622 "params": { 00:36:38.622 "block_size": 512, 00:36:38.622 "num_blocks": 1048576, 00:36:38.622 "name": "malloc0" 00:36:38.622 }, 00:36:38.622 "method": "bdev_malloc_create" 00:36:38.622 }, 00:36:38.622 { 00:36:38.622 "params": { 00:36:38.622 "block_size": 512, 00:36:38.622 "num_blocks": 1048576, 00:36:38.622 "name": "malloc1" 00:36:38.622 }, 00:36:38.622 "method": "bdev_malloc_create" 00:36:38.622 }, 00:36:38.622 { 00:36:38.622 "method": "bdev_wait_for_examine" 00:36:38.622 } 00:36:38.622 ] 00:36:38.622 } 00:36:38.622 ] 00:36:38.622 } 00:36:38.880 [2024-07-25 00:20:34.499304] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:38.880 [2024-07-25 00:20:34.499480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117227 ] 00:36:38.880 [2024-07-25 00:20:34.670513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.139 [2024-07-25 00:20:34.822888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:45.467  Copying: 193/512 [MB] (193 MBps) Copying: 387/512 [MB] (194 MBps) Copying: 512/512 [MB] (average 193 MBps) 00:36:45.467 00:36:45.467 00:36:45.467 real 0m13.632s 00:36:45.467 user 0m12.538s 00:36:45.467 sys 0m0.902s 00:36:45.467 00:20:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:45.467 00:20:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:36:45.467 ************************************ 00:36:45.467 END TEST dd_malloc_copy 00:36:45.467 ************************************ 00:36:45.467 00:36:45.467 real 0m13.771s 00:36:45.467 user 0m12.588s 00:36:45.467 sys 0m0.991s 00:36:45.467 00:20:41 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:45.467 ************************************ 00:36:45.467 END TEST spdk_dd_malloc 00:36:45.467 ************************************ 00:36:45.467 00:20:41 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:36:45.467 00:20:41 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:36:45.467 00:20:41 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:45.467 00:20:41 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:45.467 00:20:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:36:45.726 ************************************ 00:36:45.726 START TEST spdk_dd_bdev_to_bdev 00:36:45.726 ************************************ 00:36:45.726 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:36:45.726 * Looking for test storage... 
00:36:45.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:20:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 [2024-07-25 00:20:41.493977] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
00:36:45.726 [2024-07-25 00:20:41.494159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117359 ] 00:36:45.985 [2024-07-25 00:20:41.665308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.985 [2024-07-25 00:20:41.811526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.488  Copying: 256/256 [MB] (average 1910 MBps) 00:36:47.488 00:36:47.488 00:20:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:47.488 00:20:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:47.488 00:20:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:36:47.488 00:20:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:36:47.488 00:20:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:36:47.488 00:20:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:36:47.488 00:20:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:47.488 00:20:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:36:47.488 ************************************ 00:36:47.488 START TEST dd_inflate_file 00:36:47.488 ************************************ 00:36:47.488 00:20:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:36:47.488 [2024-07-25 00:20:43.157297] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:47.488 [2024-07-25 00:20:43.157484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117375 ] 00:36:47.488 [2024-07-25 00:20:43.327868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.746 [2024-07-25 00:20:43.489853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.938  Copying: 64/64 [MB] (average 1828 MBps) 00:36:48.939 00:36:48.939 00:36:48.939 real 0m1.541s 00:36:48.939 user 0m1.225s 00:36:48.939 sys 0m0.202s 00:36:48.939 ************************************ 00:36:48.939 END TEST dd_inflate_file 00:36:48.939 ************************************ 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:36:48.939 ************************************ 00:36:48.939 START TEST dd_copy_to_out_bdev 00:36:48.939 ************************************ 00:36:48.939 00:20:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:36:48.939 { 00:36:48.939 "subsystems": [ 00:36:48.939 { 00:36:48.939 "subsystem": "bdev", 00:36:48.939 "config": [ 00:36:48.939 { 00:36:48.939 "params": { 00:36:48.939 "block_size": 4096, 00:36:48.939 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:36:48.939 "name": "aio1" 00:36:48.939 }, 00:36:48.939 "method": "bdev_aio_create" 00:36:48.939 }, 00:36:48.939 { 00:36:48.939 "params": { 00:36:48.939 "trtype": "pcie", 00:36:48.939 "traddr": "0000:00:10.0", 00:36:48.939 "name": "Nvme0" 00:36:48.939 }, 00:36:48.939 "method": "bdev_nvme_attach_controller" 00:36:48.939 }, 00:36:48.939 { 00:36:48.939 "method": "bdev_wait_for_examine" 00:36:48.939 } 00:36:48.939 ] 00:36:48.939 } 00:36:48.939 ] 00:36:48.939 } 00:36:48.939 [2024-07-25 00:20:44.754567] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:48.939 [2024-07-25 00:20:44.754756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117420 ] 00:36:49.198 [2024-07-25 00:20:44.927190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.457 [2024-07-25 00:20:45.080288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.974  Copying: 38/64 [MB] (38 MBps) Copying: 64/64 [MB] (average 38 MBps) 00:36:51.974 00:36:52.233 00:36:52.233 real 0m3.163s 00:36:52.233 user 0m2.808s 00:36:52.233 sys 0m0.253s 00:36:52.233 ************************************ 00:36:52.233 END TEST dd_copy_to_out_bdev 00:36:52.233 ************************************ 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:36:52.233 ************************************ 00:36:52.233 START TEST dd_offset_magic 00:36:52.233 ************************************ 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:36:52.233 00:20:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:36:52.233 { 00:36:52.233 "subsystems": [ 00:36:52.233 { 00:36:52.233 "subsystem": "bdev", 00:36:52.233 "config": [ 00:36:52.233 { 00:36:52.233 "params": { 00:36:52.233 "block_size": 4096, 00:36:52.233 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:36:52.233 "name": "aio1" 00:36:52.233 }, 00:36:52.233 "method": "bdev_aio_create" 00:36:52.233 }, 00:36:52.233 { 00:36:52.233 "params": { 00:36:52.233 "trtype": "pcie", 00:36:52.233 "traddr": "0000:00:10.0", 00:36:52.233 "name": "Nvme0" 00:36:52.233 }, 00:36:52.233 "method": "bdev_nvme_attach_controller" 00:36:52.233 }, 00:36:52.233 { 00:36:52.233 "method": "bdev_wait_for_examine" 00:36:52.233 } 
00:36:52.233 ] 00:36:52.233 } 00:36:52.233 ] 00:36:52.233 } 00:36:52.233 [2024-07-25 00:20:47.974638] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:36:52.233 [2024-07-25 00:20:47.974847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117474 ] 00:36:52.492 [2024-07-25 00:20:48.146349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.492 [2024-07-25 00:20:48.295379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.364  Copying: 65/65 [MB] (average 144 MBps) 00:36:54.364 00:36:54.364 00:20:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:36:54.364 00:20:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:36:54.364 00:20:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:36:54.364 00:20:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:36:54.364 { 00:36:54.364 "subsystems": [ 00:36:54.364 { 00:36:54.364 "subsystem": "bdev", 00:36:54.364 "config": [ 00:36:54.364 { 00:36:54.364 "params": { 00:36:54.364 "block_size": 4096, 00:36:54.364 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:36:54.364 "name": "aio1" 00:36:54.364 }, 00:36:54.364 "method": "bdev_aio_create" 00:36:54.364 }, 00:36:54.364 { 00:36:54.364 "params": { 00:36:54.364 "trtype": "pcie", 00:36:54.364 "traddr": "0000:00:10.0", 00:36:54.364 "name": "Nvme0" 00:36:54.364 }, 00:36:54.364 "method": "bdev_nvme_attach_controller" 00:36:54.364 }, 00:36:54.364 { 00:36:54.364 "method": "bdev_wait_for_examine" 00:36:54.364 } 00:36:54.364 ] 00:36:54.364 } 00:36:54.364 ] 00:36:54.364 } 00:36:54.364 [2024-07-25 00:20:50.070618] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:54.364 [2024-07-25 00:20:50.070787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117502 ] 00:36:54.623 [2024-07-25 00:20:50.242942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.623 [2024-07-25 00:20:50.400015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.756  Copying: 1024/1024 [kB] (average 1000 MBps) 00:36:55.756 00:36:56.015 00:20:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:36:56.015 00:20:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:36:56.015 00:20:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:36:56.015 00:20:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:36:56.015 00:20:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:36:56.015 00:20:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:36:56.015 00:20:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:36:56.015 { 00:36:56.015 "subsystems": [ 00:36:56.015 { 00:36:56.015 "subsystem": "bdev", 00:36:56.015 "config": [ 00:36:56.015 { 00:36:56.015 "params": { 00:36:56.015 "block_size": 4096, 00:36:56.015 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:36:56.015 "name": "aio1" 00:36:56.015 }, 00:36:56.015 "method": "bdev_aio_create" 00:36:56.015 }, 00:36:56.015 { 00:36:56.015 "params": { 00:36:56.015 "trtype": "pcie", 00:36:56.015 "traddr": "0000:00:10.0", 00:36:56.015 "name": "Nvme0" 00:36:56.015 }, 00:36:56.015 "method": "bdev_nvme_attach_controller" 00:36:56.015 }, 00:36:56.015 { 00:36:56.015 "method": "bdev_wait_for_examine" 00:36:56.015 } 00:36:56.015 ] 00:36:56.015 } 00:36:56.015 ] 00:36:56.015 } 00:36:56.015 [2024-07-25 00:20:51.681674] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:56.015 [2024-07-25 00:20:51.681851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117529 ] 00:36:56.015 [2024-07-25 00:20:51.834483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.273 [2024-07-25 00:20:51.982321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.468  Copying: 65/65 [MB] (average 1300 MBps) 00:36:57.468 00:36:57.468 00:20:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:36:57.468 00:20:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:36:57.468 00:20:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:36:57.468 00:20:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:36:57.468 { 00:36:57.468 "subsystems": [ 00:36:57.468 { 00:36:57.468 "subsystem": "bdev", 00:36:57.468 "config": [ 00:36:57.468 { 00:36:57.468 "params": { 00:36:57.468 "block_size": 4096, 00:36:57.468 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:36:57.468 "name": "aio1" 00:36:57.468 }, 00:36:57.468 "method": "bdev_aio_create" 00:36:57.468 }, 00:36:57.468 { 00:36:57.468 "params": { 00:36:57.468 "trtype": "pcie", 00:36:57.468 "traddr": "0000:00:10.0", 00:36:57.468 "name": "Nvme0" 00:36:57.468 }, 00:36:57.468 "method": "bdev_nvme_attach_controller" 00:36:57.468 }, 00:36:57.468 { 00:36:57.468 "method": "bdev_wait_for_examine" 00:36:57.468 } 00:36:57.468 ] 00:36:57.468 } 00:36:57.468 ] 00:36:57.468 } 00:36:57.468 [2024-07-25 00:20:53.328463] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:57.468 [2024-07-25 00:20:53.328634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117550 ] 00:36:57.726 [2024-07-25 00:20:53.498628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.985 [2024-07-25 00:20:53.666625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:59.207  Copying: 1024/1024 [kB] (average 1000 MBps) 00:36:59.207 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:36:59.207 ************************************ 00:36:59.207 END TEST dd_offset_magic 00:36:59.207 ************************************ 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:36:59.207 00:36:59.207 real 0m6.942s 00:36:59.207 user 0m5.192s 00:36:59.207 sys 0m0.946s 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:36:59.207 00:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:36:59.207 { 00:36:59.207 "subsystems": [ 00:36:59.207 { 00:36:59.207 "subsystem": "bdev", 00:36:59.207 "config": [ 00:36:59.207 { 00:36:59.207 "params": { 00:36:59.207 "block_size": 4096, 00:36:59.207 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:36:59.207 "name": "aio1" 00:36:59.207 }, 00:36:59.207 "method": "bdev_aio_create" 00:36:59.207 }, 00:36:59.207 { 00:36:59.207 "params": { 00:36:59.207 "trtype": "pcie", 00:36:59.207 "traddr": "0000:00:10.0", 00:36:59.207 "name": "Nvme0" 00:36:59.207 }, 00:36:59.207 "method": "bdev_nvme_attach_controller" 00:36:59.207 }, 00:36:59.207 { 00:36:59.207 "method": "bdev_wait_for_examine" 00:36:59.207 } 00:36:59.207 ] 00:36:59.207 } 00:36:59.207 ] 00:36:59.207 } 00:36:59.207 [2024-07-25 00:20:54.957785] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:36:59.207 [2024-07-25 00:20:54.958583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117592 ] 00:36:59.466 [2024-07-25 00:20:55.130249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.466 [2024-07-25 00:20:55.283009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.968  Copying: 5120/5120 [kB] (average 1250 MBps) 00:37:00.968 00:37:00.968 00:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:37:00.968 00:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:37:00.968 00:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:37:00.968 00:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:37:00.968 00:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:37:00.968 00:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:37:00.968 00:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:37:00.968 00:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:37:00.968 00:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:37:00.968 00:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:00.968 { 00:37:00.968 "subsystems": [ 00:37:00.968 { 00:37:00.968 "subsystem": "bdev", 00:37:00.968 "config": [ 00:37:00.968 { 00:37:00.968 "params": { 00:37:00.968 "block_size": 4096, 00:37:00.968 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:37:00.968 "name": "aio1" 00:37:00.968 }, 00:37:00.968 "method": "bdev_aio_create" 00:37:00.968 }, 00:37:00.968 { 00:37:00.968 "params": { 00:37:00.968 "trtype": "pcie", 00:37:00.968 "traddr": "0000:00:10.0", 00:37:00.968 "name": "Nvme0" 00:37:00.969 }, 00:37:00.969 "method": "bdev_nvme_attach_controller" 00:37:00.969 }, 00:37:00.969 { 00:37:00.969 "method": "bdev_wait_for_examine" 00:37:00.969 } 00:37:00.969 ] 00:37:00.969 } 00:37:00.969 ] 00:37:00.969 } 00:37:00.969 [2024-07-25 00:20:56.582864] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:37:00.969 [2024-07-25 00:20:56.583247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117617 ] 00:37:00.969 [2024-07-25 00:20:56.755618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.227 [2024-07-25 00:20:56.910773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:02.421  Copying: 5120/5120 [kB] (average 1666 MBps) 00:37:02.421 00:37:02.421 00:20:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:37:02.421 ************************************ 00:37:02.421 END TEST spdk_dd_bdev_to_bdev 00:37:02.421 ************************************ 00:37:02.421 00:37:02.421 real 0m16.797s 00:37:02.421 user 0m13.079s 00:37:02.421 sys 0m2.356s 00:37:02.421 00:20:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:02.421 00:20:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:02.421 00:20:58 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:37:02.421 00:20:58 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:37:02.421 00:20:58 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:02.421 00:20:58 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:02.421 00:20:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:37:02.421 ************************************ 00:37:02.421 START TEST spdk_dd_sparse 00:37:02.421 ************************************ 00:37:02.421 00:20:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:37:02.421 * Looking for test storage... 
00:37:02.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:20:58 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:37:02.421 1+0 records in 00:37:02.421 1+0 records out 00:37:02.421 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00466592 s, 899 MB/s 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:37:02.681 1+0 records in 00:37:02.681 1+0 records out 00:37:02.681 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00524288 s, 800 MB/s 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:37:02.681 1+0 records in 00:37:02.681 1+0 records out 00:37:02.681 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00732678 s, 572 MB/s 00:20:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:20:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:37:02.681 ************************************ 00:37:02.681 START TEST dd_sparse_file_to_file 00:37:02.681 ************************************ 00:37:02.681 00:20:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:20:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:20:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:20:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk'
['name']='dd_aio' ['block_size']='4096') 00:37:02.681 00:20:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:37:02.681 00:20:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:37:02.681 00:20:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:37:02.681 00:20:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:37:02.681 00:20:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:37:02.681 00:20:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:37:02.681 00:20:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:37:02.681 { 00:37:02.681 "subsystems": [ 00:37:02.681 { 00:37:02.681 "subsystem": "bdev", 00:37:02.681 "config": [ 00:37:02.681 { 00:37:02.681 "params": { 00:37:02.681 "block_size": 4096, 00:37:02.681 "filename": "dd_sparse_aio_disk", 00:37:02.681 "name": "dd_aio" 00:37:02.681 }, 00:37:02.681 "method": "bdev_aio_create" 00:37:02.681 }, 00:37:02.681 { 00:37:02.681 "params": { 00:37:02.681 "lvs_name": "dd_lvstore", 00:37:02.681 "bdev_name": "dd_aio" 00:37:02.681 }, 00:37:02.681 "method": "bdev_lvol_create_lvstore" 00:37:02.681 }, 00:37:02.681 { 00:37:02.681 "method": "bdev_wait_for_examine" 00:37:02.681 } 00:37:02.681 ] 00:37:02.681 } 00:37:02.681 ] 00:37:02.681 } 00:37:02.681 [2024-07-25 00:20:58.360321] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
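The JSON printed above is the bdev configuration that gen_conf assembled for this copy: an aio bdev (dd_aio, 4096-byte blocks) backed by the plain file dd_sparse_aio_disk, an lvstore (dd_lvstore) created on top of it, and bdev_wait_for_examine so the copy cannot start before examination of dd_aio finishes. The @41 trace shows it reaching spdk_dd as --json /dev/fd/62, i.e. over an inherited file descriptor rather than a config file on disk. A hand-rolled sketch of the same invocation using bash process substitution (illustrative only, not the harness code, which pipes the gen_conf output):

conf='{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_aio_create","params":{"filename":"dd_sparse_aio_disk","name":"dd_aio","block_size":4096}},
  {"method":"bdev_lvol_create_lvstore","params":{"bdev_name":"dd_aio","lvs_name":"dd_lvstore"}},
  {"method":"bdev_wait_for_examine"}]}]}'
# <(...) expands to a /dev/fd/NN path, matching the --json /dev/fd/62 seen above
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 \
    --bs=12582912 --sparse --json <(printf '%s' "$conf")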
00:37:02.681 [2024-07-25 00:20:58.360506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117693 ] 00:37:02.681 [2024-07-25 00:20:58.513813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.940 [2024-07-25 00:20:58.663396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:04.135  Copying: 12/36 [MB] (average 1500 MBps) 00:37:04.135 00:37:04.135 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:37:04.135 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:37:04.136 ************************************ 00:37:04.136 END TEST dd_sparse_file_to_file 00:37:04.136 ************************************ 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:37:04.136 00:37:04.136 real 0m1.640s 00:37:04.136 user 0m1.280s 00:37:04.136 sys 0m0.246s 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:04.136 00:20:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:37:04.395 ************************************ 00:37:04.395 START TEST dd_sparse_file_to_bdev 00:37:04.395 ************************************ 00:37:04.395 00:21:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:37:04.395 00:21:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:37:04.395 00:21:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:37:04.395 00:21:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:37:04.395 00:21:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 
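The pass condition for dd_sparse_file_to_file is carried entirely by the stat lines above: stat --printf=%s (apparent size) and stat --printf=%b (allocated 512-byte blocks) must each match between file_zero1 and file_zero2. Both files report 37748736 bytes apparent but only 24576 blocks, i.e. 12 MiB actually allocated, so the two holes that prepare's seeked dd writes left in file_zero1 survived the --sparse copy. A condensed, hand-written sketch of the fixture and the check (same mechanism as sparse.sh, not its literal source):

# backing file for the dd_aio bdev, as in prepare above
truncate dd_sparse_aio_disk --size 104857600
# three 4 MiB writes at 0, 16 and 32 MiB; dd with seek= and no conv=notrunc
# leaves holes in between: 36 MiB apparent, 12 MiB allocated
for seek in 0 4 8; do
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=$seek
done
# after the sparse copy, source and destination must agree on both numbers
for fmt in %s %b; do
    [[ $(stat --printf="$fmt" file_zero1) -eq $(stat --printf="$fmt" file_zero2) ]] || exit 1
done
echo 'sparseness preserved'   # here: 37748736 bytes apparent, 24576 blocks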
00:37:04.395 00:21:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:37:04.395 00:21:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:37:04.395 00:21:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:37:04.395 00:21:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:04.395 { 00:37:04.395 "subsystems": [ 00:37:04.395 { 00:37:04.395 "subsystem": "bdev", 00:37:04.395 "config": [ 00:37:04.395 { 00:37:04.395 "params": { 00:37:04.395 "block_size": 4096, 00:37:04.395 "filename": "dd_sparse_aio_disk", 00:37:04.395 "name": "dd_aio" 00:37:04.395 }, 00:37:04.395 "method": "bdev_aio_create" 00:37:04.395 }, 00:37:04.395 { 00:37:04.395 "params": { 00:37:04.395 "lvs_name": "dd_lvstore", 00:37:04.395 "lvol_name": "dd_lvol", 00:37:04.395 "size_in_mib": 36, 00:37:04.395 "thin_provision": true 00:37:04.395 }, 00:37:04.395 "method": "bdev_lvol_create" 00:37:04.395 }, 00:37:04.395 { 00:37:04.395 "method": "bdev_wait_for_examine" 00:37:04.395 } 00:37:04.395 ] 00:37:04.395 } 00:37:04.395 ] 00:37:04.395 } 00:37:04.395 [2024-07-25 00:21:00.068717] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:04.395 [2024-07-25 00:21:00.069075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117745 ] 00:37:04.395 [2024-07-25 00:21:00.239093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.654 [2024-07-25 00:21:00.399919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.849  Copying: 12/36 [MB] (average 545 MBps) 00:37:05.849 00:37:05.849 00:37:05.849 real 0m1.687s 00:37:05.849 user 0m1.382s 00:37:05.849 sys 0m0.204s 00:37:05.849 00:21:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:05.849 ************************************ 00:37:05.849 END TEST dd_sparse_file_to_bdev 00:37:05.849 ************************************ 00:37:05.849 00:21:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:37:06.107 ************************************ 00:37:06.107 START TEST dd_sparse_bdev_to_file 00:37:06.107 ************************************ 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 
['name']='dd_aio' ['block_size']='4096')
00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:37:06.107 00:21:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:37:06.107 { 00:37:06.107 "subsystems": [ 00:37:06.107 { 00:37:06.107 "subsystem": "bdev", 00:37:06.107 "config": [ 00:37:06.107 { 00:37:06.107 "params": { 00:37:06.107 "block_size": 4096, 00:37:06.107 "filename": "dd_sparse_aio_disk", 00:37:06.107 "name": "dd_aio" 00:37:06.107 }, 00:37:06.107 "method": "bdev_aio_create" 00:37:06.107 }, 00:37:06.107 { 00:37:06.107 "method": "bdev_wait_for_examine" 00:37:06.107 } 00:37:06.107 ] 00:37:06.107 } 00:37:06.107 ] 00:37:06.107 } 00:37:06.107 [2024-07-25 00:21:01.797388] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:06.107 [2024-07-25 00:21:01.797508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117784 ] 00:37:06.107 [2024-07-25 00:21:01.949038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.366 [2024-07-25 00:21:02.119304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.562  Copying: 12/36 [MB] (average 1333 MBps) 00:37:07.562 00:37:07.562 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:37:07.562 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:37:07.819 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:37:07.820 00:37:07.820 real 0m1.704s 00:37:07.820 user 0m1.358s 00:37:07.820 sys 0m0.242s 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:37:07.820 ************************************ 00:37:07.820 END TEST dd_sparse_bdev_to_file 00:37:07.820 ************************************ 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # 
cleanup 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:37:07.820 ************************************ 00:37:07.820 END TEST spdk_dd_sparse 00:37:07.820 ************************************ 00:37:07.820 00:37:07.820 real 0m5.325s 00:37:07.820 user 0m4.115s 00:37:07.820 sys 0m0.878s 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:07.820 00:21:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:37:07.820 00:21:03 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:37:07.820 00:21:03 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:07.820 00:21:03 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:07.820 00:21:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:37:07.820 ************************************ 00:37:07.820 START TEST spdk_dd_negative 00:37:07.820 ************************************ 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:37:07.820 * Looking for test storage... 00:37:07.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:21:03 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:03 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # export PATH 00:21:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:21:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:21:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:21:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:21:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:21:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:07.820 ************************************ 00:37:07.820 START TEST dd_invalid_arguments 00:37:07.820 ************************************ 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- #
invalid_arguments 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:07.820 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:37:08.078 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:37:08.078 00:37:08.078 CPU options: 00:37:08.078 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:37:08.078 (like [0,1,10]) 00:37:08.078 --lcores lcore to CPU mapping list. The list is in the format: 00:37:08.078 [<,lcores[@CPUs]>...] 00:37:08.078 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:37:08.078 Within the group, '-' is used for range separator, 00:37:08.078 ',' is used for single number separator. 00:37:08.078 '( )' can be omitted for single element group, 00:37:08.078 '@' can be omitted if cpus and lcores have the same value 00:37:08.078 --disable-cpumask-locks Disable CPU core lock files. 00:37:08.078 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:37:08.078 pollers in the app support interrupt mode) 00:37:08.078 -p, --main-core main (primary) core for DPDK 00:37:08.078 00:37:08.078 Configuration options: 00:37:08.078 -c, --config, --json JSON config file 00:37:08.078 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:37:08.079 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:37:08.079 --wait-for-rpc wait for RPCs to initialize subsystems 00:37:08.079 --rpcs-allowed comma-separated list of permitted RPCS 00:37:08.079 --json-ignore-init-errors don't exit on invalid config entry 00:37:08.079 00:37:08.079 Memory options: 00:37:08.079 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:37:08.079 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:37:08.079 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:37:08.079 -R, --huge-unlink unlink huge files after initialization 00:37:08.079 -n, --mem-channels number of memory channels used for DPDK 00:37:08.079 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:37:08.079 --msg-mempool-size global message memory pool size in count (default: 262143) 00:37:08.079 --no-huge run without using hugepages 00:37:08.079 -i, --shm-id shared memory ID (optional) 00:37:08.079 -g, --single-file-segments force creating just one hugetlbfs file 00:37:08.079 00:37:08.079 PCI options: 00:37:08.079 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:37:08.079 -B, --pci-blocked pci addr to block (can be used more than once) 00:37:08.079 -u, --no-pci disable PCI access 00:37:08.079 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:37:08.079 00:37:08.079 Log options: 00:37:08.079 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:37:08.079 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:37:08.079 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:37:08.079 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:37:08.079 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:37:08.079 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:37:08.079 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:37:08.079 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:37:08.079 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:37:08.079 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:37:08.079 virtio_vfio_user, vmd) 00:37:08.079 --silence-noticelog disable notice level logging to stderr 00:37:08.079 00:37:08.079 Trace options: 00:37:08.079 --num-trace-entries number of trace entries for each core, must be power of 2, 00:37:08.079 setting 0 to disable trace (default 32768) 00:37:08.079 Tracepoints vary in size and can use more than one trace entry. 00:37:08.079 -e, --tpoint-group [:] 00:37:08.079 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:37:08.079 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:37:08.079 [2024-07-25 00:21:03.720753] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:37:08.079 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:37:08.079 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:37:08.079 a tracepoint group. First tpoint inside a group can be enabled by 00:37:08.079 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:37:08.079 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:37:08.079 in /include/spdk_internal/trace_defs.h 00:37:08.079 00:37:08.079 Other options: 00:37:08.079 -h, --help show this usage 00:37:08.079 -v, --version print SPDK version 00:37:08.079 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:37:08.079 --env-context Opaque context for use of the env implementation 00:37:08.079 00:37:08.079 Application specific: 00:37:08.079 [--------- DD Options ---------] 00:37:08.079 --if Input file. Must specify either --if or --ib. 00:37:08.079 --ib Input bdev. Must specify either --if or --ib. 00:37:08.079 --of Output file. Must specify either --of or --ob. 00:37:08.079 --ob Output bdev. Must specify either --of or --ob. 00:37:08.079 --iflag Input file flags. 00:37:08.079 --oflag Output file flags. 00:37:08.079 --bs I/O unit size (default: 4096) 00:37:08.079 --qd Queue depth (default: 2) 00:37:08.079 --count I/O unit count. The number of I/O units to copy. (default: all) 00:37:08.079 --skip Skip this many I/O units at start of input. (default: 0) 00:37:08.079 --seek Skip this many I/O units at start of output. (default: 0) 00:37:08.079 --aio Force usage of AIO. (by default io_uring is used if available) 00:37:08.079 --sparse Enable hole skipping in input target 00:37:08.079 Available iflag and oflag values: 00:37:08.079 append - append mode 00:37:08.079 direct - use direct I/O for data 00:37:08.079 directory - fail unless a directory 00:37:08.079 dsync - use synchronized I/O for data 00:37:08.079 noatime - do not update access time 00:37:08.079 noctty - do not assign controlling terminal from file 00:37:08.079 nofollow - do not follow symlinks 00:37:08.079 nonblock - use non-blocking I/O 00:37:08.079 sync - use synchronized I/O for data and metadata 00:37:08.079 ************************************ 00:37:08.079 END TEST dd_invalid_arguments 00:37:08.079 ************************************ 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:08.079 00:37:08.079 real 0m0.118s 00:37:08.079 user 0m0.061s 00:37:08.079 sys 0m0.058s 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:08.079 ************************************ 00:37:08.079 START TEST dd_double_input 00:37:08.079 ************************************ 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:37:08.079 [2024-07-25 00:21:03.887014] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
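Every case in this suite follows the pattern visible in the two runs above: the deliberately bad spdk_dd invocation is wrapped in the NOT helper from common/autotest_common.sh, which inverts the exit status, and the captured es= values line up with the errors (es=2 after the unrecognized --ii=, es=22, i.e. EINVAL, after --if combined with --ib). A simplified sketch of the helper's core idea (the real one, as the (( es > 128 )) traces show, also special-cases signal exits):

NOT() {
    # run the wrapped command; the negative test passes only if it fails
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
# mirrors the runs above: both invocations fail inside spdk_dd, so NOT returns 0
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=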
00:37:08.079 ************************************ 00:37:08.079 END TEST dd_double_input 00:37:08.079 ************************************ 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:08.079 00:37:08.079 real 0m0.119s 00:37:08.079 user 0m0.073s 00:37:08.079 sys 0m0.046s 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:08.079 00:21:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:08.338 ************************************ 00:37:08.338 START TEST dd_double_output 00:37:08.338 ************************************ 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:08.338 00:21:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:37:08.338 [2024-07-25 00:21:04.052750] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:08.338 00:37:08.338 real 0m0.115s 00:37:08.338 user 0m0.067s 00:37:08.338 sys 0m0.049s 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:37:08.338 ************************************ 00:37:08.338 END TEST dd_double_output 00:37:08.338 ************************************ 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:08.338 ************************************ 00:37:08.338 START TEST dd_no_input 00:37:08.338 ************************************ 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:08.338 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:37:08.596 [2024-07-25 00:21:04.228718] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:08.596 00:37:08.596 real 0m0.115s 00:37:08.596 user 0m0.068s 00:37:08.596 sys 0m0.048s 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:37:08.596 ************************************ 00:37:08.596 END TEST dd_no_input 00:37:08.596 ************************************ 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:08.596 ************************************ 00:37:08.596 START TEST dd_no_output 00:37:08.596 ************************************ 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:08.596 [2024-07-25 00:21:04.377178] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:37:08.596 00:21:04 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:08.596 00:37:08.596 real 0m0.092s 00:37:08.596 user 0m0.054s 00:37:08.596 sys 0m0.038s 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:37:08.596 ************************************ 00:37:08.596 END TEST dd_no_output 00:37:08.596 ************************************ 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:08.596 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:08.855 ************************************ 00:37:08.855 START TEST dd_wrong_blocksize 00:37:08.855 ************************************ 00:37:08.855 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:37:08.855 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:37:08.855 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:37:08.855 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:37:08.855 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.855 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.855 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.855 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:37:08.856 [2024-07-25 00:21:04.531921] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:08.856 00:37:08.856 real 0m0.115s 00:37:08.856 user 0m0.064s 00:37:08.856 sys 0m0.052s 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:37:08.856 ************************************ 00:37:08.856 END TEST dd_wrong_blocksize 00:37:08.856 ************************************ 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:08.856 ************************************ 00:37:08.856 START TEST dd_smaller_blocksize 00:37:08.856 ************************************ 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:08.856 
00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:08.856 00:21:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:37:08.856 [2024-07-25 00:21:04.713282] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:08.856 [2024-07-25 00:21:04.713461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118018 ] 00:37:09.115 [2024-07-25 00:21:04.886716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.374 [2024-07-25 00:21:05.123749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:09.942 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:37:09.942 [2024-07-25 00:21:05.568045] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:37:09.942 [2024-07-25 00:21:05.568117] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:10.510 [2024-07-25 00:21:06.120419] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:37:10.769 00:21:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:37:10.769 00:21:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:10.769 00:21:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:37:10.769 00:21:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:10.770 00:37:10.770 real 0m1.817s 00:37:10.770 user 0m1.350s 00:37:10.770 sys 0m0.366s 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:10.770 ************************************ 00:37:10.770 END TEST dd_smaller_blocksize 00:37:10.770 ************************************ 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:10.770 ************************************ 00:37:10.770 START TEST dd_invalid_count 00:37:10.770 ************************************ 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:37:10.770 [2024-07-25 00:21:06.574024] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:10.770 00:37:10.770 real 0m0.116s 00:37:10.770 user 0m0.068s 00:37:10.770 sys 0m0.049s 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:10.770 ************************************ 00:37:10.770 END TEST dd_invalid_count 00:37:10.770 ************************************ 00:37:10.770 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:11.029 ************************************ 00:37:11.029 START TEST dd_invalid_oflag 00:37:11.029 ************************************ 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # 
invalid_oflag 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:37:11.029 [2024-07-25 00:21:06.745394] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:11.029 00:37:11.029 real 0m0.118s 00:37:11.029 user 0m0.071s 00:37:11.029 sys 0m0.048s 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:11.029 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:37:11.029 ************************************ 00:37:11.029 END TEST dd_invalid_oflag 00:37:11.029 ************************************ 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:11.030 ************************************ 00:37:11.030 START TEST dd_invalid_iflag 00:37:11.030 ************************************ 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:37:11.030 00:21:06 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:11.030 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:37:11.289 [2024-07-25 00:21:06.920699] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:37:11.289 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:37:11.289 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:11.289 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:11.289 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:11.289 00:37:11.289 real 0m0.119s 00:37:11.289 user 0m0.065s 00:37:11.289 sys 0m0.054s 00:37:11.289 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:11.289 ************************************ 00:37:11.289 END TEST dd_invalid_iflag 00:37:11.289 ************************************ 00:37:11.289 00:21:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:11.289 ************************************ 00:37:11.289 START TEST dd_unknown_flag 00:37:11.289 ************************************ 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- 
dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:11.289 00:21:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:37:11.289 [2024-07-25 00:21:07.095453] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:37:11.289 [2024-07-25 00:21:07.095623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118131 ] 00:37:11.548 [2024-07-25 00:21:07.266162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.548 [2024-07-25 00:21:07.415610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.807  Copying: 0/0 [B] (average 0 Bps)[2024-07-25 00:21:07.626552] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:37:11.807 [2024-07-25 00:21:07.626612] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:11.807 [2024-07-25 00:21:07.626760] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:37:12.375 [2024-07-25 00:21:08.181441] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:37:12.961 00:37:12.961 00:37:12.961 00:21:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:37:12.961 00:21:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:12.961 00:21:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:37:12.961 00:21:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:37:12.961 00:21:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:12.962 00:37:12.962 real 0m1.524s 00:37:12.962 user 0m1.219s 00:37:12.962 sys 0m0.190s 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:37:12.962 ************************************ 00:37:12.962 END TEST dd_unknown_flag 00:37:12.962 ************************************ 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:12.962 ************************************ 00:37:12.962 START TEST dd_invalid_json 00:37:12.962 ************************************ 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:37:12.962 
00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:12.962 00:21:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:37:12.962 [2024-07-25 00:21:08.656416] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:12.962 [2024-07-25 00:21:08.656541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118165 ] 00:37:12.962 [2024-07-25 00:21:08.811267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:13.227 [2024-07-25 00:21:08.970339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:13.227 [2024-07-25 00:21:08.970424] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:37:13.227 [2024-07-25 00:21:08.970447] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:13.227 [2024-07-25 00:21:08.970463] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:13.227 [2024-07-25 00:21:08.970517] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:37:13.487 00:21:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:37:13.487 00:21:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:13.487 00:21:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:37:13.487 00:21:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:37:13.487 00:21:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:37:13.487 00:21:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:13.487 00:37:13.487 real 0m0.701s 00:37:13.487 user 0m0.485s 00:37:13.487 sys 0m0.117s 00:37:13.487 00:21:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:13.487 00:21:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:37:13.487 ************************************ 
00:37:13.487 END TEST dd_invalid_json 00:37:13.487 ************************************ 00:37:13.487 00:37:13.487 real 0m5.784s 00:37:13.487 user 0m3.878s 00:37:13.487 sys 0m1.553s 00:37:13.487 00:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:13.487 00:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:13.487 ************************************ 00:37:13.487 END TEST spdk_dd_negative 00:37:13.487 ************************************ 00:37:13.746 00:37:13.746 real 2m10.945s 00:37:13.746 user 1m42.694s 00:37:13.746 sys 0m18.329s 00:37:13.746 00:21:09 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:13.746 00:21:09 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:37:13.746 ************************************ 00:37:13.746 END TEST spdk_dd 00:37:13.746 ************************************ 00:37:13.746 00:21:09 -- spdk/autotest.sh@215 -- # '[' 1 -eq 1 ']' 00:37:13.746 00:21:09 -- spdk/autotest.sh@216 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:37:13.746 00:21:09 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:13.746 00:21:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:13.747 00:21:09 -- common/autotest_common.sh@10 -- # set +x 00:37:13.747 ************************************ 00:37:13.747 START TEST blockdev_nvme 00:37:13.747 ************************************ 00:37:13.747 00:21:09 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:37:13.747 * Looking for test storage... 00:37:13.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:37:13.747 00:21:09 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@686 -- 
# '[' -n '' ']' 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=118251 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 118251 00:37:13.747 00:21:09 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 118251 ']' 00:37:13.747 00:21:09 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:37:13.747 00:21:09 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:13.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:13.747 00:21:09 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:13.747 00:21:09 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:13.747 00:21:09 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:13.747 00:21:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:13.747 [2024-07-25 00:21:09.614696] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:13.747 [2024-07-25 00:21:09.614903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118251 ] 00:37:14.006 [2024-07-25 00:21:09.786823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.264 [2024-07-25 00:21:09.939043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:37:14.832 00:21:10 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:37:14.832 00:21:10 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:37:14.832 00:21:10 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:37:14.832 00:21:10 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:37:14.832 00:21:10 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:37:14.832 00:21:10 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.832 00:21:10 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.832 00:21:10 
blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:37:14.832 00:21:10 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.832 00:21:10 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.832 00:21:10 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.832 00:21:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.092 00:21:10 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:37:15.092 00:21:10 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:37:15.092 00:21:10 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.092 00:21:10 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:37:15.092 00:21:10 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0ea0278b-9858-47c6-8090-e472239e49cd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0ea0278b-9858-47c6-8090-e472239e49cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:37:15.092 00:21:10 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:37:15.092 00:21:10 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:37:15.092 00:21:10 blockdev_nvme -- 
bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:37:15.092 00:21:10 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:37:15.092 00:21:10 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 118251 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 118251 ']' 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 118251 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118251 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:15.092 killing process with pid 118251 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118251' 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 118251 00:37:15.092 00:21:10 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 118251 00:37:16.994 00:21:12 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:16.994 00:21:12 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:37:16.994 00:21:12 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:37:16.994 00:21:12 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:16.994 00:21:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:16.994 ************************************ 00:37:16.994 START TEST bdev_hello_world 00:37:16.994 ************************************ 00:37:16.994 00:21:12 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:37:16.994 [2024-07-25 00:21:12.562828] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:37:16.994 [2024-07-25 00:21:12.563000] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118321 ] 00:37:16.994 [2024-07-25 00:21:12.734243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.251 [2024-07-25 00:21:12.882565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.509 [2024-07-25 00:21:13.229881] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:37:17.509 [2024-07-25 00:21:13.229949] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:37:17.509 [2024-07-25 00:21:13.229988] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:37:17.509 [2024-07-25 00:21:13.232422] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:37:17.509 [2024-07-25 00:21:13.232889] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:37:17.509 [2024-07-25 00:21:13.232925] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:37:17.509 [2024-07-25 00:21:13.233201] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:37:17.509 00:37:17.509 [2024-07-25 00:21:13.233237] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:37:18.442 00:37:18.442 real 0m1.651s 00:37:18.442 user 0m1.359s 00:37:18.442 sys 0m0.192s 00:37:18.442 00:21:14 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:18.442 ************************************ 00:37:18.442 END TEST bdev_hello_world 00:37:18.442 ************************************ 00:37:18.442 00:21:14 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:37:18.442 00:21:14 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:37:18.442 00:21:14 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:18.442 00:21:14 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:18.442 00:21:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:18.442 ************************************ 00:37:18.442 START TEST bdev_bounds 00:37:18.442 ************************************ 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=118359 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:37:18.442 Process bdevio pid: 118359 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 118359' 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 118359 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 118359 ']' 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:18.442 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:18.442 00:21:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:37:18.442 [2024-07-25 00:21:14.264721] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:18.442 [2024-07-25 00:21:14.264936] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118359 ] 00:37:18.700 [2024-07-25 00:21:14.438998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:18.959 [2024-07-25 00:21:14.588038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.959 [2024-07-25 00:21:14.588157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.959 [2024-07-25 00:21:14.588191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:19.525 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:19.525 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:37:19.525 00:21:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:37:19.525 I/O targets: 00:37:19.525 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:37:19.525 00:37:19.525 00:37:19.525 CUnit - A unit testing framework for C - Version 2.1-3 00:37:19.525 http://cunit.sourceforge.net/ 00:37:19.525 00:37:19.525 00:37:19.525 Suite: bdevio tests on: Nvme0n1 00:37:19.525 Test: blockdev write read block ...passed 00:37:19.525 Test: blockdev write zeroes read block ...passed 00:37:19.525 Test: blockdev write zeroes read no split ...passed 00:37:19.525 Test: blockdev write zeroes read split ...passed 00:37:19.525 Test: blockdev write zeroes read split partial ...passed 00:37:19.525 Test: blockdev reset ...[2024-07-25 00:21:15.330878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:37:19.525 [2024-07-25 00:21:15.334657] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:37:19.525 passed 00:37:19.525 Test: blockdev write read 8 blocks ...passed 00:37:19.525 Test: blockdev write read size > 128k ...passed 00:37:19.525 Test: blockdev write read invalid size ...passed 00:37:19.525 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:19.525 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:19.525 Test: blockdev write read max offset ...passed 00:37:19.525 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:19.525 Test: blockdev writev readv 8 blocks ...passed 00:37:19.525 Test: blockdev writev readv 30 x 1block ...passed 00:37:19.525 Test: blockdev writev readv block ...passed 00:37:19.525 Test: blockdev writev readv size > 128k ...passed 00:37:19.525 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:19.525 Test: blockdev comparev and writev ...[2024-07-25 00:21:15.344693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad40d000 len:0x1000 00:37:19.525 [2024-07-25 00:21:15.345096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:37:19.525 passed 00:37:19.525 Test: blockdev nvme passthru rw ...passed 00:37:19.525 Test: blockdev nvme passthru vendor specific ...[2024-07-25 00:21:15.346598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:37:19.525 [2024-07-25 00:21:15.347005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:37:19.525 passed 00:37:19.525 Test: blockdev nvme admin passthru ...passed 00:37:19.525 Test: blockdev copy ...passed 00:37:19.525 00:37:19.525 Run Summary: Type Total Ran Passed Failed Inactive 00:37:19.526 suites 1 1 n/a 0 0 00:37:19.526 tests 23 23 23 0 0 00:37:19.526 asserts 152 152 152 0 n/a 00:37:19.526 00:37:19.526 Elapsed time = 0.197 seconds 00:37:19.526 0 00:37:19.526 00:21:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 118359 00:37:19.526 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 118359 ']' 00:37:19.526 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 118359 00:37:19.526 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:37:19.526 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:19.526 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118359 00:37:19.784 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:19.784 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:19.784 killing process with pid 118359 00:37:19.784 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118359' 00:37:19.784 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 118359 00:37:19.784 00:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 118359 00:37:20.719 00:21:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:37:20.719 00:37:20.719 real 0m2.135s 00:37:20.719 user 0m5.063s 00:37:20.719 sys 0m0.347s 00:37:20.719 00:21:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:20.719 
00:21:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:37:20.719 ************************************ 00:37:20.719 END TEST bdev_bounds 00:37:20.719 ************************************ 00:37:20.719 00:21:16 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:37:20.719 00:21:16 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:37:20.719 00:21:16 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:20.719 00:21:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:20.719 ************************************ 00:37:20.719 START TEST bdev_nbd 00:37:20.719 ************************************ 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1') 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1') 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=118409 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 118409 /var/tmp/spdk-nbd.sock 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 118409 ']' 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:20.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:20.719 00:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:37:20.719 [2024-07-25 00:21:16.445326] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:20.719 [2024-07-25 00:21:16.445481] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:20.977 [2024-07-25 00:21:16.597140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.978 [2024-07-25 00:21:16.749657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:37:21.545 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:21.803 1+0 records in 00:37:21.803 1+0 records out 00:37:21.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555229 s, 7.4 MB/s 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:37:21.803 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:22.062 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:37:22.062 { 00:37:22.062 "nbd_device": "/dev/nbd0", 00:37:22.062 "bdev_name": "Nvme0n1" 00:37:22.062 } 00:37:22.062 ]' 00:37:22.062 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:37:22.062 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:37:22.062 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:37:22.062 { 00:37:22.062 "nbd_device": "/dev/nbd0", 00:37:22.062 "bdev_name": "Nvme0n1" 00:37:22.062 } 00:37:22.062 ]' 00:37:22.062 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:37:22.062 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:22.062 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:22.062 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:22.062 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:22.062 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:22.062 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:22.320 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:22.320 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:22.320 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:22.320 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:22.320 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:22.320 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:22.320 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:22.320 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:22.320 00:21:17 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:22.320 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:22.320 00:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:22.579 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:37:22.580 /dev/nbd0 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 
-- # (( i = 1 )) 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:22.580 1+0 records in 00:37:22.580 1+0 records out 00:37:22.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00262781 s, 1.6 MB/s 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:22.580 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:22.839 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:37:22.839 { 00:37:22.839 "nbd_device": "/dev/nbd0", 00:37:22.839 "bdev_name": "Nvme0n1" 00:37:22.839 } 00:37:22.839 ]' 00:37:22.839 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:22.839 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:37:22.839 { 00:37:22.839 "nbd_device": "/dev/nbd0", 00:37:22.839 "bdev_name": "Nvme0n1" 00:37:22.839 } 00:37:22.839 ]' 00:37:22.839 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:37:23.098 256+0 records in 00:37:23.098 256+0 records out 00:37:23.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00757824 s, 138 MB/s 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:37:23.098 256+0 records in 00:37:23.098 256+0 records out 00:37:23.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0686498 s, 15.3 MB/s 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:23.098 00:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:23.357 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:37:23.617 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:37:23.876 malloc_lvol_verify 00:37:23.876 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:37:24.135 dfd1e676-0e95-4c87-9da5-76b3a0d80187 00:37:24.135 00:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:37:24.394 e24dbb98-ca4e-401e-9314-5683cfa48686 00:37:24.394 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:37:24.394 /dev/nbd0 00:37:24.394 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:37:24.394 mke2fs 1.47.0 (5-Feb-2023) 00:37:24.394 Discarding device blocks: 0/1024 done 00:37:24.394 Creating filesystem with 1024 4k blocks and 1024 inodes 00:37:24.394 00:37:24.394 Filesystem too small for a journal 00:37:24.394 Allocating group tables: 0/1 done 00:37:24.394 Writing inode tables: 0/1 done 00:37:24.394 Writing superblocks and filesystem accounting information: 0/1 done 00:37:24.394 00:37:24.394 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:37:24.394 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
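The write and verify passes traced above are symmetric: one dd seeds a 1 MiB scratch file from /dev/urandom and copies it onto the export with O_DIRECT, and the verify pass byte-compares the device against that file with cmp. A minimal standalone sketch of the same pattern, with paths, sizes, and flags taken from the trace:

    # write/verify pattern from nbd_dd_data_verify above
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 256 x 4 KiB = 1 MiB of random data
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through, bypassing the page cache
    cmp -b -n 1M "$tmp" /dev/nbd0                              # non-zero exit on any byte mismatch
    rm "$tmp"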
00:37:24.394 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:24.394 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:24.394 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:24.394 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:24.394 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:24.394 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 118409 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 118409 ']' 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 118409 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118409 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:24.962 killing process with pid 118409 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118409' 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 118409 00:37:24.962 00:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 118409 00:37:25.900 00:21:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:37:25.900 00:37:25.900 real 0m5.154s 00:37:25.900 user 0m7.458s 00:37:25.900 sys 0m1.078s 00:37:25.900 00:21:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:25.900 00:21:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:37:25.900 ************************************ 00:37:25.900 END TEST bdev_nbd 00:37:25.900 ************************************ 00:37:25.900 00:21:21 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:37:25.900 00:21:21 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:37:25.900 skipping fio tests on NVMe due to multi-ns failures. 
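waitfornbd_exit, expanded in the teardown above, is a bounded poll: up to 20 passes over /proc/partitions until the stopped device drops out, then break. As a standalone loop (the inter-attempt sleep is an assumption; the trace does not show one):

    # poll until the stopped nbd device leaves /proc/partitions (sketch of waitfornbd_exit)
    nbd_name=nbd0
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break   # gone: detach completed
        sleep 0.1                                          # assumed; not visible in the trace
    done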
00:37:25.900 00:21:21 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:37:25.900 00:21:21 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:25.900 00:21:21 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:37:25.900 00:21:21 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:37:25.900 00:21:21 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:25.900 00:21:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:25.900 ************************************ 00:37:25.900 START TEST bdev_verify 00:37:25.900 ************************************ 00:37:25.900 00:21:21 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:37:25.900 [2024-07-25 00:21:21.660241] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:25.900 [2024-07-25 00:21:21.660467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118584 ] 00:37:26.159 [2024-07-25 00:21:21.831564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:26.159 [2024-07-25 00:21:21.982492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:26.159 [2024-07-25 00:21:21.982511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:26.727 Running I/O for 5 seconds... 
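With fio skipped, the data-path tests in this stretch (verify, big-I/O verify, write_zeroes) all reuse one driver, the bdevperf example, varying only the -o, -w, and -t arguments; the latency table that follows summarizes this first verify run. The invocation, copied from the command line above (-C and the trailing '' are passed through from the harness unchanged):

    # bdevperf verify run as launched above: queue depth 128, 4 KiB I/O,
    # 5-second verify workload on cores 0-1 (mask 0x3)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3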
00:37:32.032 00:37:32.032 Latency(us) 00:37:32.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.032 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:32.032 Verification LBA range: start 0x0 length 0xa0000 00:37:32.032 Nvme0n1 : 5.00 11151.30 43.56 0.00 0.00 11418.40 845.27 17754.30 00:37:32.032 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:32.032 Verification LBA range: start 0xa0000 length 0xa0000 00:37:32.032 Nvme0n1 : 5.01 10996.45 42.95 0.00 0.00 11579.73 997.93 19065.02 00:37:32.032 =================================================================================================================== 00:37:32.032 Total : 22147.74 86.51 0.00 0.00 11498.51 845.27 19065.02 00:37:32.599 00:37:32.599 real 0m6.865s 00:37:32.599 user 0m12.659s 00:37:32.599 sys 0m0.222s 00:37:32.599 00:21:28 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:32.599 00:21:28 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:37:32.599 ************************************ 00:37:32.599 END TEST bdev_verify 00:37:32.599 ************************************ 00:37:32.858 00:21:28 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:37:32.858 00:21:28 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:37:32.858 00:21:28 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:32.858 00:21:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:32.858 ************************************ 00:37:32.858 START TEST bdev_verify_big_io 00:37:32.858 ************************************ 00:37:32.858 00:21:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:37:32.858 [2024-07-25 00:21:28.578965] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:32.858 [2024-07-25 00:21:28.579147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118671 ] 00:37:33.117 [2024-07-25 00:21:28.750936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:33.117 [2024-07-25 00:21:28.899540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.117 [2024-07-25 00:21:28.899550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.685 Running I/O for 5 seconds... 
00:37:38.958 00:37:38.958 Latency(us) 00:37:38.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.958 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:37:38.958 Verification LBA range: start 0x0 length 0xa000 00:37:38.958 Nvme0n1 : 5.05 885.62 55.35 0.00 0.00 141373.10 1437.32 158239.65 00:37:38.958 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:37:38.958 Verification LBA range: start 0xa000 length 0xa000 00:37:38.958 Nvme0n1 : 5.05 884.35 55.27 0.00 0.00 141656.23 893.67 278349.27 00:37:38.958 =================================================================================================================== 00:37:38.958 Total : 1769.97 110.62 0.00 0.00 141514.57 893.67 278349.27 00:37:39.895 00:37:39.895 real 0m7.085s 00:37:39.895 user 0m12.809s 00:37:39.895 sys 0m0.214s 00:37:39.895 00:21:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:39.895 00:21:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:37:39.895 ************************************ 00:37:39.895 END TEST bdev_verify_big_io 00:37:39.895 ************************************ 00:37:39.895 00:21:35 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:39.895 00:21:35 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:37:39.895 00:21:35 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:39.895 00:21:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:39.895 ************************************ 00:37:39.895 START TEST bdev_write_zeroes 00:37:39.895 ************************************ 00:37:39.895 00:21:35 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:39.895 [2024-07-25 00:21:35.698540] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:39.895 [2024-07-25 00:21:35.698677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118750 ] 00:37:40.154 [2024-07-25 00:21:35.854923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.154 [2024-07-25 00:21:36.005646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.720 Running I/O for 1 seconds... 
00:37:41.653 00:37:41.653 Latency(us) 00:37:41.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.653 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:37:41.653 Nvme0n1 : 1.00 60256.47 235.38 0.00 0.00 2118.83 930.91 6494.02 00:37:41.653 =================================================================================================================== 00:37:41.653 Total : 60256.47 235.38 0.00 0.00 2118.83 930.91 6494.02 00:37:42.588 00:37:42.588 real 0m2.624s 00:37:42.588 user 0m2.348s 00:37:42.588 sys 0m0.176s 00:37:42.588 00:21:38 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:42.588 00:21:38 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:37:42.588 ************************************ 00:37:42.588 END TEST bdev_write_zeroes 00:37:42.588 ************************************ 00:37:42.588 00:21:38 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:42.588 00:21:38 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:37:42.588 00:21:38 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:42.588 00:21:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:42.588 ************************************ 00:37:42.588 START TEST bdev_json_nonenclosed 00:37:42.588 ************************************ 00:37:42.588 00:21:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:42.588 [2024-07-25 00:21:38.390642] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:42.588 [2024-07-25 00:21:38.390838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118798 ] 00:37:42.847 [2024-07-25 00:21:38.561453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.847 [2024-07-25 00:21:38.711379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.847 [2024-07-25 00:21:38.711499] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:37:42.847 [2024-07-25 00:21:38.711522] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:42.847 [2024-07-25 00:21:38.711536] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:43.412 00:37:43.412 real 0m0.741s 00:37:43.412 user 0m0.511s 00:37:43.412 sys 0m0.129s 00:37:43.412 00:21:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:43.412 00:21:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:37:43.412 ************************************ 00:37:43.412 END TEST bdev_json_nonenclosed 00:37:43.412 ************************************ 00:37:43.413 00:21:39 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:43.413 00:21:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:37:43.413 00:21:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:43.413 00:21:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:43.413 ************************************ 00:37:43.413 START TEST bdev_json_nonarray 00:37:43.413 ************************************ 00:37:43.413 00:21:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:43.413 [2024-07-25 00:21:39.183914] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:43.413 [2024-07-25 00:21:39.184119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118828 ] 00:37:43.671 [2024-07-25 00:21:39.354461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.671 [2024-07-25 00:21:39.500894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.671 [2024-07-25 00:21:39.501011] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
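The two JSON negative tests hit the loader's two guard rails: nonenclosed.json fails because the top-level object is missing its braces, and nonarray.json fails because "subsystems" is not an array; the teardown errors from the second run continue below. For contrast, a minimal well-formed file in the shape the loader accepts. The full bdev.json is never printed in this trace, so this is inferred from the two error messages; the attach parameters are copied from the gen_nvme.sh output later in the log:

    # well-formed config shape (sketch; compare with the two errors above)
    cat > /tmp/wellformed.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF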
00:37:43.671 [2024-07-25 00:21:39.501041] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:43.671 [2024-07-25 00:21:39.501057] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:44.239 00:37:44.239 real 0m0.721s 00:37:44.239 user 0m0.499s 00:37:44.239 sys 0m0.121s 00:37:44.239 00:21:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:44.239 00:21:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:37:44.239 ************************************ 00:37:44.239 END TEST bdev_json_nonarray 00:37:44.239 ************************************ 00:37:44.239 00:21:39 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:37:44.239 00:21:39 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:37:44.239 00:21:39 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:37:44.239 00:21:39 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:37:44.239 00:21:39 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:37:44.239 00:21:39 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:37:44.239 00:21:39 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:37:44.239 00:21:39 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:37:44.239 00:21:39 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:37:44.239 00:21:39 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:37:44.239 00:21:39 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:37:44.239 00:37:44.239 real 0m30.458s 00:37:44.239 user 0m45.874s 00:37:44.239 sys 0m3.295s 00:37:44.239 00:21:39 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:44.239 00:21:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:37:44.239 ************************************ 00:37:44.239 END TEST blockdev_nvme 00:37:44.239 ************************************ 00:37:44.239 00:21:39 -- spdk/autotest.sh@217 -- # uname -s 00:37:44.239 00:21:39 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:37:44.239 00:21:39 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:37:44.239 00:21:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:44.239 00:21:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:44.239 00:21:39 -- common/autotest_common.sh@10 -- # set +x 00:37:44.240 ************************************ 00:37:44.240 START TEST blockdev_nvme_gpt 00:37:44.240 ************************************ 00:37:44.240 00:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:37:44.240 * Looking for test storage... 
00:37:44.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=118900 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 118900 00:37:44.240 00:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:37:44.240 00:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 118900 ']' 00:37:44.240 00:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:44.240 00:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:44.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:44.240 00:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
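waitforlisten's body is not expanded in this trace; functionally it blocks until the freshly started spdk_tgt answers on its RPC socket. A minimal sketch of that wait, assuming rpc_get_methods as the readiness probe (the harness's actual probe may differ):

    # start the target and block until its RPC socket answers (sketch)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_tgt_pid" || exit 1   # give up if the target died during startup
        sleep 0.2
    done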
00:37:44.240 00:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:44.240 00:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:37:44.240 [2024-07-25 00:21:40.107664] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:44.240 [2024-07-25 00:21:40.107898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118900 ] 00:37:44.498 [2024-07-25 00:21:40.266045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.757 [2024-07-25 00:21:40.425410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.325 00:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:45.325 00:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:37:45.325 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:37:45.325 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:37:45.325 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:45.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:37:45.585 Waiting for block devices as requested 00:37:45.585 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:37:45.844 00:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:37:45.844 00:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:37:45.844 00:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:37:45.844 00:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:37:45.844 00:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:37:45.844 00:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:37:45.844 00:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:45.844 00:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1') 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:37:45.844 BYT; 00:37:45.844 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:37:45.844 BYT; 00:37:45.844 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ 
\d\i\s\k\ \l\a\b\e\l* ]] 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:37:45.844 00:21:41 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:37:45.844 00:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:37:47.221 The operation has completed successfully. 
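setup_gpt_conf turns the raw NVMe namespace into the two test partitions: parted writes a fresh GPT label split into two halves, then sgdisk stamps each partition with an SPDK type GUID and a fixed unique GUID so the gpt bdev module can claim it. The sequence as run, with commands and GUIDs verbatim from the trace (the second sgdisk call follows below):

    # partition layout used by the gpt tests
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    # partition 1: SPDK_GPT_GUID type + the unique GUID the test expects
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    # partition 2: SPDK_GPT_OLD_GUID type + the second unique GUID
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1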
00:37:47.221 00:21:42 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:37:48.155 The operation has completed successfully. 00:37:48.155 00:21:43 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:48.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:37:48.414 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:37:48.980 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:37:48.980 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.980 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:37:48.980 [] 00:37:48.980 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.980 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:37:48.980 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:37:48.980 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:37:48.980 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:37:48.980 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:37:48.980 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.980 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:37:48.980 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.980 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.981 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:37:48.981 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.981 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.981 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.981 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:37:48.981 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:37:48.981 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:37:48.981 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.981 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:37:49.243 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:37:49.243 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:37:49.243 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:37:49.243 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1p1 00:37:49.243 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:37:49.244 00:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 118900 00:37:49.244 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 118900 ']' 00:37:49.244 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 118900 00:37:49.244 00:21:44 blockdev_nvme_gpt -- 
common/autotest_common.sh@955 -- # uname 00:37:49.244 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:49.244 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118900 00:37:49.244 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:49.244 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:49.244 killing process with pid 118900 00:37:49.244 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118900' 00:37:49.244 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 118900 00:37:49.244 00:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 118900 00:37:51.157 00:21:46 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:51.157 00:21:46 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:37:51.157 00:21:46 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:37:51.157 00:21:46 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:51.157 00:21:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:37:51.157 ************************************ 00:37:51.157 START TEST bdev_hello_world 00:37:51.157 ************************************ 00:37:51.157 00:21:46 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:37:51.157 [2024-07-25 00:21:46.657742] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:51.157 [2024-07-25 00:21:46.657931] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119284 ] 00:37:51.157 [2024-07-25 00:21:46.829395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:51.157 [2024-07-25 00:21:46.984516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:51.725 [2024-07-25 00:21:47.327665] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:37:51.725 [2024-07-25 00:21:47.327721] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:37:51.725 [2024-07-25 00:21:47.327759] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:37:51.725 [2024-07-25 00:21:47.330317] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:37:51.725 [2024-07-25 00:21:47.330849] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:37:51.725 [2024-07-25 00:21:47.330890] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:37:51.725 [2024-07-25 00:21:47.331137] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
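The hello_bdev NOTICE lines above trace the example's whole life cycle against the first GPT partition: open the bdev, open an I/O channel, write the string, read it back ("Stopping app" follows below). The invocation, copied from the run_test line:

    # hello_bdev run exactly as invoked above: load the bdev layer from the
    # JSON config, then write and read back "Hello World!" on Nvme0n1p1
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1p1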
00:37:51.725 00:37:51.725 [2024-07-25 00:21:47.331183] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:37:52.661 00:37:52.661 real 0m1.650s 00:37:52.661 user 0m1.349s 00:37:52.661 sys 0m0.201s 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:37:52.662 ************************************ 00:37:52.662 END TEST bdev_hello_world 00:37:52.662 ************************************ 00:37:52.662 00:21:48 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:37:52.662 00:21:48 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:52.662 00:21:48 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:52.662 00:21:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:37:52.662 ************************************ 00:37:52.662 START TEST bdev_bounds 00:37:52.662 ************************************ 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=119321 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:37:52.662 Process bdevio pid: 119321 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 119321' 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 119321 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 119321 ']' 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:52.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:52.662 00:21:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:37:52.662 [2024-07-25 00:21:48.362211] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:37:52.662 [2024-07-25 00:21:48.362394] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119321 ] 00:37:52.921 [2024-07-25 00:21:48.537097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:52.921 [2024-07-25 00:21:48.690232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:52.921 [2024-07-25 00:21:48.690385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:52.921 [2024-07-25 00:21:48.690395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:53.489 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:53.489 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:37:53.489 00:21:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:37:53.748 I/O targets: 00:37:53.748 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:37:53.748 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:37:53.748 00:37:53.748 00:37:53.748 CUnit - A unit testing framework for C - Version 2.1-3 00:37:53.748 http://cunit.sourceforge.net/ 00:37:53.748 00:37:53.748 00:37:53.748 Suite: bdevio tests on: Nvme0n1p2 00:37:53.748 Test: blockdev write read block ...passed 00:37:53.748 Test: blockdev write zeroes read block ...passed 00:37:53.748 Test: blockdev write zeroes read no split ...passed 00:37:53.748 Test: blockdev write zeroes read split ...passed 00:37:53.748 Test: blockdev write zeroes read split partial ...passed 00:37:53.748 Test: blockdev reset ...[2024-07-25 00:21:49.520799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:37:53.748 [2024-07-25 00:21:49.524293] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:37:53.748 passed 00:37:53.748 Test: blockdev write read 8 blocks ...passed 00:37:53.748 Test: blockdev write read size > 128k ...passed 00:37:53.748 Test: blockdev write read invalid size ...passed 00:37:53.748 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:53.748 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:53.748 Test: blockdev write read max offset ...passed 00:37:53.748 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:53.748 Test: blockdev writev readv 8 blocks ...passed 00:37:53.748 Test: blockdev writev readv 30 x 1block ...passed 00:37:53.748 Test: blockdev writev readv block ...passed 00:37:53.748 Test: blockdev writev readv size > 128k ...passed 00:37:53.748 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:53.748 Test: blockdev comparev and writev ...[2024-07-25 00:21:49.534301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x29920d000 len:0x1000 00:37:53.748 [2024-07-25 00:21:49.534390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:37:53.748 passed 00:37:53.748 Test: blockdev nvme passthru rw ...passed 00:37:53.748 Test: blockdev nvme passthru vendor specific ...passed 00:37:53.748 Test: blockdev nvme admin passthru ...passed 00:37:53.748 Test: blockdev copy ...passed 00:37:53.748 Suite: bdevio tests on: Nvme0n1p1 00:37:53.748 Test: blockdev write read block ...passed 00:37:53.748 Test: blockdev write zeroes read block ...passed 00:37:53.748 Test: blockdev write zeroes read no split ...passed 00:37:53.748 Test: blockdev write zeroes read split ...passed 00:37:53.748 Test: blockdev write zeroes read split partial ...passed 00:37:53.748 Test: blockdev reset ...[2024-07-25 00:21:49.586790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:37:53.748 [2024-07-25 00:21:49.590309] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:37:53.748 passed 00:37:53.748 Test: blockdev write read 8 blocks ...passed 00:37:53.748 Test: blockdev write read size > 128k ...passed 00:37:53.748 Test: blockdev write read invalid size ...passed 00:37:53.748 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:53.748 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:53.748 Test: blockdev write read max offset ...passed 00:37:53.748 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:53.748 Test: blockdev writev readv 8 blocks ...passed 00:37:53.748 Test: blockdev writev readv 30 x 1block ...passed 00:37:53.748 Test: blockdev writev readv block ...passed 00:37:53.748 Test: blockdev writev readv size > 128k ...passed 00:37:53.748 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:53.748 Test: blockdev comparev and writev ...[2024-07-25 00:21:49.600109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x299209000 len:0x1000 00:37:53.748 [2024-07-25 00:21:49.600210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:37:53.748 passed 00:37:53.748 Test: blockdev nvme passthru rw ...passed 00:37:53.748 Test: blockdev nvme passthru vendor specific ...passed 00:37:53.748 Test: blockdev nvme admin passthru ...passed 00:37:53.748 Test: blockdev copy ...passed 00:37:53.748 00:37:53.748 Run Summary: Type Total Ran Passed Failed Inactive 00:37:53.748 suites 2 2 n/a 0 0 00:37:53.748 tests 46 46 46 0 0 00:37:53.748 asserts 284 284 284 0 n/a 00:37:53.748 00:37:53.748 Elapsed time = 0.360 seconds 00:37:53.748 0 00:37:54.007 00:21:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 119321 00:37:54.008 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 119321 ']' 00:37:54.008 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 119321 00:37:54.008 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:37:54.008 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:54.008 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119321 00:37:54.008 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:54.008 killing process with pid 119321 00:37:54.008 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:54.008 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119321' 00:37:54.008 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 119321 00:37:54.008 00:21:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 119321 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:37:54.945 00:37:54.945 real 0m2.295s 00:37:54.945 user 0m5.598s 00:37:54.945 sys 0m0.346s 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:54.945 ************************************ 00:37:54.945 END TEST bdev_bounds 00:37:54.945 ************************************ 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:37:54.945 00:21:50 blockdev_nvme_gpt -- 
bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:37:54.945 00:21:50 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:37:54.945 00:21:50 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:54.945 00:21:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:37:54.945 ************************************ 00:37:54.945 START TEST bdev_nbd 00:37:54.945 ************************************ 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=119374 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 119374 /var/tmp/spdk-nbd.sock 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 119374 ']' 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:54.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
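nbd_function_test drives everything that follows through three RPCs against the dedicated /var/tmp/spdk-nbd.sock server: start an export, enumerate exports, stop an export. The calls as they appear in this run, one export per bdev:

    # export a bdev as a kernel block device over NBD, then detach it
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1p1 /dev/nbd0   # prints the nbd device it attached
    "$rpc" -s "$sock" nbd_get_disks                        # JSON array of active exports
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0              # detach; waitfornbd_exit then polls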
00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:54.945 00:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:37:54.945 [2024-07-25 00:21:50.715069] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:37:54.945 [2024-07-25 00:21:50.715247] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:55.203 [2024-07-25 00:21:50.887556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.203 [2024-07-25 00:21:51.041107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:37:55.767 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:56.025 00:21:51 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:56.025 1+0 records in 00:37:56.025 1+0 records out 00:37:56.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630632 s, 6.5 MB/s 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:37:56.025 00:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:56.283 1+0 records in 00:37:56.283 1+0 records out 00:37:56.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498247 s, 8.2 MB/s 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:56.283 00:21:52 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:37:56.283 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:56.541 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:37:56.541 { 00:37:56.541 "nbd_device": "/dev/nbd0", 00:37:56.541 "bdev_name": "Nvme0n1p1" 00:37:56.541 }, 00:37:56.541 { 00:37:56.541 "nbd_device": "/dev/nbd1", 00:37:56.541 "bdev_name": "Nvme0n1p2" 00:37:56.541 } 00:37:56.541 ]' 00:37:56.541 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:37:56.541 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:37:56.541 { 00:37:56.541 "nbd_device": "/dev/nbd0", 00:37:56.541 "bdev_name": "Nvme0n1p1" 00:37:56.541 }, 00:37:56.541 { 00:37:56.541 "nbd_device": "/dev/nbd1", 00:37:56.541 "bdev_name": "Nvme0n1p2" 00:37:56.541 } 00:37:56.541 ]' 00:37:56.541 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:37:56.541 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:37:56.541 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:56.541 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:56.541 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:56.541 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:56.541 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:56.541 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:56.799 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:56.799 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:56.799 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:56.799 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:56.799 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:56.799 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:56.799 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:56.799 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:56.799 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:56.799 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:57.058 00:21:52 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:37:57.317 /dev/nbd0 00:37:57.317 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:57.317 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:57.317 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:57.317 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:57.317 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:57.317 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:57.317 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:57.317 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:57.317 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:57.317 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:57.317 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:57.575 1+0 records in 00:37:57.575 1+0 records out 00:37:57.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444234 s, 9.2 MB/s 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:37:57.575 /dev/nbd1 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:57.575 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:57.575 1+0 records in 00:37:57.575 1+0 records out 00:37:57.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606609 s, 6.8 MB/s 00:37:57.576 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.576 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:57.576 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.576 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:57.576 00:21:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:57.576 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:57.576 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:57.576 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:57.576 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:57.576 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:37:57.834 { 00:37:57.834 "nbd_device": "/dev/nbd0", 00:37:57.834 "bdev_name": "Nvme0n1p1" 00:37:57.834 }, 00:37:57.834 { 00:37:57.834 "nbd_device": "/dev/nbd1", 00:37:57.834 "bdev_name": "Nvme0n1p2" 00:37:57.834 } 00:37:57.834 ]' 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:37:57.834 { 00:37:57.834 "nbd_device": "/dev/nbd0", 00:37:57.834 "bdev_name": "Nvme0n1p1" 00:37:57.834 }, 00:37:57.834 { 00:37:57.834 "nbd_device": "/dev/nbd1", 00:37:57.834 "bdev_name": "Nvme0n1p2" 00:37:57.834 } 00:37:57.834 ]' 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:37:57.834 /dev/nbd1' 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:37:57.834 /dev/nbd1' 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:37:57.834 00:21:53 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:37:57.834 256+0 records in 00:37:57.834 256+0 records out 00:37:57.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00754367 s, 139 MB/s 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:57.834 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:37:58.092 256+0 records in 00:37:58.092 256+0 records out 00:37:58.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0786885 s, 13.3 MB/s 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:37:58.092 256+0 records in 00:37:58.092 256+0 records out 00:37:58.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0992605 s, 10.6 MB/s 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:58.092 00:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:58.352 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:58.352 00:21:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:58.352 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:58.352 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:58.352 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:58.352 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:58.352 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:58.352 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:58.352 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:58.352 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:37:58.610 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:58.610 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:58.610 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:58.610 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:58.610 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:58.610 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:58.610 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:58.610 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:58.610 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:58.610 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:58.610 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:37:58.869 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:58.870 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:58.870 
00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:37:58.870 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:37:58.870 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:37:59.128 malloc_lvol_verify 00:37:59.128 00:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:37:59.386 42a539f2-3b05-4dd7-ac28-18b36d363d5c 00:37:59.387 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:37:59.387 27ef0946-d6b3-428c-bd65-cef0150b417c 00:37:59.387 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:37:59.645 /dev/nbd0 00:37:59.645 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:37:59.645 mke2fs 1.47.0 (5-Feb-2023) 00:37:59.645 00:37:59.645 Filesystem too small for a journal 00:37:59.645 Discarding device blocks: 0/1024 done 00:37:59.645 Creating filesystem with 1024 4k blocks and 1024 inodes 00:37:59.645 00:37:59.645 Allocating group tables: 0/1 done 00:37:59.645 Writing inode tables: 0/1 done 00:37:59.645 Writing superblocks and filesystem accounting information: 0/1 done 00:37:59.645 00:37:59.645 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:37:59.645 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:37:59.645 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:59.645 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:59.645 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:59.645 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:59.645 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:59.645 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 
119374 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 119374 ']' 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 119374 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119374 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:59.904 killing process with pid 119374 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119374' 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 119374 00:37:59.904 00:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 119374 00:38:00.840 00:21:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:38:00.840 00:38:00.840 real 0m5.995s 00:38:00.840 user 0m8.572s 00:38:00.840 sys 0m1.503s 00:38:00.840 00:21:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:00.840 00:21:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:38:00.840 ************************************ 00:38:00.840 END TEST bdev_nbd 00:38:00.840 ************************************ 00:38:00.840 00:21:56 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:38:00.840 00:21:56 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:38:00.840 00:21:56 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:38:00.840 skipping fio tests on NVMe due to multi-ns failures. 00:38:00.840 00:21:56 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:38:00.840 00:21:56 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:00.840 00:21:56 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:38:00.840 00:21:56 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:38:00.840 00:21:56 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:00.840 00:21:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:38:00.840 ************************************ 00:38:00.840 START TEST bdev_verify 00:38:00.840 ************************************ 00:38:00.840 00:21:56 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:38:01.099 [2024-07-25 00:21:56.751145] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
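# The verify pass starting here and the two bdevperf passes after it all use
# the same harness with different workload flags. Boiled down (invocations as
# logged; flag glosses per standard bdevperf usage):
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
$bdevperf --json $conf -q 128 -o 4096  -w verify       -t 5 -C -m 0x3  # queue depth 128, 4 KiB I/O, 5 s, cores 0x3
$bdevperf --json $conf -q 128 -o 65536 -w verify       -t 5 -C -m 0x3  # bdev_verify_big_io: 64 KiB I/O
$bdevperf --json $conf -q 128 -o 4096  -w write_zeroes -t 1            # bdev_write_zeroes: 1 s, single core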
00:38:01.099 [2024-07-25 00:21:56.751325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119603 ] 00:38:01.099 [2024-07-25 00:21:56.922567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:01.358 [2024-07-25 00:21:57.076074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.358 [2024-07-25 00:21:57.076095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:01.617 Running I/O for 5 seconds... 00:38:06.884 00:38:06.884 Latency(us) 00:38:06.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.885 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:06.885 Verification LBA range: start 0x0 length 0x4ff80 00:38:06.885 Nvme0n1p1 : 5.02 4920.85 19.22 0.00 0.00 25942.28 3902.37 26929.34 00:38:06.885 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:38:06.885 Verification LBA range: start 0x4ff80 length 0x4ff80 00:38:06.885 Nvme0n1p1 : 5.02 4792.79 18.72 0.00 0.00 26585.35 3649.16 35508.60 00:38:06.885 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:06.885 Verification LBA range: start 0x0 length 0x4ff7f 00:38:06.885 Nvme0n1p2 : 5.02 4919.12 19.22 0.00 0.00 25915.43 3530.01 26691.03 00:38:06.885 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:38:06.885 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:38:06.885 Nvme0n1p2 : 5.02 4794.06 18.73 0.00 0.00 26625.73 4140.68 35746.91 00:38:06.885 =================================================================================================================== 00:38:06.885 Total : 19426.82 75.89 0.00 0.00 26262.76 3530.01 35746.91 00:38:07.856 00:38:07.856 real 0m6.736s 00:38:07.856 user 0m12.406s 00:38:07.856 sys 0m0.220s 00:38:07.856 00:22:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:07.856 00:22:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:38:07.856 ************************************ 00:38:07.856 END TEST bdev_verify 00:38:07.856 ************************************ 00:38:07.856 00:22:03 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:38:07.856 00:22:03 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:38:07.856 00:22:03 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:07.856 00:22:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:38:07.856 ************************************ 00:38:07.856 START TEST bdev_verify_big_io 00:38:07.856 ************************************ 00:38:07.856 00:22:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:38:07.856 [2024-07-25 00:22:03.517377] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:38:07.856 [2024-07-25 00:22:03.517515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119686 ] 00:38:07.856 [2024-07-25 00:22:03.677328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:08.114 [2024-07-25 00:22:03.845096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.114 [2024-07-25 00:22:03.845104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.683 Running I/O for 5 seconds... 00:38:13.948 00:38:13.948 Latency(us) 00:38:13.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:13.948 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:38:13.948 Verification LBA range: start 0x0 length 0x4ff8 00:38:13.948 Nvme0n1p1 : 5.15 422.51 26.41 0.00 0.00 295167.14 4706.68 375580.86 00:38:13.948 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:38:13.948 Verification LBA range: start 0x4ff8 length 0x4ff8 00:38:13.948 Nvme0n1p1 : 5.11 425.63 26.60 0.00 0.00 294028.64 5064.15 371767.85 00:38:13.948 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:38:13.948 Verification LBA range: start 0x0 length 0x4ff7 00:38:13.948 Nvme0n1p2 : 5.21 442.60 27.66 0.00 0.00 275411.01 1176.67 377487.36 00:38:13.948 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:38:13.948 Verification LBA range: start 0x4ff7 length 0x4ff7 00:38:13.948 Nvme0n1p2 : 5.19 443.67 27.73 0.00 0.00 275393.18 1675.64 373674.36 00:38:13.948 =================================================================================================================== 00:38:13.948 Total : 1734.41 108.40 0.00 0.00 284725.77 1176.67 377487.36 00:38:15.320 00:38:15.320 real 0m7.310s 00:38:15.320 user 0m13.570s 00:38:15.320 sys 0m0.226s 00:38:15.320 00:22:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:15.320 00:22:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:38:15.320 ************************************ 00:38:15.320 END TEST bdev_verify_big_io 00:38:15.320 ************************************ 00:38:15.320 00:22:10 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:15.320 00:22:10 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:38:15.320 00:22:10 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:15.321 00:22:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:38:15.321 ************************************ 00:38:15.321 START TEST bdev_write_zeroes 00:38:15.321 ************************************ 00:38:15.321 00:22:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:15.321 [2024-07-25 00:22:10.900681] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:38:15.321 [2024-07-25 00:22:10.900874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119787 ] 00:38:15.321 [2024-07-25 00:22:11.074126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.578 [2024-07-25 00:22:11.226379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.836 Running I/O for 1 seconds... 00:38:16.768 00:38:16.768 Latency(us) 00:38:16.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:16.768 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:38:16.768 Nvme0n1p1 : 1.01 21925.80 85.65 0.00 0.00 5823.47 3664.06 11796.48 00:38:16.768 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:38:16.768 Nvme0n1p2 : 1.01 21889.60 85.51 0.00 0.00 5824.20 3127.85 11736.90 00:38:16.768 =================================================================================================================== 00:38:16.768 Total : 43815.40 171.15 0.00 0.00 5823.83 3127.85 11796.48 00:38:18.140 00:38:18.140 real 0m2.780s 00:38:18.140 user 0m2.466s 00:38:18.140 sys 0m0.214s 00:38:18.140 00:22:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:18.140 ************************************ 00:38:18.140 END TEST bdev_write_zeroes 00:38:18.140 00:22:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:38:18.140 ************************************ 00:38:18.140 00:22:13 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:18.140 00:22:13 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:38:18.140 00:22:13 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:18.140 00:22:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:38:18.140 ************************************ 00:38:18.140 START TEST bdev_json_nonenclosed 00:38:18.140 ************************************ 00:38:18.140 00:22:13 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:18.140 [2024-07-25 00:22:13.729142] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:38:18.140 [2024-07-25 00:22:13.729310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119835 ] 00:38:18.140 [2024-07-25 00:22:13.899322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:18.398 [2024-07-25 00:22:14.049511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.398 [2024-07-25 00:22:14.049602] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:38:18.398 [2024-07-25 00:22:14.049624] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:38:18.398 [2024-07-25 00:22:14.049638] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:18.656 00:38:18.656 real 0m0.718s 00:38:18.656 user 0m0.499s 00:38:18.656 sys 0m0.118s 00:38:18.656 00:22:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:18.656 ************************************ 00:38:18.656 END TEST bdev_json_nonenclosed 00:38:18.656 ************************************ 00:38:18.656 00:22:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:38:18.656 00:22:14 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:18.656 00:22:14 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:38:18.656 00:22:14 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:18.656 00:22:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:38:18.656 ************************************ 00:38:18.656 START TEST bdev_json_nonarray 00:38:18.656 ************************************ 00:38:18.656 00:22:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:18.656 [2024-07-25 00:22:14.493616] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:38:18.656 [2024-07-25 00:22:14.493790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119859 ] 00:38:18.914 [2024-07-25 00:22:14.665771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.172 [2024-07-25 00:22:14.842272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.172 [2024-07-25 00:22:14.842386] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
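# Both JSON negative tests above feed bdevperf a deliberately malformed
# --json config and expect exactly the errors shown: the file must be a
# top-level object whose "subsystems" member is an array. A minimal
# well-formed counterpart, for contrast (a sketch; the actual
# nonenclosed.json/nonarray.json payloads are not reproduced in this log):
cat > /tmp/minimal.json <<'EOF'
{
  "subsystems": []
}
EOF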
00:38:19.172 [2024-07-25 00:22:14.842411] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:38:19.172 [2024-07-25 00:22:14.842426] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:19.430 00:38:19.430 real 0m0.751s 00:38:19.430 user 0m0.531s 00:38:19.430 sys 0m0.119s 00:38:19.430 00:22:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:19.430 ************************************ 00:38:19.430 END TEST bdev_json_nonarray 00:38:19.430 ************************************ 00:38:19.430 00:22:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:38:19.430 00:22:15 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:38:19.430 00:22:15 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:38:19.430 00:22:15 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:38:19.430 00:22:15 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:19.430 00:22:15 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:19.430 00:22:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:38:19.430 ************************************ 00:38:19.430 START TEST bdev_gpt_uuid 00:38:19.430 ************************************ 00:38:19.430 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:38:19.430 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:38:19.430 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:38:19.430 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=119886 00:38:19.430 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:38:19.430 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 119886 00:38:19.431 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 119886 ']' 00:38:19.431 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:19.431 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:19.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:19.431 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:19.431 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:19.431 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:38:19.431 00:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:38:19.689 [2024-07-25 00:22:15.315407] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
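# bdev_gpt_uuid loads bdev.json into spdk_tgt and looks the two GPT
# partitions up by their unique partition GUIDs. The checks that follow
# reduce to (rpc.py defaults to /var/tmp/spdk.sock; UUIDs and jq filters as
# logged below):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r '.[0].aliases[0]'
$rpc bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df | jq -r '.[0].driver_specific.gpt.unique_partition_guid'
# Each query must hand back the same UUID it was looked up by.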
00:38:19.689 [2024-07-25 00:22:15.315582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119886 ] 00:38:19.689 [2024-07-25 00:22:15.486787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.948 [2024-07-25 00:22:15.642347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:38:20.517 Some configs were skipped because the RPC state that can call them passed over. 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:38:20.517 { 00:38:20.517 "name": "Nvme0n1p1", 00:38:20.517 "aliases": [ 00:38:20.517 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:38:20.517 ], 00:38:20.517 "product_name": "GPT Disk", 00:38:20.517 "block_size": 4096, 00:38:20.517 "num_blocks": 655104, 00:38:20.517 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:38:20.517 "assigned_rate_limits": { 00:38:20.517 "rw_ios_per_sec": 0, 00:38:20.517 "rw_mbytes_per_sec": 0, 00:38:20.517 "r_mbytes_per_sec": 0, 00:38:20.517 "w_mbytes_per_sec": 0 00:38:20.517 }, 00:38:20.517 "claimed": false, 00:38:20.517 "zoned": false, 00:38:20.517 "supported_io_types": { 00:38:20.517 "read": true, 00:38:20.517 "write": true, 00:38:20.517 "unmap": true, 00:38:20.517 "flush": true, 00:38:20.517 "reset": true, 00:38:20.517 "nvme_admin": false, 00:38:20.517 "nvme_io": false, 00:38:20.517 "nvme_io_md": false, 00:38:20.517 "write_zeroes": true, 00:38:20.517 "zcopy": false, 00:38:20.517 "get_zone_info": false, 00:38:20.517 "zone_management": false, 00:38:20.517 "zone_append": false, 00:38:20.517 "compare": true, 00:38:20.517 "compare_and_write": false, 00:38:20.517 "abort": true, 00:38:20.517 "seek_hole": false, 00:38:20.517 "seek_data": false, 00:38:20.517 "copy": true, 00:38:20.517 "nvme_iov_md": false 00:38:20.517 }, 00:38:20.517 "driver_specific": { 
00:38:20.517 "gpt": { 00:38:20.517 "base_bdev": "Nvme0n1", 00:38:20.517 "offset_blocks": 256, 00:38:20.517 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:38:20.517 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:38:20.517 "partition_name": "SPDK_TEST_first" 00:38:20.517 } 00:38:20.517 } 00:38:20.517 } 00:38:20.517 ]' 00:38:20.517 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:38:20.776 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:38:20.776 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:38:20.776 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:38:20.776 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:38:20.776 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:38:20.776 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:38:20.776 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.776 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:38:20.776 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.776 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:38:20.776 { 00:38:20.776 "name": "Nvme0n1p2", 00:38:20.776 "aliases": [ 00:38:20.776 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:38:20.776 ], 00:38:20.776 "product_name": "GPT Disk", 00:38:20.776 "block_size": 4096, 00:38:20.776 "num_blocks": 655103, 00:38:20.776 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:38:20.776 "assigned_rate_limits": { 00:38:20.776 "rw_ios_per_sec": 0, 00:38:20.776 "rw_mbytes_per_sec": 0, 00:38:20.776 "r_mbytes_per_sec": 0, 00:38:20.776 "w_mbytes_per_sec": 0 00:38:20.776 }, 00:38:20.776 "claimed": false, 00:38:20.776 "zoned": false, 00:38:20.776 "supported_io_types": { 00:38:20.776 "read": true, 00:38:20.776 "write": true, 00:38:20.776 "unmap": true, 00:38:20.776 "flush": true, 00:38:20.776 "reset": true, 00:38:20.776 "nvme_admin": false, 00:38:20.776 "nvme_io": false, 00:38:20.776 "nvme_io_md": false, 00:38:20.776 "write_zeroes": true, 00:38:20.776 "zcopy": false, 00:38:20.777 "get_zone_info": false, 00:38:20.777 "zone_management": false, 00:38:20.777 "zone_append": false, 00:38:20.777 "compare": true, 00:38:20.777 "compare_and_write": false, 00:38:20.777 "abort": true, 00:38:20.777 "seek_hole": false, 00:38:20.777 "seek_data": false, 00:38:20.777 "copy": true, 00:38:20.777 "nvme_iov_md": false 00:38:20.777 }, 00:38:20.777 "driver_specific": { 00:38:20.777 "gpt": { 00:38:20.777 "base_bdev": "Nvme0n1", 00:38:20.777 "offset_blocks": 655360, 00:38:20.777 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:38:20.777 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:38:20.777 "partition_name": "SPDK_TEST_second" 00:38:20.777 } 00:38:20.777 } 00:38:20.777 } 00:38:20.777 ]' 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 119886 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 119886 ']' 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 119886 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119886 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:20.777 killing process with pid 119886 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119886' 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 119886 00:38:20.777 00:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 119886 00:38:22.682 00:38:22.682 real 0m2.945s 00:38:22.682 user 0m2.964s 00:38:22.682 sys 0m0.434s 00:38:22.682 00:22:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:22.682 00:22:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:38:22.682 ************************************ 00:38:22.682 END TEST bdev_gpt_uuid 00:38:22.682 ************************************ 00:38:22.682 00:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:38:22.682 00:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:38:22.682 00:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:38:22.682 00:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:38:22.682 00:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:38:22.682 00:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:38:22.682 00:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:38:22.682 00:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:38:22.682 00:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:22.682 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:38:22.941 Waiting for block devices as requested 00:38:22.941 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:22.941 00:22:18 
blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:38:22.941 00:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:38:23.200 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:38:23.200 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:38:23.200 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:38:23.200 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:38:23.200 00:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:38:23.200 ************************************ 00:38:23.200 END TEST blockdev_nvme_gpt 00:38:23.200 ************************************ 00:38:23.200 00:38:23.200 real 0m39.014s 00:38:23.200 user 0m55.274s 00:38:23.200 sys 0m5.696s 00:38:23.200 00:22:18 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:23.200 00:22:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:38:23.200 00:22:19 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:38:23.200 00:22:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:23.200 00:22:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:23.200 00:22:19 -- common/autotest_common.sh@10 -- # set +x 00:38:23.200 ************************************ 00:38:23.200 START TEST nvme 00:38:23.200 ************************************ 00:38:23.200 00:22:19 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:38:23.459 * Looking for test storage... 00:38:23.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:38:23.459 00:22:19 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:23.718 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:38:23.718 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:38:24.285 00:22:20 nvme -- nvme/nvme.sh@79 -- # uname 00:38:24.285 00:22:20 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:38:24.285 00:22:20 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:38:24.285 00:22:20 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:38:24.285 00:22:20 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:38:24.285 00:22:20 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:38:24.285 00:22:20 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:38:24.285 00:22:20 nvme -- common/autotest_common.sh@1071 -- # stubpid=120251 00:38:24.285 00:22:20 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:38:24.285 Waiting for stub to ready for secondary processes... 00:38:24.285 00:22:20 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:38:24.285 00:22:20 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:38:24.285 00:22:20 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/120251 ]] 00:38:24.285 00:22:20 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:38:24.544 [2024-07-25 00:22:20.195253] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
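A note on the two steps just completed in the trace. The bdev_gpt_uuid test fetched each GPT partition bdev over JSON-RPC, asserted that exactly one bdev matched, and compared both the bdev's alias and its .driver_specific.gpt.unique_partition_guid against the GUID recorded in the on-disk partition table. In the wipefs cleanup that followed, the erased bytes 45 46 49 20 50 41 52 54 are ASCII for "EFI PART", the GPT header signature, and 55 aa is the protective-MBR boot signature. Below is a minimal standalone sketch of the same GUID check, reusing the first partition's GUID from the trace above; it assumes a running SPDK target serving RPC, and the rpc.py path mirrors the repo layout used elsewhere in this log rather than a command actually issued in this run.

    #!/bin/bash
    # Sketch of the GUID check performed by bdev/blockdev.sh above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # assumed script location
    expected=6f89f330-603b-4116-ac73-2ca8eae53030        # first partition's GUID from the trace
    # GPT partition bdevs register their unique partition GUID as an alias,
    # so the bdev can be looked up by the GUID itself.
    bdev_json=$("$rpc" bdev_get_bdevs -b "$expected")
    # Exactly one bdev should match the queried name.
    [[ $(jq -r 'length' <<<"$bdev_json") == 1 ]] || exit 1
    # The GUID surfaced by the GPT vbdev module should equal the alias we queried.
    got=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev_json")
    [[ $got == "$expected" ]] && echo "GUID check OK"
    # For reference, decoding the wipefs magic printed above:
    #   echo "45 46 49 20 50 41 52 54" | xxd -r -p    # prints: EFI PART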
00:38:24.544 [2024-07-25 00:22:20.195432] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:38:25.112 [2024-07-25 00:22:20.931752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:25.374 [2024-07-25 00:22:21.139273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:25.374 [2024-07-25 00:22:21.139375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:25.374 [2024-07-25 00:22:21.139407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:25.374 [2024-07-25 00:22:21.148510] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:38:25.374 [2024-07-25 00:22:21.148581] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:38:25.374 [2024-07-25 00:22:21.155304] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:38:25.374 [2024-07-25 00:22:21.155466] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:38:25.374 done. 00:38:25.374 00:22:21 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:38:25.374 00:22:21 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:38:25.374 00:22:21 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:38:25.374 00:22:21 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:38:25.374 00:22:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:25.374 00:22:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:38:25.374 ************************************ 00:38:25.374 START TEST nvme_reset 00:38:25.374 ************************************ 00:38:25.374 00:22:21 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:38:25.692 Initializing NVMe Controllers 00:38:25.692 Skipping QEMU NVMe SSD at 0000:00:10.0 00:38:25.692 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:38:25.692 00:38:25.692 real 0m0.268s 00:38:25.692 user 0m0.100s 00:38:25.692 sys 0m0.120s 00:38:25.692 00:22:21 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:25.692 00:22:21 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:38:25.692 ************************************ 00:38:25.692 END TEST nvme_reset 00:38:25.692 ************************************ 00:38:25.692 00:22:21 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:38:25.692 00:22:21 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:25.692 00:22:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:25.692 00:22:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:38:25.692 ************************************ 00:38:25.692 START TEST nvme_identify 00:38:25.692 ************************************ 00:38:25.692 00:22:21 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:38:25.692 00:22:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:38:25.692 00:22:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:38:25.692 00:22:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:38:25.692 
00:22:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:38:25.692 00:22:21 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:38:25.692 00:22:21 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:38:25.692 00:22:21 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:25.692 00:22:21 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:25.692 00:22:21 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:38:25.951 00:22:21 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:38:25.951 00:22:21 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:38:25.951 00:22:21 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:38:25.951 [2024-07-25 00:22:21.794373] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 120268 terminated unexpected 00:38:25.951 ===================================================== 00:38:25.951 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:38:25.951 ===================================================== 00:38:25.951 Controller Capabilities/Features 00:38:25.951 ================================ 00:38:25.951 Vendor ID: 1b36 00:38:25.951 Subsystem Vendor ID: 1af4 00:38:25.951 Serial Number: 12340 00:38:25.951 Model Number: QEMU NVMe Ctrl 00:38:25.951 Firmware Version: 8.0.0 00:38:25.951 Recommended Arb Burst: 6 00:38:25.951 IEEE OUI Identifier: 00 54 52 00:38:25.951 Multi-path I/O 00:38:25.951 May have multiple subsystem ports: No 00:38:25.951 May have multiple controllers: No 00:38:25.951 Associated with SR-IOV VF: No 00:38:25.951 Max Data Transfer Size: 524288 00:38:25.951 Max Number of Namespaces: 256 00:38:25.951 Max Number of I/O Queues: 64 00:38:25.951 NVMe Specification Version (VS): 1.4 00:38:25.951 NVMe Specification Version (Identify): 1.4 00:38:25.951 Maximum Queue Entries: 2048 00:38:25.951 Contiguous Queues Required: Yes 00:38:25.951 Arbitration Mechanisms Supported 00:38:25.951 Weighted Round Robin: Not Supported 00:38:25.951 Vendor Specific: Not Supported 00:38:25.951 Reset Timeout: 7500 ms 00:38:25.951 Doorbell Stride: 4 bytes 00:38:25.951 NVM Subsystem Reset: Not Supported 00:38:25.951 Command Sets Supported 00:38:25.951 NVM Command Set: Supported 00:38:25.951 Boot Partition: Not Supported 00:38:25.951 Memory Page Size Minimum: 4096 bytes 00:38:25.951 Memory Page Size Maximum: 65536 bytes 00:38:25.951 Persistent Memory Region: Not Supported 00:38:25.951 Optional Asynchronous Events Supported 00:38:25.951 Namespace Attribute Notices: Supported 00:38:25.951 Firmware Activation Notices: Not Supported 00:38:25.951 ANA Change Notices: Not Supported 00:38:25.951 PLE Aggregate Log Change Notices: Not Supported 00:38:25.951 LBA Status Info Alert Notices: Not Supported 00:38:25.951 EGE Aggregate Log Change Notices: Not Supported 00:38:25.951 Normal NVM Subsystem Shutdown event: Not Supported 00:38:25.951 Zone Descriptor Change Notices: Not Supported 00:38:25.951 Discovery Log Change Notices: Not Supported 00:38:25.951 Controller Attributes 00:38:25.951 128-bit Host Identifier: Not Supported 00:38:25.951 Non-Operational Permissive Mode: Not Supported 00:38:25.951 NVM Sets: Not Supported 00:38:25.951 Read Recovery Levels: Not Supported 00:38:25.951 Endurance Groups: Not Supported 00:38:25.951 
Predictable Latency Mode: Not Supported 00:38:25.951 Traffic Based Keep ALive: Not Supported 00:38:25.951 Namespace Granularity: Not Supported 00:38:25.951 SQ Associations: Not Supported 00:38:25.951 UUID List: Not Supported 00:38:25.951 Multi-Domain Subsystem: Not Supported 00:38:25.951 Fixed Capacity Management: Not Supported 00:38:25.951 Variable Capacity Management: Not Supported 00:38:25.951 Delete Endurance Group: Not Supported 00:38:25.951 Delete NVM Set: Not Supported 00:38:25.951 Extended LBA Formats Supported: Supported 00:38:25.951 Flexible Data Placement Supported: Not Supported 00:38:25.951 00:38:25.951 Controller Memory Buffer Support 00:38:25.951 ================================ 00:38:25.951 Supported: No 00:38:25.951 00:38:25.951 Persistent Memory Region Support 00:38:25.951 ================================ 00:38:25.951 Supported: No 00:38:25.951 00:38:25.951 Admin Command Set Attributes 00:38:25.951 ============================ 00:38:25.951 Security Send/Receive: Not Supported 00:38:25.951 Format NVM: Supported 00:38:25.951 Firmware Activate/Download: Not Supported 00:38:25.951 Namespace Management: Supported 00:38:25.951 Device Self-Test: Not Supported 00:38:25.951 Directives: Supported 00:38:25.951 NVMe-MI: Not Supported 00:38:25.951 Virtualization Management: Not Supported 00:38:25.951 Doorbell Buffer Config: Supported 00:38:25.951 Get LBA Status Capability: Not Supported 00:38:25.951 Command & Feature Lockdown Capability: Not Supported 00:38:25.951 Abort Command Limit: 4 00:38:25.951 Async Event Request Limit: 4 00:38:25.951 Number of Firmware Slots: N/A 00:38:25.951 Firmware Slot 1 Read-Only: N/A 00:38:25.951 Firmware Activation Without Reset: N/A 00:38:25.951 Multiple Update Detection Support: N/A 00:38:25.951 Firmware Update Granularity: No Information Provided 00:38:25.951 Per-Namespace SMART Log: Yes 00:38:25.951 Asymmetric Namespace Access Log Page: Not Supported 00:38:25.951 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:38:25.952 Command Effects Log Page: Supported 00:38:25.952 Get Log Page Extended Data: Supported 00:38:25.952 Telemetry Log Pages: Not Supported 00:38:25.952 Persistent Event Log Pages: Not Supported 00:38:25.952 Supported Log Pages Log Page: May Support 00:38:25.952 Commands Supported & Effects Log Page: Not Supported 00:38:25.952 Feature Identifiers & Effects Log Page:May Support 00:38:25.952 NVMe-MI Commands & Effects Log Page: May Support 00:38:25.952 Data Area 4 for Telemetry Log: Not Supported 00:38:25.952 Error Log Page Entries Supported: 1 00:38:25.952 Keep Alive: Not Supported 00:38:25.952 00:38:25.952 NVM Command Set Attributes 00:38:25.952 ========================== 00:38:25.952 Submission Queue Entry Size 00:38:25.952 Max: 64 00:38:25.952 Min: 64 00:38:25.952 Completion Queue Entry Size 00:38:25.952 Max: 16 00:38:25.952 Min: 16 00:38:25.952 Number of Namespaces: 256 00:38:25.952 Compare Command: Supported 00:38:25.952 Write Uncorrectable Command: Not Supported 00:38:25.952 Dataset Management Command: Supported 00:38:25.952 Write Zeroes Command: Supported 00:38:25.952 Set Features Save Field: Supported 00:38:25.952 Reservations: Not Supported 00:38:25.952 Timestamp: Supported 00:38:25.952 Copy: Supported 00:38:25.952 Volatile Write Cache: Present 00:38:25.952 Atomic Write Unit (Normal): 1 00:38:25.952 Atomic Write Unit (PFail): 1 00:38:25.952 Atomic Compare & Write Unit: 1 00:38:25.952 Fused Compare & Write: Not Supported 00:38:25.952 Scatter-Gather List 00:38:25.952 SGL Command Set: Supported 00:38:25.952 SGL Keyed: Not Supported 
00:38:25.952 SGL Bit Bucket Descriptor: Not Supported 00:38:25.952 SGL Metadata Pointer: Not Supported 00:38:25.952 Oversized SGL: Not Supported 00:38:25.952 SGL Metadata Address: Not Supported 00:38:25.952 SGL Offset: Not Supported 00:38:25.952 Transport SGL Data Block: Not Supported 00:38:25.952 Replay Protected Memory Block: Not Supported 00:38:25.952 00:38:25.952 Firmware Slot Information 00:38:25.952 ========================= 00:38:25.952 Active slot: 1 00:38:25.952 Slot 1 Firmware Revision: 1.0 00:38:25.952 00:38:25.952 00:38:25.952 Commands Supported and Effects 00:38:25.952 ============================== 00:38:25.952 Admin Commands 00:38:25.952 -------------- 00:38:25.952 Delete I/O Submission Queue (00h): Supported 00:38:25.952 Create I/O Submission Queue (01h): Supported 00:38:25.952 Get Log Page (02h): Supported 00:38:25.952 Delete I/O Completion Queue (04h): Supported 00:38:25.952 Create I/O Completion Queue (05h): Supported 00:38:25.952 Identify (06h): Supported 00:38:25.952 Abort (08h): Supported 00:38:25.952 Set Features (09h): Supported 00:38:25.952 Get Features (0Ah): Supported 00:38:25.952 Asynchronous Event Request (0Ch): Supported 00:38:25.952 Namespace Attachment (15h): Supported NS-Inventory-Change 00:38:25.952 Directive Send (19h): Supported 00:38:25.952 Directive Receive (1Ah): Supported 00:38:25.952 Virtualization Management (1Ch): Supported 00:38:25.952 Doorbell Buffer Config (7Ch): Supported 00:38:25.952 Format NVM (80h): Supported LBA-Change 00:38:25.952 I/O Commands 00:38:25.952 ------------ 00:38:25.952 Flush (00h): Supported LBA-Change 00:38:25.952 Write (01h): Supported LBA-Change 00:38:25.952 Read (02h): Supported 00:38:25.952 Compare (05h): Supported 00:38:25.952 Write Zeroes (08h): Supported LBA-Change 00:38:25.952 Dataset Management (09h): Supported LBA-Change 00:38:25.952 Unknown (0Ch): Supported 00:38:25.952 Unknown (12h): Supported 00:38:25.952 Copy (19h): Supported LBA-Change 00:38:25.952 Unknown (1Dh): Supported LBA-Change 00:38:25.952 00:38:25.952 Error Log 00:38:25.952 ========= 00:38:25.952 00:38:25.952 Arbitration 00:38:25.952 =========== 00:38:25.952 Arbitration Burst: no limit 00:38:25.952 00:38:25.952 Power Management 00:38:25.952 ================ 00:38:25.952 Number of Power States: 1 00:38:25.952 Current Power State: Power State #0 00:38:25.952 Power State #0: 00:38:25.952 Max Power: 25.00 W 00:38:25.952 Non-Operational State: Operational 00:38:25.952 Entry Latency: 16 microseconds 00:38:25.952 Exit Latency: 4 microseconds 00:38:25.952 Relative Read Throughput: 0 00:38:25.952 Relative Read Latency: 0 00:38:25.952 Relative Write Throughput: 0 00:38:25.952 Relative Write Latency: 0 00:38:26.210 Idle Power: Not Reported 00:38:26.210 Active Power: Not Reported 00:38:26.210 Non-Operational Permissive Mode: Not Supported 00:38:26.210 00:38:26.210 Health Information 00:38:26.210 ================== 00:38:26.210 Critical Warnings: 00:38:26.210 Available Spare Space: OK 00:38:26.210 Temperature: OK 00:38:26.210 Device Reliability: OK 00:38:26.210 Read Only: No 00:38:26.210 Volatile Memory Backup: OK 00:38:26.210 Current Temperature: 323 Kelvin (50 Celsius) 00:38:26.210 Temperature Threshold: 343 Kelvin (70 Celsius) 00:38:26.210 Available Spare: 0% 00:38:26.210 Available Spare Threshold: 0% 00:38:26.210 Life Percentage Used: 0% 00:38:26.210 Data Units Read: 4486 00:38:26.210 Data Units Written: 4132 00:38:26.210 Host Read Commands: 230965 00:38:26.210 Host Write Commands: 243680 00:38:26.210 Controller Busy Time: 0 minutes 00:38:26.210 Power Cycles: 0 
00:38:26.210 Power On Hours: 0 hours 00:38:26.210 Unsafe Shutdowns: 0 00:38:26.210 Unrecoverable Media Errors: 0 00:38:26.210 Lifetime Error Log Entries: 0 00:38:26.210 Warning Temperature Time: 0 minutes 00:38:26.210 Critical Temperature Time: 0 minutes 00:38:26.210 00:38:26.210 Number of Queues 00:38:26.210 ================ 00:38:26.210 Number of I/O Submission Queues: 64 00:38:26.210 Number of I/O Completion Queues: 64 00:38:26.210 00:38:26.210 ZNS Specific Controller Data 00:38:26.210 ============================ 00:38:26.210 Zone Append Size Limit: 0 00:38:26.210 00:38:26.210 00:38:26.210 Active Namespaces 00:38:26.210 ================= 00:38:26.210 Namespace ID:1 00:38:26.210 Error Recovery Timeout: Unlimited 00:38:26.210 Command Set Identifier: NVM (00h) 00:38:26.210 Deallocate: Supported 00:38:26.210 Deallocated/Unwritten Error: Supported 00:38:26.210 Deallocated Read Value: All 0x00 00:38:26.210 Deallocate in Write Zeroes: Not Supported 00:38:26.210 Deallocated Guard Field: 0xFFFF 00:38:26.210 Flush: Supported 00:38:26.210 Reservation: Not Supported 00:38:26.210 Namespace Sharing Capabilities: Private 00:38:26.210 Size (in LBAs): 1310720 (5GiB) 00:38:26.210 Capacity (in LBAs): 1310720 (5GiB) 00:38:26.210 Utilization (in LBAs): 1310720 (5GiB) 00:38:26.210 Thin Provisioning: Not Supported 00:38:26.210 Per-NS Atomic Units: No 00:38:26.210 Maximum Single Source Range Length: 128 00:38:26.210 Maximum Copy Length: 128 00:38:26.210 Maximum Source Range Count: 128 00:38:26.210 NGUID/EUI64 Never Reused: No 00:38:26.210 Namespace Write Protected: No 00:38:26.210 Number of LBA Formats: 8 00:38:26.210 Current LBA Format: LBA Format #04 00:38:26.210 LBA Format #00: Data Size: 512 Metadata Size: 0 00:38:26.210 LBA Format #01: Data Size: 512 Metadata Size: 8 00:38:26.210 LBA Format #02: Data Size: 512 Metadata Size: 16 00:38:26.210 LBA Format #03: Data Size: 512 Metadata Size: 64 00:38:26.210 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:38:26.210 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:38:26.210 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:38:26.210 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:38:26.210 00:38:26.210 NVM Specific Namespace Data 00:38:26.210 =========================== 00:38:26.210 Logical Block Storage Tag Mask: 0 00:38:26.210 Protection Information Capabilities: 00:38:26.210 16b Guard Protection Information Storage Tag Support: No 00:38:26.210 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:38:26.210 Storage Tag Check Read Support: No 00:38:26.210 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.210 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.210 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.210 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.210 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.210 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.210 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.210 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.210 00:22:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:38:26.210 00:22:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:38:26.469 ===================================================== 00:38:26.469 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:38:26.469 ===================================================== 00:38:26.469 Controller Capabilities/Features 00:38:26.469 ================================ 00:38:26.469 Vendor ID: 1b36 00:38:26.469 Subsystem Vendor ID: 1af4 00:38:26.469 Serial Number: 12340 00:38:26.469 Model Number: QEMU NVMe Ctrl 00:38:26.469 Firmware Version: 8.0.0 00:38:26.469 Recommended Arb Burst: 6 00:38:26.469 IEEE OUI Identifier: 00 54 52 00:38:26.469 Multi-path I/O 00:38:26.469 May have multiple subsystem ports: No 00:38:26.469 May have multiple controllers: No 00:38:26.469 Associated with SR-IOV VF: No 00:38:26.469 Max Data Transfer Size: 524288 00:38:26.469 Max Number of Namespaces: 256 00:38:26.469 Max Number of I/O Queues: 64 00:38:26.469 NVMe Specification Version (VS): 1.4 00:38:26.469 NVMe Specification Version (Identify): 1.4 00:38:26.469 Maximum Queue Entries: 2048 00:38:26.469 Contiguous Queues Required: Yes 00:38:26.469 Arbitration Mechanisms Supported 00:38:26.469 Weighted Round Robin: Not Supported 00:38:26.469 Vendor Specific: Not Supported 00:38:26.469 Reset Timeout: 7500 ms 00:38:26.470 Doorbell Stride: 4 bytes 00:38:26.470 NVM Subsystem Reset: Not Supported 00:38:26.470 Command Sets Supported 00:38:26.470 NVM Command Set: Supported 00:38:26.470 Boot Partition: Not Supported 00:38:26.470 Memory Page Size Minimum: 4096 bytes 00:38:26.470 Memory Page Size Maximum: 65536 bytes 00:38:26.470 Persistent Memory Region: Not Supported 00:38:26.470 Optional Asynchronous Events Supported 00:38:26.470 Namespace Attribute Notices: Supported 00:38:26.470 Firmware Activation Notices: Not Supported 00:38:26.470 ANA Change Notices: Not Supported 00:38:26.470 PLE Aggregate Log Change Notices: Not Supported 00:38:26.470 LBA Status Info Alert Notices: Not Supported 00:38:26.470 EGE Aggregate Log Change Notices: Not Supported 00:38:26.470 Normal NVM Subsystem Shutdown event: Not Supported 00:38:26.470 Zone Descriptor Change Notices: Not Supported 00:38:26.470 Discovery Log Change Notices: Not Supported 00:38:26.470 Controller Attributes 00:38:26.470 128-bit Host Identifier: Not Supported 00:38:26.470 Non-Operational Permissive Mode: Not Supported 00:38:26.470 NVM Sets: Not Supported 00:38:26.470 Read Recovery Levels: Not Supported 00:38:26.470 Endurance Groups: Not Supported 00:38:26.470 Predictable Latency Mode: Not Supported 00:38:26.470 Traffic Based Keep ALive: Not Supported 00:38:26.470 Namespace Granularity: Not Supported 00:38:26.470 SQ Associations: Not Supported 00:38:26.470 UUID List: Not Supported 00:38:26.470 Multi-Domain Subsystem: Not Supported 00:38:26.470 Fixed Capacity Management: Not Supported 00:38:26.470 Variable Capacity Management: Not Supported 00:38:26.470 Delete Endurance Group: Not Supported 00:38:26.470 Delete NVM Set: Not Supported 00:38:26.470 Extended LBA Formats Supported: Supported 00:38:26.470 Flexible Data Placement Supported: Not Supported 00:38:26.470 00:38:26.470 Controller Memory Buffer Support 00:38:26.470 ================================ 00:38:26.470 Supported: No 00:38:26.470 00:38:26.470 Persistent Memory Region Support 00:38:26.470 ================================ 00:38:26.470 Supported: No 00:38:26.470 00:38:26.470 Admin Command Set Attributes 00:38:26.470 ============================ 00:38:26.470 Security Send/Receive: Not Supported 00:38:26.470 
Format NVM: Supported 00:38:26.470 Firmware Activate/Download: Not Supported 00:38:26.470 Namespace Management: Supported 00:38:26.470 Device Self-Test: Not Supported 00:38:26.470 Directives: Supported 00:38:26.470 NVMe-MI: Not Supported 00:38:26.470 Virtualization Management: Not Supported 00:38:26.470 Doorbell Buffer Config: Supported 00:38:26.470 Get LBA Status Capability: Not Supported 00:38:26.470 Command & Feature Lockdown Capability: Not Supported 00:38:26.470 Abort Command Limit: 4 00:38:26.470 Async Event Request Limit: 4 00:38:26.470 Number of Firmware Slots: N/A 00:38:26.470 Firmware Slot 1 Read-Only: N/A 00:38:26.470 Firmware Activation Without Reset: N/A 00:38:26.470 Multiple Update Detection Support: N/A 00:38:26.470 Firmware Update Granularity: No Information Provided 00:38:26.470 Per-Namespace SMART Log: Yes 00:38:26.470 Asymmetric Namespace Access Log Page: Not Supported 00:38:26.470 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:38:26.470 Command Effects Log Page: Supported 00:38:26.470 Get Log Page Extended Data: Supported 00:38:26.470 Telemetry Log Pages: Not Supported 00:38:26.470 Persistent Event Log Pages: Not Supported 00:38:26.470 Supported Log Pages Log Page: May Support 00:38:26.470 Commands Supported & Effects Log Page: Not Supported 00:38:26.470 Feature Identifiers & Effects Log Page:May Support 00:38:26.470 NVMe-MI Commands & Effects Log Page: May Support 00:38:26.470 Data Area 4 for Telemetry Log: Not Supported 00:38:26.470 Error Log Page Entries Supported: 1 00:38:26.470 Keep Alive: Not Supported 00:38:26.470 00:38:26.470 NVM Command Set Attributes 00:38:26.470 ========================== 00:38:26.470 Submission Queue Entry Size 00:38:26.470 Max: 64 00:38:26.470 Min: 64 00:38:26.470 Completion Queue Entry Size 00:38:26.470 Max: 16 00:38:26.470 Min: 16 00:38:26.470 Number of Namespaces: 256 00:38:26.470 Compare Command: Supported 00:38:26.470 Write Uncorrectable Command: Not Supported 00:38:26.470 Dataset Management Command: Supported 00:38:26.470 Write Zeroes Command: Supported 00:38:26.470 Set Features Save Field: Supported 00:38:26.470 Reservations: Not Supported 00:38:26.470 Timestamp: Supported 00:38:26.470 Copy: Supported 00:38:26.470 Volatile Write Cache: Present 00:38:26.470 Atomic Write Unit (Normal): 1 00:38:26.470 Atomic Write Unit (PFail): 1 00:38:26.470 Atomic Compare & Write Unit: 1 00:38:26.470 Fused Compare & Write: Not Supported 00:38:26.470 Scatter-Gather List 00:38:26.470 SGL Command Set: Supported 00:38:26.470 SGL Keyed: Not Supported 00:38:26.470 SGL Bit Bucket Descriptor: Not Supported 00:38:26.470 SGL Metadata Pointer: Not Supported 00:38:26.470 Oversized SGL: Not Supported 00:38:26.470 SGL Metadata Address: Not Supported 00:38:26.470 SGL Offset: Not Supported 00:38:26.470 Transport SGL Data Block: Not Supported 00:38:26.470 Replay Protected Memory Block: Not Supported 00:38:26.470 00:38:26.470 Firmware Slot Information 00:38:26.470 ========================= 00:38:26.470 Active slot: 1 00:38:26.470 Slot 1 Firmware Revision: 1.0 00:38:26.470 00:38:26.470 00:38:26.470 Commands Supported and Effects 00:38:26.470 ============================== 00:38:26.470 Admin Commands 00:38:26.470 -------------- 00:38:26.470 Delete I/O Submission Queue (00h): Supported 00:38:26.470 Create I/O Submission Queue (01h): Supported 00:38:26.470 Get Log Page (02h): Supported 00:38:26.470 Delete I/O Completion Queue (04h): Supported 00:38:26.470 Create I/O Completion Queue (05h): Supported 00:38:26.470 Identify (06h): Supported 00:38:26.470 Abort (08h): Supported 
00:38:26.470 Set Features (09h): Supported 00:38:26.470 Get Features (0Ah): Supported 00:38:26.470 Asynchronous Event Request (0Ch): Supported 00:38:26.470 Namespace Attachment (15h): Supported NS-Inventory-Change 00:38:26.470 Directive Send (19h): Supported 00:38:26.470 Directive Receive (1Ah): Supported 00:38:26.470 Virtualization Management (1Ch): Supported 00:38:26.470 Doorbell Buffer Config (7Ch): Supported 00:38:26.470 Format NVM (80h): Supported LBA-Change 00:38:26.470 I/O Commands 00:38:26.470 ------------ 00:38:26.470 Flush (00h): Supported LBA-Change 00:38:26.470 Write (01h): Supported LBA-Change 00:38:26.470 Read (02h): Supported 00:38:26.470 Compare (05h): Supported 00:38:26.470 Write Zeroes (08h): Supported LBA-Change 00:38:26.470 Dataset Management (09h): Supported LBA-Change 00:38:26.470 Unknown (0Ch): Supported 00:38:26.470 Unknown (12h): Supported 00:38:26.470 Copy (19h): Supported LBA-Change 00:38:26.470 Unknown (1Dh): Supported LBA-Change 00:38:26.470 00:38:26.470 Error Log 00:38:26.470 ========= 00:38:26.470 00:38:26.470 Arbitration 00:38:26.470 =========== 00:38:26.470 Arbitration Burst: no limit 00:38:26.470 00:38:26.470 Power Management 00:38:26.470 ================ 00:38:26.470 Number of Power States: 1 00:38:26.470 Current Power State: Power State #0 00:38:26.470 Power State #0: 00:38:26.470 Max Power: 25.00 W 00:38:26.470 Non-Operational State: Operational 00:38:26.470 Entry Latency: 16 microseconds 00:38:26.470 Exit Latency: 4 microseconds 00:38:26.470 Relative Read Throughput: 0 00:38:26.470 Relative Read Latency: 0 00:38:26.470 Relative Write Throughput: 0 00:38:26.470 Relative Write Latency: 0 00:38:26.470 Idle Power: Not Reported 00:38:26.470 Active Power: Not Reported 00:38:26.470 Non-Operational Permissive Mode: Not Supported 00:38:26.470 00:38:26.470 Health Information 00:38:26.470 ================== 00:38:26.470 Critical Warnings: 00:38:26.470 Available Spare Space: OK 00:38:26.470 Temperature: OK 00:38:26.470 Device Reliability: OK 00:38:26.470 Read Only: No 00:38:26.470 Volatile Memory Backup: OK 00:38:26.470 Current Temperature: 323 Kelvin (50 Celsius) 00:38:26.470 Temperature Threshold: 343 Kelvin (70 Celsius) 00:38:26.470 Available Spare: 0% 00:38:26.470 Available Spare Threshold: 0% 00:38:26.470 Life Percentage Used: 0% 00:38:26.470 Data Units Read: 4486 00:38:26.470 Data Units Written: 4132 00:38:26.470 Host Read Commands: 230965 00:38:26.470 Host Write Commands: 243680 00:38:26.470 Controller Busy Time: 0 minutes 00:38:26.470 Power Cycles: 0 00:38:26.470 Power On Hours: 0 hours 00:38:26.470 Unsafe Shutdowns: 0 00:38:26.470 Unrecoverable Media Errors: 0 00:38:26.470 Lifetime Error Log Entries: 0 00:38:26.470 Warning Temperature Time: 0 minutes 00:38:26.470 Critical Temperature Time: 0 minutes 00:38:26.470 00:38:26.470 Number of Queues 00:38:26.470 ================ 00:38:26.470 Number of I/O Submission Queues: 64 00:38:26.470 Number of I/O Completion Queues: 64 00:38:26.470 00:38:26.470 ZNS Specific Controller Data 00:38:26.470 ============================ 00:38:26.470 Zone Append Size Limit: 0 00:38:26.470 00:38:26.470 00:38:26.470 Active Namespaces 00:38:26.470 ================= 00:38:26.470 Namespace ID:1 00:38:26.470 Error Recovery Timeout: Unlimited 00:38:26.470 Command Set Identifier: NVM (00h) 00:38:26.470 Deallocate: Supported 00:38:26.470 Deallocated/Unwritten Error: Supported 00:38:26.470 Deallocated Read Value: All 0x00 00:38:26.470 Deallocate in Write Zeroes: Not Supported 00:38:26.470 Deallocated Guard Field: 0xFFFF 00:38:26.470 Flush: 
Supported 00:38:26.470 Reservation: Not Supported 00:38:26.470 Namespace Sharing Capabilities: Private 00:38:26.470 Size (in LBAs): 1310720 (5GiB) 00:38:26.470 Capacity (in LBAs): 1310720 (5GiB) 00:38:26.470 Utilization (in LBAs): 1310720 (5GiB) 00:38:26.470 Thin Provisioning: Not Supported 00:38:26.470 Per-NS Atomic Units: No 00:38:26.470 Maximum Single Source Range Length: 128 00:38:26.470 Maximum Copy Length: 128 00:38:26.470 Maximum Source Range Count: 128 00:38:26.470 NGUID/EUI64 Never Reused: No 00:38:26.470 Namespace Write Protected: No 00:38:26.470 Number of LBA Formats: 8 00:38:26.470 Current LBA Format: LBA Format #04 00:38:26.470 LBA Format #00: Data Size: 512 Metadata Size: 0 00:38:26.470 LBA Format #01: Data Size: 512 Metadata Size: 8 00:38:26.470 LBA Format #02: Data Size: 512 Metadata Size: 16 00:38:26.470 LBA Format #03: Data Size: 512 Metadata Size: 64 00:38:26.470 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:38:26.470 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:38:26.470 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:38:26.470 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:38:26.470 00:38:26.470 NVM Specific Namespace Data 00:38:26.470 =========================== 00:38:26.470 Logical Block Storage Tag Mask: 0 00:38:26.470 Protection Information Capabilities: 00:38:26.470 16b Guard Protection Information Storage Tag Support: No 00:38:26.470 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:38:26.470 Storage Tag Check Read Support: No 00:38:26.470 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.470 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.470 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.470 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.470 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.470 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.470 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.470 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:38:26.470 00:38:26.470 real 0m0.687s 00:38:26.470 user 0m0.252s 00:38:26.470 sys 0m0.354s 00:38:26.470 00:22:22 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:26.470 00:22:22 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:38:26.470 ************************************ 00:38:26.470 END TEST nvme_identify 00:38:26.470 ************************************ 00:38:26.470 00:22:22 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:38:26.470 00:22:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:26.470 00:22:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:26.470 00:22:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:38:26.470 ************************************ 00:38:26.470 START TEST nvme_perf 00:38:26.470 ************************************ 00:38:26.470 00:22:22 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:38:26.470 00:22:22 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:38:27.845 Initializing NVMe Controllers 00:38:27.845 Attached to NVMe Controller at 
0000:00:10.0 [1b36:0010] 00:38:27.845 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:38:27.845 Initialization complete. Launching workers. 00:38:27.845 ======================================================== 00:38:27.845 Latency(us) 00:38:27.845 Device Information : IOPS MiB/s Average min max 00:38:27.845 PCIE (0000:00:10.0) NSID 1 from core 0: 86648.20 1015.41 1476.01 652.31 6451.29 00:38:27.845 ======================================================== 00:38:27.845 Total : 86648.20 1015.41 1476.01 652.31 6451.29 00:38:27.845 00:38:27.845 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:38:27.845 ================================================================================= 00:38:27.845 1.00000% : 789.411us 00:38:27.845 10.00000% : 968.145us 00:38:27.845 25.00000% : 1161.775us 00:38:27.845 50.00000% : 1444.771us 00:38:27.845 75.00000% : 1720.320us 00:38:27.845 90.00000% : 1995.869us 00:38:27.845 95.00000% : 2234.182us 00:38:27.845 98.00000% : 2606.545us 00:38:27.845 99.00000% : 2785.280us 00:38:27.845 99.50000% : 3023.593us 00:38:27.845 99.90000% : 4051.316us 00:38:27.845 99.99000% : 6136.553us 00:38:27.845 99.99900% : 6464.233us 00:38:27.845 99.99990% : 6464.233us 00:38:27.845 99.99999% : 6464.233us 00:38:27.845 00:38:27.845 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:38:27.845 ============================================================================== 00:38:27.845 Range in us Cumulative IO count 00:38:27.845 651.636 - 655.360: 0.0012% ( 1) 00:38:27.845 655.360 - 659.084: 0.0023% ( 1) 00:38:27.845 666.531 - 670.255: 0.0069% ( 4) 00:38:27.845 670.255 - 673.978: 0.0104% ( 3) 00:38:27.845 673.978 - 677.702: 0.0150% ( 4) 00:38:27.845 677.702 - 681.425: 0.0173% ( 2) 00:38:27.845 681.425 - 685.149: 0.0208% ( 3) 00:38:27.845 685.149 - 688.873: 0.0219% ( 1) 00:38:27.845 688.873 - 692.596: 0.0242% ( 2) 00:38:27.845 692.596 - 696.320: 0.0265% ( 2) 00:38:27.845 696.320 - 700.044: 0.0335% ( 6) 00:38:27.845 700.044 - 703.767: 0.0369% ( 3) 00:38:27.845 703.767 - 707.491: 0.0439% ( 6) 00:38:27.845 707.491 - 711.215: 0.0519% ( 7) 00:38:27.845 711.215 - 714.938: 0.0658% ( 12) 00:38:27.845 714.938 - 718.662: 0.0715% ( 5) 00:38:27.845 718.662 - 722.385: 0.0831% ( 10) 00:38:27.845 722.385 - 726.109: 0.1004% ( 15) 00:38:27.845 726.109 - 729.833: 0.1223% ( 19) 00:38:27.845 729.833 - 733.556: 0.1512% ( 25) 00:38:27.845 733.556 - 737.280: 0.1696% ( 16) 00:38:27.845 737.280 - 741.004: 0.1996% ( 26) 00:38:27.845 741.004 - 744.727: 0.2308% ( 27) 00:38:27.845 744.727 - 748.451: 0.2700% ( 34) 00:38:27.845 748.451 - 752.175: 0.3139% ( 38) 00:38:27.845 752.175 - 755.898: 0.3681% ( 47) 00:38:27.845 755.898 - 759.622: 0.4027% ( 30) 00:38:27.845 759.622 - 763.345: 0.4489% ( 40) 00:38:27.845 763.345 - 767.069: 0.5251% ( 66) 00:38:27.845 767.069 - 770.793: 0.5908% ( 57) 00:38:27.845 770.793 - 774.516: 0.6589% ( 59) 00:38:27.845 774.516 - 778.240: 0.7328% ( 64) 00:38:27.845 778.240 - 781.964: 0.8263% ( 81) 00:38:27.845 781.964 - 785.687: 0.9197% ( 81) 00:38:27.845 785.687 - 789.411: 1.0120% ( 80) 00:38:27.845 789.411 - 793.135: 1.1090% ( 84) 00:38:27.845 793.135 - 796.858: 1.1990% ( 78) 00:38:27.845 796.858 - 800.582: 1.3225% ( 107) 00:38:27.845 800.582 - 804.305: 1.4367% ( 99) 00:38:27.845 804.305 - 808.029: 1.5717% ( 117) 00:38:27.845 808.029 - 811.753: 1.6998% ( 111) 00:38:27.845 811.753 - 815.476: 1.8245% ( 108) 00:38:27.846 815.476 - 819.200: 1.9364% ( 97) 00:38:27.846 819.200 - 822.924: 2.0772% ( 122) 00:38:27.846 822.924 - 826.647: 2.2399% ( 141) 00:38:27.846 
826.647 - 830.371: 2.4072% ( 145) 00:38:27.846 830.371 - 834.095: 2.5330% ( 109) 00:38:27.846 834.095 - 837.818: 2.6773% ( 125) 00:38:27.846 837.818 - 841.542: 2.8492% ( 149) 00:38:27.846 841.542 - 845.265: 3.0188% ( 147) 00:38:27.846 845.265 - 848.989: 3.1965% ( 154) 00:38:27.846 848.989 - 852.713: 3.3546% ( 137) 00:38:27.846 852.713 - 856.436: 3.5312% ( 153) 00:38:27.846 856.436 - 860.160: 3.7181% ( 162) 00:38:27.846 860.160 - 863.884: 3.9247% ( 179) 00:38:27.846 863.884 - 867.607: 4.1001% ( 152) 00:38:27.846 867.607 - 871.331: 4.3125% ( 184) 00:38:27.846 871.331 - 875.055: 4.5017% ( 164) 00:38:27.846 875.055 - 878.778: 4.7071% ( 178) 00:38:27.846 878.778 - 882.502: 4.9010% ( 168) 00:38:27.846 882.502 - 886.225: 5.1260% ( 195) 00:38:27.846 886.225 - 889.949: 5.3314% ( 178) 00:38:27.846 889.949 - 893.673: 5.5357% ( 177) 00:38:27.846 893.673 - 897.396: 5.7307% ( 169) 00:38:27.846 897.396 - 901.120: 5.9615% ( 200) 00:38:27.846 901.120 - 904.844: 6.1946% ( 202) 00:38:27.846 904.844 - 908.567: 6.4069% ( 184) 00:38:27.846 908.567 - 912.291: 6.6377% ( 200) 00:38:27.846 912.291 - 916.015: 6.8489% ( 183) 00:38:27.846 916.015 - 919.738: 7.0855% ( 205) 00:38:27.846 919.738 - 923.462: 7.3071% ( 192) 00:38:27.846 923.462 - 927.185: 7.5586% ( 218) 00:38:27.846 927.185 - 930.909: 7.7594% ( 174) 00:38:27.846 930.909 - 934.633: 8.0168% ( 223) 00:38:27.846 934.633 - 938.356: 8.2545% ( 206) 00:38:27.846 938.356 - 942.080: 8.5003% ( 213) 00:38:27.846 942.080 - 945.804: 8.7553% ( 221) 00:38:27.846 945.804 - 949.527: 8.9976% ( 210) 00:38:27.846 949.527 - 953.251: 9.2377% ( 208) 00:38:27.846 953.251 - 960.698: 9.7547% ( 448) 00:38:27.846 960.698 - 968.145: 10.2578% ( 436) 00:38:27.846 968.145 - 975.593: 10.7702% ( 444) 00:38:27.846 975.593 - 983.040: 11.3125% ( 470) 00:38:27.846 983.040 - 990.487: 11.8180% ( 438) 00:38:27.846 990.487 - 997.935: 12.3604% ( 470) 00:38:27.846 997.935 - 1005.382: 12.8993% ( 467) 00:38:27.846 1005.382 - 1012.829: 13.4382% ( 467) 00:38:27.846 1012.829 - 1020.276: 13.9910% ( 479) 00:38:27.846 1020.276 - 1027.724: 14.5414% ( 477) 00:38:27.846 1027.724 - 1035.171: 15.1265% ( 507) 00:38:27.846 1035.171 - 1042.618: 15.7185% ( 513) 00:38:27.846 1042.618 - 1050.065: 16.2943% ( 499) 00:38:27.846 1050.065 - 1057.513: 16.8563% ( 487) 00:38:27.846 1057.513 - 1064.960: 17.4702% ( 532) 00:38:27.846 1064.960 - 1072.407: 18.0530% ( 505) 00:38:27.846 1072.407 - 1079.855: 18.6531% ( 520) 00:38:27.846 1079.855 - 1087.302: 19.2439% ( 512) 00:38:27.846 1087.302 - 1094.749: 19.8440% ( 520) 00:38:27.846 1094.749 - 1102.196: 20.4475% ( 523) 00:38:27.846 1102.196 - 1109.644: 21.0580% ( 529) 00:38:27.846 1109.644 - 1117.091: 21.6465% ( 510) 00:38:27.846 1117.091 - 1124.538: 22.2731% ( 543) 00:38:27.846 1124.538 - 1131.985: 22.8847% ( 530) 00:38:27.846 1131.985 - 1139.433: 23.5125% ( 544) 00:38:27.846 1139.433 - 1146.880: 24.1184% ( 525) 00:38:27.846 1146.880 - 1154.327: 24.7150% ( 517) 00:38:27.846 1154.327 - 1161.775: 25.3577% ( 557) 00:38:27.846 1161.775 - 1169.222: 25.9647% ( 526) 00:38:27.846 1169.222 - 1176.669: 26.6214% ( 569) 00:38:27.846 1176.669 - 1184.116: 27.2053% ( 506) 00:38:27.846 1184.116 - 1191.564: 27.8504% ( 559) 00:38:27.846 1191.564 - 1199.011: 28.4597% ( 528) 00:38:27.846 1199.011 - 1206.458: 29.0897% ( 546) 00:38:27.846 1206.458 - 1213.905: 29.6887% ( 519) 00:38:27.846 1213.905 - 1221.353: 30.3545% ( 577) 00:38:27.846 1221.353 - 1228.800: 30.9511% ( 517) 00:38:27.846 1228.800 - 1236.247: 31.6193% ( 579) 00:38:27.846 1236.247 - 1243.695: 32.2297% ( 529) 00:38:27.846 1243.695 - 1251.142: 
32.8794% ( 563) 00:38:27.846 1251.142 - 1258.589: 33.4991% ( 537) 00:38:27.846 1258.589 - 1266.036: 34.1315% ( 548) 00:38:27.846 1266.036 - 1273.484: 34.7777% ( 560) 00:38:27.846 1273.484 - 1280.931: 35.4321% ( 567) 00:38:27.846 1280.931 - 1288.378: 36.1118% ( 589) 00:38:27.846 1288.378 - 1295.825: 36.7522% ( 555) 00:38:27.846 1295.825 - 1303.273: 37.4273% ( 585) 00:38:27.846 1303.273 - 1310.720: 38.0770% ( 563) 00:38:27.846 1310.720 - 1318.167: 38.7602% ( 592) 00:38:27.846 1318.167 - 1325.615: 39.4225% ( 574) 00:38:27.846 1325.615 - 1333.062: 40.1311% ( 614) 00:38:27.846 1333.062 - 1340.509: 40.7866% ( 568) 00:38:27.846 1340.509 - 1347.956: 41.5020% ( 620) 00:38:27.846 1347.956 - 1355.404: 42.1517% ( 563) 00:38:27.846 1355.404 - 1362.851: 42.8614% ( 615) 00:38:27.846 1362.851 - 1370.298: 43.5307% ( 580) 00:38:27.846 1370.298 - 1377.745: 44.2127% ( 591) 00:38:27.846 1377.745 - 1385.193: 44.9190% ( 612) 00:38:27.846 1385.193 - 1392.640: 45.5791% ( 572) 00:38:27.846 1392.640 - 1400.087: 46.3211% ( 643) 00:38:27.846 1400.087 - 1407.535: 46.9754% ( 567) 00:38:27.846 1407.535 - 1414.982: 47.7059% ( 633) 00:38:27.846 1414.982 - 1422.429: 48.3729% ( 578) 00:38:27.846 1422.429 - 1429.876: 49.1022% ( 632) 00:38:27.846 1429.876 - 1437.324: 49.7831% ( 590) 00:38:27.846 1437.324 - 1444.771: 50.4985% ( 620) 00:38:27.846 1444.771 - 1452.218: 51.1863% ( 596) 00:38:27.846 1452.218 - 1459.665: 51.9191% ( 635) 00:38:27.846 1459.665 - 1467.113: 52.5884% ( 580) 00:38:27.846 1467.113 - 1474.560: 53.3166% ( 631) 00:38:27.846 1474.560 - 1482.007: 53.9916% ( 585) 00:38:27.846 1482.007 - 1489.455: 54.6944% ( 609) 00:38:27.846 1489.455 - 1496.902: 55.4134% ( 623) 00:38:27.846 1496.902 - 1504.349: 56.1069% ( 601) 00:38:27.846 1504.349 - 1511.796: 56.8120% ( 611) 00:38:27.846 1511.796 - 1519.244: 57.4940% ( 591) 00:38:27.846 1519.244 - 1526.691: 58.1991% ( 611) 00:38:27.846 1526.691 - 1534.138: 58.8707% ( 582) 00:38:27.846 1534.138 - 1541.585: 59.5885% ( 622) 00:38:27.846 1541.585 - 1549.033: 60.2566% ( 579) 00:38:27.846 1549.033 - 1556.480: 60.9537% ( 604) 00:38:27.846 1556.480 - 1563.927: 61.6403% ( 595) 00:38:27.846 1563.927 - 1571.375: 62.3511% ( 616) 00:38:27.846 1571.375 - 1578.822: 63.0320% ( 590) 00:38:27.846 1578.822 - 1586.269: 63.7151% ( 592) 00:38:27.846 1586.269 - 1593.716: 64.4029% ( 596) 00:38:27.846 1593.716 - 1601.164: 65.0780% ( 585) 00:38:27.846 1601.164 - 1608.611: 65.7669% ( 597) 00:38:27.846 1608.611 - 1616.058: 66.4109% ( 558) 00:38:27.846 1616.058 - 1623.505: 67.0975% ( 595) 00:38:27.846 1623.505 - 1630.953: 67.7426% ( 559) 00:38:27.846 1630.953 - 1638.400: 68.4153% ( 583) 00:38:27.846 1638.400 - 1645.847: 69.0743% ( 571) 00:38:27.846 1645.847 - 1653.295: 69.7309% ( 569) 00:38:27.846 1653.295 - 1660.742: 70.3794% ( 562) 00:38:27.846 1660.742 - 1668.189: 71.0580% ( 588) 00:38:27.846 1668.189 - 1675.636: 71.6996% ( 556) 00:38:27.846 1675.636 - 1683.084: 72.3447% ( 559) 00:38:27.846 1683.084 - 1690.531: 72.9828% ( 553) 00:38:27.846 1690.531 - 1697.978: 73.6198% ( 552) 00:38:27.846 1697.978 - 1705.425: 74.2707% ( 564) 00:38:27.846 1705.425 - 1712.873: 74.8788% ( 527) 00:38:27.846 1712.873 - 1720.320: 75.5054% ( 543) 00:38:27.846 1720.320 - 1727.767: 76.1159% ( 529) 00:38:27.846 1727.767 - 1735.215: 76.7264% ( 529) 00:38:27.846 1735.215 - 1742.662: 77.3184% ( 513) 00:38:27.846 1742.662 - 1750.109: 77.9519% ( 549) 00:38:27.846 1750.109 - 1757.556: 78.5358% ( 506) 00:38:27.846 1757.556 - 1765.004: 79.1428% ( 526) 00:38:27.846 1765.004 - 1772.451: 79.7152% ( 496) 00:38:27.846 1772.451 - 1779.898: 
80.2887% ( 497) 00:38:27.846 1779.898 - 1787.345: 80.8403% ( 478) 00:38:27.846 1787.345 - 1794.793: 81.3850% ( 472) 00:38:27.846 1794.793 - 1802.240: 81.9274% ( 470) 00:38:27.846 1802.240 - 1809.687: 82.4444% ( 448) 00:38:27.846 1809.687 - 1817.135: 82.9521% ( 440) 00:38:27.846 1817.135 - 1824.582: 83.4299% ( 414) 00:38:27.846 1824.582 - 1832.029: 83.8742% ( 385) 00:38:27.846 1832.029 - 1839.476: 84.3058% ( 374) 00:38:27.846 1839.476 - 1846.924: 84.7131% ( 353) 00:38:27.846 1846.924 - 1854.371: 85.1436% ( 373) 00:38:27.847 1854.371 - 1861.818: 85.4932% ( 303) 00:38:27.847 1861.818 - 1869.265: 85.8706% ( 327) 00:38:27.847 1869.265 - 1876.713: 86.2260% ( 308) 00:38:27.847 1876.713 - 1884.160: 86.5445% ( 276) 00:38:27.847 1884.160 - 1891.607: 86.8688% ( 281) 00:38:27.847 1891.607 - 1899.055: 87.1919% ( 280) 00:38:27.847 1899.055 - 1906.502: 87.4700% ( 241) 00:38:27.847 1906.502 - 1921.396: 88.0204% ( 477) 00:38:27.847 1921.396 - 1936.291: 88.5397% ( 450) 00:38:27.847 1936.291 - 1951.185: 89.0071% ( 405) 00:38:27.847 1951.185 - 1966.080: 89.4629% ( 395) 00:38:27.847 1966.080 - 1980.975: 89.8945% ( 374) 00:38:27.847 1980.975 - 1995.869: 90.3100% ( 360) 00:38:27.847 1995.869 - 2010.764: 90.7046% ( 342) 00:38:27.847 2010.764 - 2025.658: 91.0704% ( 317) 00:38:27.847 2025.658 - 2040.553: 91.4259% ( 308) 00:38:27.847 2040.553 - 2055.447: 91.8044% ( 328) 00:38:27.847 2055.447 - 2070.342: 92.1483% ( 298) 00:38:27.847 2070.342 - 2085.236: 92.4783% ( 286) 00:38:27.847 2085.236 - 2100.131: 92.8130% ( 290) 00:38:27.847 2100.131 - 2115.025: 93.1176% ( 264) 00:38:27.847 2115.025 - 2129.920: 93.4084% ( 252) 00:38:27.847 2129.920 - 2144.815: 93.6946% ( 248) 00:38:27.847 2144.815 - 2159.709: 93.9554% ( 226) 00:38:27.847 2159.709 - 2174.604: 94.1954% ( 208) 00:38:27.847 2174.604 - 2189.498: 94.4320% ( 205) 00:38:27.847 2189.498 - 2204.393: 94.6374% ( 178) 00:38:27.847 2204.393 - 2219.287: 94.8474% ( 182) 00:38:27.847 2219.287 - 2234.182: 95.0367% ( 164) 00:38:27.847 2234.182 - 2249.076: 95.2098% ( 150) 00:38:27.847 2249.076 - 2263.971: 95.3610% ( 131) 00:38:27.847 2263.971 - 2278.865: 95.5271% ( 144) 00:38:27.847 2278.865 - 2293.760: 95.6749% ( 128) 00:38:27.847 2293.760 - 2308.655: 95.8329% ( 137) 00:38:27.847 2308.655 - 2323.549: 95.9622% ( 112) 00:38:27.847 2323.549 - 2338.444: 96.0961% ( 116) 00:38:27.847 2338.444 - 2353.338: 96.2115% ( 100) 00:38:27.847 2353.338 - 2368.233: 96.3361% ( 108) 00:38:27.847 2368.233 - 2383.127: 96.4573% ( 105) 00:38:27.847 2383.127 - 2398.022: 96.5773% ( 104) 00:38:27.847 2398.022 - 2412.916: 96.6915% ( 99) 00:38:27.847 2412.916 - 2427.811: 96.8011% ( 95) 00:38:27.847 2427.811 - 2442.705: 96.9119% ( 96) 00:38:27.847 2442.705 - 2457.600: 97.0181% ( 92) 00:38:27.847 2457.600 - 2472.495: 97.1254% ( 93) 00:38:27.847 2472.495 - 2487.389: 97.2247% ( 86) 00:38:27.847 2487.389 - 2502.284: 97.3297% ( 91) 00:38:27.847 2502.284 - 2517.178: 97.4324% ( 89) 00:38:27.847 2517.178 - 2532.073: 97.5409% ( 94) 00:38:27.847 2532.073 - 2546.967: 97.6470% ( 92) 00:38:27.847 2546.967 - 2561.862: 97.7578% ( 96) 00:38:27.847 2561.862 - 2576.756: 97.8501% ( 80) 00:38:27.847 2576.756 - 2591.651: 97.9621% ( 97) 00:38:27.847 2591.651 - 2606.545: 98.0659% ( 90) 00:38:27.847 2606.545 - 2621.440: 98.1652% ( 86) 00:38:27.847 2621.440 - 2636.335: 98.2702% ( 91) 00:38:27.847 2636.335 - 2651.229: 98.3602% ( 78) 00:38:27.847 2651.229 - 2666.124: 98.4525% ( 80) 00:38:27.847 2666.124 - 2681.018: 98.5471% ( 82) 00:38:27.847 2681.018 - 2695.913: 98.6314% ( 73) 00:38:27.847 2695.913 - 2710.807: 98.7110% ( 69) 
00:38:27.847 [latency histogram buckets from 2710.807us to 6464.233us elided; cumulative IO count reaches 100.0000%]
00:38:27.848 
00:38:27.848 00:22:23 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:38:29.218 Initializing NVMe Controllers
00:38:29.218 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:38:29.218 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:38:29.218 Initialization complete. Launching workers.
00:38:29.218 ========================================================
00:38:29.218                                                   Latency(us)
00:38:29.218 Device Information                     :       IOPS      MiB/s    Average        min        max
00:38:29.218 PCIE (0000:00:10.0) NSID 1 from core  0:   94108.23    1102.83    1359.04     610.00    6367.33
00:38:29.218 ========================================================
00:38:29.218 Total                                  :   94108.23    1102.83    1359.04     610.00    6367.33
00:38:29.218 
00:38:29.218 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:38:29.218 =================================================================================
00:38:29.218   1.00000% :  897.396us
00:38:29.218  10.00000% : 1020.276us
00:38:29.218  25.00000% : 1124.538us
00:38:29.218  50.00000% : 1295.825us
00:38:29.218  75.00000% : 1549.033us
00:38:29.218  90.00000% : 1750.109us
00:38:29.218  95.00000% : 1854.371us
00:38:29.218  98.00000% : 2144.815us
00:38:29.218  99.00000% : 2457.600us
00:38:29.218  99.50000% : 2844.858us
00:38:29.218  99.90000% : 4110.895us
00:38:29.218  99.99000% : 6017.396us
00:38:29.218  99.99900% : 6374.865us
00:38:29.218  99.99990% : 6374.865us
00:38:29.218  99.99999% : 6374.865us
00:38:29.218 
00:38:29.218 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:38:29.218 ==============================================================================
00:38:29.218          Range in us     Cumulative IO count
00:38:29.219 [latency histogram buckets from 606.953us to 6374.865us elided; cumulative IO count reaches 100.0000%]
00:38:29.221 
00:38:29.221 00:22:24 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:38:29.221 
00:38:29.221 real 0m2.641s
00:38:29.221 user 0m2.256s
00:38:29.221 sys 0m0.289s
00:38:29.221 00:22:24 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:29.221 00:22:24 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:38:29.221 ************************************
00:38:29.221 END TEST nvme_perf
00:38:29.221 ************************************
00:38:29.221 00:22:24 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:38:29.221 00:22:24 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:38:29.221 00:22:24 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:29.221 00:22:24 nvme -- common/autotest_common.sh@10 -- # set +x
00:38:29.221 ************************************
00:38:29.221 START TEST nvme_hello_world
00:38:29.221 ************************************
00:38:29.221 00:22:24 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:38:29.480 Initializing NVMe Controllers
00:38:29.480 Attached to 0000:00:10.0
00:38:29.480 Namespace ID: 1 size: 5GB
00:38:29.480 Initialization complete.
00:38:29.480 INFO: using host memory buffer for IO
00:38:29.480 Hello world!
00:38:29.480 
00:38:29.480 real 0m0.318s
00:38:29.480 user 0m0.129s
00:38:29.480 sys 0m0.143s
00:38:29.480 00:22:25 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:29.480 00:22:25 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:38:29.480 ************************************
00:38:29.480 END TEST nvme_hello_world
00:38:29.480 ************************************
00:38:29.480 00:22:25 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:38:29.480 00:22:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:38:29.480 00:22:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:29.480 00:22:25 nvme -- common/autotest_common.sh@10 -- # set +x
00:38:29.480 ************************************
00:38:29.480 START TEST nvme_sgl
00:38:29.480 ************************************
00:38:29.480 00:22:25 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:38:29.739 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:38:29.739 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:38:29.739 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:38:29.739 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:38:29.739 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:38:29.739 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:38:29.739 NVMe Readv/Writev Request test
00:38:29.739 Attached to 0000:00:10.0
00:38:29.739 0000:00:10.0: build_io_request_2 test passed
00:38:29.739 0000:00:10.0: build_io_request_4 test passed
00:38:29.739 0000:00:10.0: build_io_request_5 test passed
00:38:29.739 0000:00:10.0: build_io_request_6 test passed
00:38:29.739 0000:00:10.0: build_io_request_7 test passed
00:38:29.739 0000:00:10.0: build_io_request_10 test passed
00:38:29.739 Cleaning up...
00:38:29.998 
00:38:29.998 real 0m0.350s
00:38:29.998 user 0m0.161s
00:38:29.998 sys 0m0.151s
00:22:25 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:29.998 00:22:25 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:38:29.998 ************************************
00:38:29.998 END TEST nvme_sgl
00:38:29.998 ************************************
00:38:29.998 00:22:25 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:38:29.998 00:22:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:38:29.998 00:22:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:29.998 00:22:25 nvme -- common/autotest_common.sh@10 -- # set +x
00:38:29.998 ************************************
00:38:29.998 START TEST nvme_e2edp
00:38:29.998 ************************************
00:38:29.998 00:22:25 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:38:30.257 NVMe Write/Read with End-to-End data protection test
00:38:30.257 Attached to 0000:00:10.0
00:38:30.257 Cleaning up...
00:38:30.257 
00:38:30.257 real 0m0.314s
00:38:30.257 user 0m0.125s
00:38:30.257 sys 0m0.144s
00:22:25 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:30.257 00:22:25 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:38:30.257 ************************************
00:38:30.257 END TEST nvme_e2edp
00:38:30.257 ************************************
00:38:30.257 00:22:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:38:30.257 00:22:26 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:38:30.257 00:22:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:30.257 00:22:26 nvme -- common/autotest_common.sh@10 -- # set +x
00:38:30.257 ************************************
00:38:30.257 START TEST nvme_reserve
00:38:30.257 ************************************
00:38:30.257 00:22:26 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:38:30.516 =====================================================
00:38:30.516 NVMe Controller at PCI bus 0, device 16, function 0
00:38:30.516 =====================================================
00:38:30.516 Reservations: Not Supported
00:38:30.516 Reservation test passed
00:38:30.516 
00:38:30.516 real 0m0.314s
00:38:30.516 user 0m0.113s
00:38:30.516 sys 0m0.156s
00:22:26 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:30.516 00:22:26 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:38:30.516 ************************************
00:38:30.516 END TEST nvme_reserve
00:38:30.516 ************************************
00:38:30.775 00:22:26 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:38:30.775 00:22:26 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:38:30.775 00:22:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:30.775 00:22:26 nvme -- common/autotest_common.sh@10 -- # set +x
00:38:30.775 ************************************
00:38:30.775 START TEST nvme_err_injection
00:38:30.775 ************************************
00:38:30.775 00:22:26 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:38:31.034 NVMe Error Injection test
00:38:31.034 Attached to 0000:00:10.0
00:38:31.034 0000:00:10.0: get features failed as expected
00:38:31.034 0000:00:10.0: get features successfully as expected
00:38:31.034 0000:00:10.0: read failed as expected
00:38:31.034 0000:00:10.0: read successfully as expected
00:38:31.034 Cleaning up...
00:38:31.034 
00:38:31.034 real 0m0.325s
00:38:31.034 user 0m0.120s
00:38:31.034 sys 0m0.162s
00:22:26 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:31.034 00:22:26 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:38:31.034 ************************************
00:38:31.034 END TEST nvme_err_injection
00:38:31.034 ************************************
00:38:31.034 00:22:26 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:38:31.034 00:22:26 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:38:31.034 00:22:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:31.034 00:22:26 nvme -- common/autotest_common.sh@10 -- # set +x
00:38:31.035 ************************************
00:38:31.035 START TEST nvme_overhead
00:38:31.035 ************************************
00:38:31.035 00:22:26 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:38:32.414 Initializing NVMe Controllers
00:38:32.414 Attached to 0000:00:10.0
00:38:32.414 Initialization complete. Launching workers.
00:38:32.414 submit (in ns) avg, min, max = 17347.8, 12376.8, 97627.7
00:38:32.414 complete (in ns) avg, min, max = 12103.0, 8590.0, 47355.5
00:38:32.414 
00:38:32.414 Submit histogram
00:38:32.414 ================
00:38:32.414        Range in us     Cumulative     Count
00:38:32.414 [submit histogram buckets from 12.335us to 97.745us elided; cumulative count reaches 100.0000%]
00:38:32.415 
00:38:32.415 Complete histogram
00:38:32.415 ==================
00:38:32.415        Range in us     Cumulative     Count
00:38:32.415 [complete histogram buckets from 8.553us to 47.476us elided; cumulative count reaches 100.0000%]
00:38:32.417 
00:38:32.417 
00:38:32.417 real 0m1.307s
00:38:32.417 user 0m1.120s
00:38:32.417 sys 0m0.150s
00:22:28 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:32.417 00:22:28 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:38:32.417 ************************************
00:38:32.417 END TEST nvme_overhead
00:38:32.417 ************************************
00:38:32.417 00:22:28 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:38:32.417 00:22:28 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:38:32.417 00:22:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:32.417 00:22:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:38:32.417 ************************************
00:38:32.417 START TEST nvme_arbitration
00:38:32.417 ************************************
00:38:32.417 00:22:28 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:38:35.706 Initializing NVMe Controllers
00:38:35.706 Attached to 0000:00:10.0
00:38:35.706 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:38:35.706 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:38:35.706 Associating QEMU NVMe Ctrl (12340 ) with lcore 2
00:38:35.706 Associating QEMU NVMe Ctrl (12340 ) with lcore 3
00:38:35.706 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:38:35.706 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:38:35.707 Initialization complete. Launching workers.
00:38:35.707 Starting thread on core 1 with urgent priority queue
00:38:35.707 Starting thread on core 2 with urgent priority queue
00:38:35.707 Starting thread on core 3 with urgent priority queue
00:38:35.707 Starting thread on core 0 with urgent priority queue
00:38:35.707 QEMU NVMe Ctrl (12340 ) core 0: 1194.67 IO/s 83.71 secs/100000 ios
00:38:35.707 QEMU NVMe Ctrl (12340 ) core 1: 1365.33 IO/s 73.24 secs/100000 ios
00:38:35.707 QEMU NVMe Ctrl (12340 ) core 2: 746.67 IO/s 133.93 secs/100000 ios
00:38:35.707 QEMU NVMe Ctrl (12340 ) core 3: 576.00 IO/s 173.61 secs/100000 ios
00:38:35.707 ========================================================
00:38:35.707 
00:38:35.707 
00:38:35.707 real 0m3.412s
00:38:35.707 user 0m9.294s
00:38:35.707 sys 0m0.190s
00:22:31 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:35.707 00:22:31 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:38:35.707 ************************************
00:38:35.707 END TEST nvme_arbitration
00:38:35.707 ************************************
00:38:35.966 00:22:31 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:38:35.966 00:22:31 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:38:35.966 00:22:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:35.966 00:22:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:38:35.966 ************************************
00:38:35.966 START TEST nvme_single_aen
00:38:35.966 ************************************
00:38:35.966 00:22:31 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:38:35.966 Asynchronous Event Request test
00:38:35.966 Attached to 0000:00:10.0
00:38:35.966 Reset controller to setup AER completions for this process
00:38:35.966 Registering asynchronous event callbacks...
00:38:35.966 Getting orig temperature thresholds of all controllers
00:38:35.966 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:38:35.966 Setting all controllers temperature threshold low to trigger AER
00:38:35.966 Waiting for all controllers temperature threshold to be set lower
00:38:35.966 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:38:35.966 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:38:35.966 Waiting for all controllers to trigger AER and reset threshold
00:38:35.966 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:38:35.966 Cleaning up...
00:38:36.225 
00:38:36.225 real 0m0.243s
00:38:36.225 user 0m0.085s
00:38:36.225 sys 0m0.120s
00:22:31 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:36.225 00:22:31 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:38:36.225 ************************************
00:38:36.225 END TEST nvme_single_aen
00:38:36.225 ************************************
00:38:36.226 00:22:31 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:38:36.226 00:22:31 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:38:36.226 00:22:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:36.226 00:22:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:38:36.226 ************************************
00:38:36.226 START TEST nvme_doorbell_aers
00:38:36.226 ************************************
00:38:36.226 00:22:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers
00:38:36.226 00:22:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:38:36.226 00:22:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:38:36.226 00:22:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:38:36.226 00:22:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:38:36.226 00:22:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=()
00:38:36.226 00:22:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs
00:38:36.226 00:22:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:38:36.226 00:22:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:38:36.226 00:22:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:38:36.485 00:22:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:38:36.485 00:22:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0
00:38:36.485 00:22:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:38:36.485 00:22:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:38:36.485 [2024-07-25 00:22:32.177141] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 120607) is not found. Dropping the request.
00:38:46.456 Executing: test_write_invalid_db
00:38:46.456 Waiting for AER completion...
00:38:46.456 Failure: test_write_invalid_db
00:38:46.456 
00:38:46.456 Executing: test_invalid_db_write_overflow_sq
00:38:46.456 Waiting for AER completion...
00:38:46.456 Failure: test_invalid_db_write_overflow_sq
00:38:46.456 
00:38:46.456 Executing: test_invalid_db_write_overflow_cq
00:38:46.456 Waiting for AER completion...
00:38:46.456 Failure: test_invalid_db_write_overflow_cq 00:38:46.456 00:38:46.456 00:38:46.456 real 0m10.098s 00:38:46.456 user 0m8.635s 00:38:46.456 sys 0m1.415s 00:38:46.456 00:22:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:46.456 00:22:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:38:46.456 ************************************ 00:38:46.456 END TEST nvme_doorbell_aers 00:38:46.456 ************************************ 00:38:46.456 00:22:42 nvme -- nvme/nvme.sh@97 -- # uname 00:38:46.456 00:22:42 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:38:46.456 00:22:42 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:38:46.456 00:22:42 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:38:46.456 00:22:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:46.456 00:22:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:38:46.456 ************************************ 00:38:46.456 START TEST nvme_multi_aen 00:38:46.456 ************************************ 00:38:46.456 00:22:42 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:38:46.456 [2024-07-25 00:22:42.313533] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 120607) is not found. Dropping the request. 00:38:46.456 [2024-07-25 00:22:42.313637] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 120607) is not found. Dropping the request. 00:38:46.456 [2024-07-25 00:22:42.313656] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 120607) is not found. Dropping the request. 00:38:46.456 Child process pid: 120775 00:38:46.715 [Child] Asynchronous Event Request test 00:38:46.715 [Child] Attached to 0000:00:10.0 00:38:46.715 [Child] Registering asynchronous event callbacks... 00:38:46.715 [Child] Getting orig temperature thresholds of all controllers 00:38:46.715 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:38:46.715 [Child] Waiting for all controllers to trigger AER and reset threshold 00:38:46.715 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:38:46.715 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:38:46.715 [Child] Cleaning up... 00:38:46.974 Asynchronous Event Request test 00:38:46.974 Attached to 0000:00:10.0 00:38:46.974 Reset controller to setup AER completions for this process 00:38:46.974 Registering asynchronous event callbacks... 00:38:46.974 Getting orig temperature thresholds of all controllers 00:38:46.974 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:38:46.974 Setting all controllers temperature threshold low to trigger AER 00:38:46.974 Waiting for all controllers temperature threshold to be set lower 00:38:46.974 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:38:46.974 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:38:46.974 Waiting for all controllers to trigger AER and reset threshold 00:38:46.974 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:38:46.974 Cleaning up... 
00:38:46.974 00:38:46.974 real 0m0.567s 00:38:46.974 user 0m0.203s 00:38:46.974 sys 0m0.263s 00:38:46.974 00:22:42 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:46.974 00:22:42 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:38:46.974 ************************************ 00:38:46.974 END TEST nvme_multi_aen 00:38:46.974 ************************************ 00:38:46.974 00:22:42 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:38:46.974 00:22:42 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:46.974 00:22:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:46.974 00:22:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:38:46.974 ************************************ 00:38:46.974 START TEST nvme_startup 00:38:46.974 ************************************ 00:38:46.974 00:22:42 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:38:47.233 Initializing NVMe Controllers 00:38:47.233 Attached to 0000:00:10.0 00:38:47.233 Initialization complete. 00:38:47.233 Time used:219548.688 (us). 00:38:47.233 00:38:47.233 real 0m0.298s 00:38:47.233 user 0m0.118s 00:38:47.233 sys 0m0.140s 00:38:47.233 00:22:42 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:47.233 00:22:42 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:38:47.233 ************************************ 00:38:47.233 END TEST nvme_startup 00:38:47.233 ************************************ 00:38:47.233 00:22:43 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:38:47.233 00:22:43 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:47.233 00:22:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:47.233 00:22:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:38:47.233 ************************************ 00:38:47.233 START TEST nvme_multi_secondary 00:38:47.233 ************************************ 00:38:47.233 00:22:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:38:47.233 00:22:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=120825 00:38:47.233 00:22:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:38:47.233 00:22:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=120826 00:38:47.233 00:22:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:38:47.233 00:22:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:38:50.546 Initializing NVMe Controllers 00:38:50.546 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:38:50.546 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:38:50.546 Initialization complete. Launching workers. 
00:38:50.546 ======================================================== 00:38:50.546 Latency(us) 00:38:50.546 Device Information : IOPS MiB/s Average min max 00:38:50.546 PCIE (0000:00:10.0) NSID 1 from core 2: 15759.33 61.56 1014.44 147.23 8609.04 00:38:50.546 ======================================================== 00:38:50.546 Total : 15759.33 61.56 1014.44 147.23 8609.04 00:38:50.546 00:38:50.546 00:22:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 120825 00:38:50.546 Initializing NVMe Controllers 00:38:50.546 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:38:50.546 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:38:50.546 Initialization complete. Launching workers. 00:38:50.546 ======================================================== 00:38:50.546 Latency(us) 00:38:50.546 Device Information : IOPS MiB/s Average min max 00:38:50.546 PCIE (0000:00:10.0) NSID 1 from core 1: 35769.67 139.73 446.96 127.45 3203.87 00:38:50.546 ======================================================== 00:38:50.546 Total : 35769.67 139.73 446.96 127.45 3203.87 00:38:50.546 00:38:53.080 Initializing NVMe Controllers 00:38:53.080 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:38:53.080 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:38:53.080 Initialization complete. Launching workers. 00:38:53.080 ======================================================== 00:38:53.080 Latency(us) 00:38:53.080 Device Information : IOPS MiB/s Average min max 00:38:53.080 PCIE (0000:00:10.0) NSID 1 from core 0: 44585.00 174.16 358.51 95.42 1373.91 00:38:53.080 ======================================================== 00:38:53.080 Total : 44585.00 174.16 358.51 95.42 1373.91 00:38:53.080 00:38:53.080 00:22:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 120826 00:38:53.080 00:22:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=120895 00:38:53.080 00:22:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:38:53.080 00:22:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=120896 00:38:53.080 00:22:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:38:53.080 00:22:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:38:56.365 Initializing NVMe Controllers 00:38:56.365 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:38:56.365 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:38:56.365 Initialization complete. Launching workers. 00:38:56.365 ======================================================== 00:38:56.365 Latency(us) 00:38:56.365 Device Information : IOPS MiB/s Average min max 00:38:56.365 PCIE (0000:00:10.0) NSID 1 from core 0: 34735.99 135.69 460.27 111.61 1380.08 00:38:56.365 ======================================================== 00:38:56.365 Total : 34735.99 135.69 460.27 111.61 1380.08 00:38:56.365 00:38:56.365 Initializing NVMe Controllers 00:38:56.365 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:38:56.365 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:38:56.365 Initialization complete. Launching workers. 
00:38:56.365 ======================================================== 00:38:56.365 Latency(us) 00:38:56.365 Device Information : IOPS MiB/s Average min max 00:38:56.365 PCIE (0000:00:10.0) NSID 1 from core 1: 35244.65 137.67 453.62 109.22 4192.43 00:38:56.365 ======================================================== 00:38:56.365 Total : 35244.65 137.67 453.62 109.22 4192.43 00:38:56.365 00:38:58.267 Initializing NVMe Controllers 00:38:58.267 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:38:58.267 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:38:58.267 Initialization complete. Launching workers. 00:38:58.267 ======================================================== 00:38:58.267 Latency(us) 00:38:58.267 Device Information : IOPS MiB/s Average min max 00:38:58.267 PCIE (0000:00:10.0) NSID 1 from core 2: 18331.20 71.61 872.32 154.51 7690.66 00:38:58.267 ======================================================== 00:38:58.267 Total : 18331.20 71.61 872.32 154.51 7690.66 00:38:58.267 00:38:58.267 00:22:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 120895 00:38:58.267 00:22:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 120896 00:38:58.267 00:38:58.267 real 0m10.901s 00:38:58.267 user 0m18.581s 00:38:58.267 sys 0m0.965s 00:38:58.267 00:22:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:58.267 00:22:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:38:58.267 ************************************ 00:38:58.267 END TEST nvme_multi_secondary 00:38:58.267 ************************************ 00:38:58.267 00:22:53 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:38:58.267 00:22:53 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:38:58.267 00:22:53 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/120251 ]] 00:38:58.267 00:22:53 nvme -- common/autotest_common.sh@1090 -- # kill 120251 00:38:58.267 00:22:53 nvme -- common/autotest_common.sh@1091 -- # wait 120251 00:38:58.267 ================================================================= 00:38:58.268 ==120251==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x2000f97ebff0 at pc 0x7ffff78f8b55 bp 0x7fffffffccd0 sp 0x7fffffffc478 00:38:58.268 WRITE of size 192 at 0x2000f97ebff0 thread T0 (reactor_1) 00:38:58.527 #0 0x7ffff78f8b54 in memset ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors_memintrinsics.inc:87 00:38:58.527 #1 0x555555f335fe in malloc_elem_join_adjacent_free ../lib/eal/common/malloc_elem.c:532 00:38:58.527 #2 0x555555f33874 in malloc_elem_free ../lib/eal/common/malloc_elem.c:586 00:38:58.527 #3 0x555555f37624 in malloc_heap_free ../lib/eal/common/malloc_heap.c:895 00:38:58.527 #4 0x555555f3a9a2 in mem_free ../lib/eal/common/rte_malloc.c:37 00:38:58.527 #5 0x555555f3aa49 in rte_free ../lib/eal/common/rte_malloc.c:44 00:38:58.527 #6 0x555555f16803 in rte_memzone_free ../lib/eal/common/eal_common_memzone.c:336 00:38:58.527 #7 0x5555561951aa in rte_ring_free ../lib/ring/rte_ring.c:361 00:38:58.527 #8 0x555555e41b45 in spdk_ring_free /home/vagrant/spdk_repo/spdk/lib/env_dpdk/env.c:397 00:38:58.527 #9 0x555555d243ba in nvme_io_msg_ctrlr_detach /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_io_msg.c:188 00:38:58.527 #10 0x555555d25129 in nvme_io_msg_ctrlr_unregister /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_io_msg.c:214 00:38:58.527 #11 0x555555d5595f in spdk_nvme_cuse_unregister /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c:1480 00:38:58.527 #12 0x555555b040c8 in cleanup 
/home/vagrant/spdk_repo/spdk/test/app/stub/stub.c:34 00:38:58.527 #13 0x555555b054c3 in main /home/vagrant/spdk_repo/spdk/test/app/stub/stub.c:197 00:38:58.527 #14 0x7ffff662a1c9 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58 00:38:58.527 #15 0x7ffff662a28a in __libc_start_main_impl ../csu/libc-start.c:360 00:38:58.527 #16 0x555555b03ab4 in _start (/home/vagrant/spdk_repo/spdk/test/app/stub/stub+0x5afab4) (BuildId: 9746d89868601ac6bb8042776b8ca05f3e4598c1) 00:38:58.527 00:38:58.527 Address 0x2000f97ebff0 is a wild pointer inside of access range of size 0x0000000000c0. 00:38:58.527 SUMMARY: AddressSanitizer: heap-buffer-overflow ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors_memintrinsics.inc:87 in memset 00:38:58.527 Shadow bytes around the buggy address: 00:38:58.527 0x2000f97ebd00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00:38:58.527 0x2000f97ebd80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00:38:58.527 0x2000f97ebe00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00:38:58.527 0x2000f97ebe80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00:38:58.527 0x2000f97ebf00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00:38:58.527 =>0x2000f97ebf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[fa]fa 00:38:58.527 0x2000f97ec000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00:38:58.527 0x2000f97ec080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00:38:58.527 0x2000f97ec100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00:38:58.527 0x2000f97ec180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00:38:58.527 0x2000f97ec200: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00:38:58.527 Shadow byte legend (one shadow byte represents 8 application bytes): 00:38:58.527 Addressable: 00 00:38:58.527 Partially addressable: 01 02 03 04 05 06 07 00:38:58.527 Heap left redzone: fa 00:38:58.527 Freed heap region: fd 00:38:58.527 Stack left redzone: f1 00:38:58.527 Stack mid redzone: f2 00:38:58.527 Stack right redzone: f3 00:38:58.527 Stack after return: f5 00:38:58.527 Stack use after scope: f8 00:38:58.527 Global redzone: f9 00:38:58.527 Global init order: f6 00:38:58.527 Poisoned by user: f7 00:38:58.527 Container overflow: fc 00:38:58.527 Array cookie: ac 00:38:58.527 Intra object redzone: bb 00:38:58.527 ASan internal: fe 00:38:58.527 Left alloca redzone: ca 00:38:58.527 Right alloca redzone: cb 00:38:58.527 ==120251==ABORTING 00:38:58.527 00:22:54 nvme -- common/autotest_common.sh@1092 -- # : 00:38:58.527 00:22:54 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:38:58.527 00:22:54 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:38:58.527 00:22:54 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:38:58.527 00:22:54 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:58.527 00:22:54 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:58.527 00:22:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:38:58.527 ************************************ 00:38:58.527 START TEST bdev_nvme_reset_stuck_adm_cmd 00:38:58.527 ************************************ 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:38:58.527 * Looking for test storage... 
00:38:58.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:58.527 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=121036 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 121036 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 121036 ']' 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:58.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
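The get_nvme_bdfs / get_first_nvme_bdf helpers traced above discover controller PCI addresses by asking gen_nvme.sh for a JSON config and extracting each params.traddr with jq. A sketch of that pattern as the trace shows it (gen_nvme.sh's JSON shape beyond .config[].params.traddr is assumed):

    get_nvme_bdfs() {
        local bdfs=()
        # gen_nvme.sh prints an attach config; every params.traddr is a PCI bdf
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1   # no NVMe devices found
        printf '%s\n' "${bdfs[@]}"
    }

    get_first_nvme_bdf() {
        # first enumerated controller, e.g. 0000:00:10.0 in this run
        get_nvme_bdfs | head -n1
    }

head -n1 here is a stand-in; the trace only shows that a single bdf, 0000:00:10.0, ends up echoed.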
00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:58.786 00:22:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:38:58.786 [2024-07-25 00:22:54.525467] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:38:58.786 [2024-07-25 00:22:54.525658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121036 ] 00:38:59.044 [2024-07-25 00:22:54.720554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:59.303 [2024-07-25 00:22:54.925177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.303 [2024-07-25 00:22:54.925330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:59.303 [2024-07-25 00:22:54.925415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.303 [2024-07-25 00:22:54.925425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:59.870 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:38:59.871 nvme0n1 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_R7lFR.txt 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:38:59.871 true 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721866975 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # 
get_feat_pid=121059 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:38:59.871 00:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:39:02.401 [2024-07-25 00:22:57.654964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:39:02.401 [2024-07-25 00:22:57.655374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:02.401 [2024-07-25 00:22:57.655419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:39:02.401 [2024-07-25 00:22:57.655443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:02.401 [2024-07-25 00:22:57.657535] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.401 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 121059 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 121059 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 121059 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_R7lFR.txt 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:39:02.401 
00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:39:02.401 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_R7lFR.txt 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 121036 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 121036 ']' 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 121036 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121036 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:02.402 killing process with pid 121036 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121036' 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 121036 00:39:02.402 00:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 121036 00:39:04.304 00:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:39:04.304 00:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:39:04.304 
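The @11-@15 trace above decodes the raw completion that bdev_nvme_send_cmd saved to the temp file: base64-decode the 16-byte CQE, hexdump it one byte per line, then shift and mask to recover the status code (sc, args '1 255') and the status code type (sct, args '9 3'). A condensed sketch; the byte offsets follow the NVMe completion layout (status field in bytes 14-15 of the CQE), and the real helper's internals may differ:

    base64_decode_bits() {
        local b64=$1 shift_by=$2 mask=$3
        local bin_array status
        # one "0xNN" token per decoded byte, exactly as in the trace above
        bin_array=($(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"'))
        # CQE dword 3: bytes 14-15 hold the phase bit plus the status field
        status=$(( bin_array[14] | bin_array[15] << 8 ))
        printf '0x%x\n' $(( (status >> shift_by) & mask ))
    }

With the blob logged above (byte 14 = 0x02), '1 255' yields 0x1 and '9 3' yields 0x0, matching nvme_status_sc=0x1 / nvme_status_sct=0x0 -- the injected Invalid Opcode (00/01) completion.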
************************************ 00:39:04.304 END TEST bdev_nvme_reset_stuck_adm_cmd 00:39:04.304 ************************************ 00:39:04.304 00:39:04.304 real 0m5.358s 00:39:04.304 user 0m18.402s 00:39:04.304 sys 0m0.605s 00:39:04.304 00:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:04.304 00:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:39:04.304 00:22:59 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:39:04.304 00:22:59 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:39:04.304 00:22:59 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:04.304 00:22:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:04.304 00:22:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:04.304 ************************************ 00:39:04.304 START TEST nvme_fio 00:39:04.304 ************************************ 00:39:04.304 00:22:59 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:39:04.304 00:22:59 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:39:04.304 00:22:59 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:39:04.304 00:22:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:39:04.304 00:22:59 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:39:04.304 00:22:59 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:39:04.304 00:22:59 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:04.304 00:22:59 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:39:04.304 00:22:59 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:39:04.304 00:22:59 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:39:04.304 00:22:59 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:39:04.304 00:22:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:39:04.304 00:22:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:39:04.304 00:22:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:39:04.304 00:22:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:39:04.304 00:22:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:39:04.304 00:22:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:39:04.304 00:22:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:39:04.563 00:23:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:39:04.563 00:23:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:04.563 00:23:00 nvme.nvme_fio -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:39:04.563 00:23:00 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:39:04.563 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:39:04.563 fio-3.35 00:39:04.563 Starting 1 thread 00:39:07.869 00:39:07.869 test: (groupid=0, jobs=1): err= 0: pid=121194: Thu Jul 25 00:23:03 2024 00:39:07.869 read: IOPS=13.4k, BW=52.4MiB/s (54.9MB/s)(105MiB/2001msec) 00:39:07.869 slat (nsec): min=3910, max=92120, avg=6743.91, stdev=4368.27 00:39:07.869 clat (usec): min=308, max=9334, avg=4752.84, stdev=432.28 00:39:07.869 lat (usec): min=314, max=9426, avg=4759.58, stdev=432.61 00:39:07.869 clat percentiles (usec): 00:39:07.869 | 1.00th=[ 3458], 5.00th=[ 4146], 10.00th=[ 4293], 20.00th=[ 4490], 00:39:07.869 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 00:39:07.869 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:39:07.869 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7308], 99.95th=[ 8291], 00:39:07.869 | 99.99th=[ 9241] 00:39:07.869 bw ( KiB/s): min=52656, max=53936, per=99.68%, avg=53481.33, stdev=715.99, samples=3 00:39:07.869 iops : min=13164, max=13484, avg=13370.33, stdev=179.00, samples=3 00:39:07.869 write: IOPS=13.4k, BW=52.4MiB/s (54.9MB/s)(105MiB/2001msec); 0 zone resets 00:39:07.869 slat (nsec): min=4123, max=53675, avg=6801.09, stdev=4213.47 00:39:07.869 clat (usec): min=242, max=9260, avg=4758.01, stdev=438.94 00:39:07.869 lat (usec): min=247, max=9278, avg=4764.81, stdev=439.16 00:39:07.870 clat percentiles (usec): 00:39:07.870 | 1.00th=[ 3425], 5.00th=[ 4146], 10.00th=[ 4293], 20.00th=[ 4490], 00:39:07.870 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 00:39:07.870 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:39:07.870 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7504], 99.95th=[ 8291], 00:39:07.870 | 99.99th=[ 9110] 00:39:07.870 bw ( KiB/s): min=52752, max=54083, per=99.86%, avg=53534.33, stdev=695.59, samples=3 00:39:07.870 iops : min=13188, max=13520, avg=13383.33, stdev=173.60, samples=3 00:39:07.870 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 
1000=0.01% 00:39:07.870 lat (msec) : 2=0.10%, 4=3.16%, 10=96.70% 00:39:07.870 cpu : usr=99.85%, sys=0.10%, ctx=7, majf=0, minf=609 00:39:07.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:39:07.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:07.870 issued rwts: total=26841,26817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.870 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:07.870 00:39:07.870 Run status group 0 (all jobs): 00:39:07.870 READ: bw=52.4MiB/s (54.9MB/s), 52.4MiB/s-52.4MiB/s (54.9MB/s-54.9MB/s), io=105MiB (110MB), run=2001-2001msec 00:39:07.870 WRITE: bw=52.4MiB/s (54.9MB/s), 52.4MiB/s-52.4MiB/s (54.9MB/s-54.9MB/s), io=105MiB (110MB), run=2001-2001msec 00:39:07.870 ----------------------------------------------------- 00:39:07.870 Suppressions used: 00:39:07.870 count bytes template 00:39:07.870 1 32 /usr/src/fio/parse.c 00:39:07.870 ----------------------------------------------------- 00:39:07.870 00:39:07.870 00:23:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:39:07.870 00:23:03 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:39:07.870 00:39:07.870 real 0m3.814s 00:39:07.870 user 0m3.072s 00:39:07.870 sys 0m0.387s 00:39:07.870 00:23:03 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:07.870 ************************************ 00:39:07.870 00:23:03 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:39:07.870 END TEST nvme_fio 00:39:07.870 ************************************ 00:39:07.870 00:39:07.870 real 0m44.547s 00:39:07.870 user 2m1.842s 00:39:07.870 sys 0m8.144s 00:39:07.870 00:23:03 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:07.870 00:23:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:07.870 ************************************ 00:39:07.870 END TEST nvme 00:39:07.870 ************************************ 00:39:07.870 00:23:03 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:39:07.870 00:23:03 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:39:07.870 00:23:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:07.870 00:23:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:07.870 00:23:03 -- common/autotest_common.sh@10 -- # set +x 00:39:07.870 ************************************ 00:39:07.870 START TEST nvme_scc 00:39:07.870 ************************************ 00:39:07.870 00:23:03 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:39:07.870 * Looking for test storage... 
00:39:07.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:39:07.870 00:23:03 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:07.870 00:23:03 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:07.870 00:23:03 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:07.870 00:23:03 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:07.870 00:23:03 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:07.870 00:23:03 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:07.870 00:23:03 nvme_scc -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:07.870 00:23:03 nvme_scc -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:07.870 00:23:03 nvme_scc -- paths/export.sh@6 -- # export PATH 00:39:07.870 00:23:03 nvme_scc -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:39:07.870 00:23:03 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:39:07.870 00:23:03 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:07.870 00:23:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:39:07.870 00:23:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:39:07.870 00:23:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:39:07.870 00:23:03 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:08.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:39:08.439 Waiting for block devices as requested 00:39:08.439 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:08.439 00:23:04 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:39:08.439 00:23:04 nvme_scc -- scripts/common.sh@15 -- # local i 00:39:08.439 00:23:04 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:39:08.439 00:23:04 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:39:08.439 00:23:04 nvme_scc -- scripts/common.sh@24 -- # return 0 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@18 -- # shift 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.439 
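The long dump that follows is scan_nvme_ctrls caching every field of nvme id-ctrl /dev/nvme0 into the nvme0 associative array, one eval per register. The core loop, reduced to what the trace shows (whitespace handling and the per-namespace pass of the real functions.sh are omitted):

    nvme_get() {
        # nvme_get nvme0 id-ctrl /dev/nvme0  ->  fills global array nvme0[...]
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue
            reg=${reg//[[:space:]]/}               # "vid       " -> "vid"
            eval "${ref}[${reg}]=\"${val# }\""     # e.g. nvme0[vid]=0x1b36
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Values keep the padding id-ctrl prints, which is why the trace stores nvme0[sn]='12340 ' and nvme0[mn]='QEMU NVMe Ctrl ' with trailing spaces.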
00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:39:08.439 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme0[cntlid]="0"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:39:08.440 00:23:04 
nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:39:08.440 00:23:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:39:08.440 00:23:04 nvme_scc -- values stored in nvme0[] (id-ctrl):
00:39:08.440 00:23:04 nvme_scc --   cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3
00:39:08.440 00:23:04 nvme_scc --   lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:39:08.441 00:23:04 nvme_scc --   sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:39:08.441 00:23:04 nvme_scc --   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:39:08.442 00:23:04 nvme_scc --   subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:39:08.442 00:23:04 nvme_scc --   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:39:08.442 00:23:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:39:08.442 00:23:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:39:08.442 00:23:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:39:08.442 00:23:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:39:08.442 00:23:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:39:08.442 00:23:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:39:08.442 00:23:04 nvme_scc -- nvme/functions.sh@18 -- # shift
00:39:08.442 00:23:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:39:08.442 00:23:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:39:08.442 00:23:04 nvme_scc -- nvme/functions.sh@21-23 -- # same per-field parse loop, values stored in nvme0n1[] (id-ns):
00:39:08.443 00:23:04 nvme_scc --   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:39:08.443 00:23:04 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:39:08.443 00:23:04 nvme_scc --   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:39:08.444 00:23:04 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 ))
00:39:08.444 00:23:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@190 -- # (( 1 == 0 ))
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]]
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:39:08.444 00:23:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:39:08.445 00:23:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:39:08.445 00:23:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:39:08.445 00:23:04 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:39:08.445 00:23:04 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:39:08.445 00:23:04 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0
00:39:08.445 00:23:04 nvme_scc -- nvme/functions.sh@205 -- # (( 1 > 0 ))
00:39:08.445 00:23:04 nvme_scc -- nvme/functions.sh@206 -- # echo nvme0
00:39:08.445 00:23:04 nvme_scc -- nvme/functions.sh@207 -- # return 0
00:39:08.445 00:23:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0
00:39:08.445 00:23:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:39:08.445 00:23:04 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:39:09.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:39:09.012 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
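A few records up, ctrl_has_scc gated on ONCS bit 8, which is how a controller advertises the Simple Copy Command in the NVMe spec; the 0x15d read back from the QEMU controller has that bit set (0x15d & 0x100 != 0), so nvme0 qualifies for this suite. The check in isolation is plain shell arithmetic:

    # ONCS (Optional NVM Command Support) from id-ctrl; bit 8 = Simple Copy.
    oncs=0x15d                    # value reported by the QEMU controller above
    if (( oncs & 1 << 8 )); then
        echo "nvme0 supports Simple Copy (ONCS=$oncs)"
    fi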
00:39:09.579 00:23:05 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:39:09.579 00:23:05 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:39:09.579 00:23:05 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:39:09.579 00:23:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:39:09.579 ************************************
00:39:09.579 START TEST nvme_simple_copy
00:39:09.579 ************************************
00:39:09.579 00:23:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:39:10.149 Initializing NVMe Controllers
00:39:10.149 Attaching to 0000:00:10.0
00:39:10.149 Controller supports SCC. Attached to 0000:00:10.0
00:39:10.149 Namespace ID: 1 size: 5GB
00:39:10.149 Initialization complete.
00:39:10.149
00:39:10.149 Controller QEMU NVMe Ctrl (12340 )
00:39:10.149 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:39:10.149 Namespace Block Size:4096
00:39:10.149 Writing LBAs 0 to 63 with Random Data
00:39:10.149 Copied LBAs from 0 - 63 to the Destination LBA 256
00:39:10.149 LBAs matching Written Data: 64
00:39:10.149
00:39:10.149 real 0m0.306s
00:39:10.149 user 0m0.124s
00:39:10.149 sys 0m0.083s
00:39:10.149 00:23:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:10.149 ************************************
00:39:10.149 END TEST nvme_simple_copy
00:39:10.149 ************************************
00:39:10.149 00:23:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:39:10.149 ************************************
00:39:10.149 END TEST nvme_scc
00:39:10.149 ************************************
00:39:10.149
00:39:10.149 real 0m2.159s
00:39:10.149 user 0m0.603s
00:39:10.149 sys 0m1.501s
00:39:10.149 00:23:05 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:10.149 00:23:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x
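The pass criterion for nvme_simple_copy above ("LBAs matching Written Data: 64") is a full compare of the 64 copied LBAs. The simple_copy app performs that compare internally over SPDK's userspace NVMe driver; purely as an illustration (not the test's own method, and only meaningful while the namespace is bound to the kernel nvme driver rather than uio_pci_generic), an equivalent shell-level recheck would look like:

    # Hypothetical shell-level recheck, assuming the 4096-byte block size
    # reported above and a kernel-visible /dev/nvme0n1.
    dd if=/dev/nvme0n1 bs=4096 skip=0   count=64 of=/tmp/src.bin status=none
    dd if=/dev/nvme0n1 bs=4096 skip=256 count=64 of=/tmp/dst.bin status=none
    cmp --silent /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"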
00:39:10.149 00:23:05 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]]
00:39:10.149 00:23:05 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]]
00:39:10.149 00:23:05 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]]
00:39:10.149 00:23:05 -- spdk/autotest.sh@236 -- # [[ 0 -eq 1 ]]
00:39:10.149 00:23:05 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]]
00:39:10.149 00:23:05 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:39:10.149 00:23:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:39:10.149 00:23:05 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:39:10.149 00:23:05 -- common/autotest_common.sh@10 -- # set +x
00:39:10.149 ************************************
00:39:10.149 START TEST nvme_rpc
00:39:10.149 ************************************
00:39:10.149 00:23:05 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:39:10.149 * Looking for test storage...
00:39:10.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:39:10.150 00:23:05 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:39:10.150 00:23:05 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=()
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs))
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=()
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0
00:39:10.150 00:23:05 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
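get_first_nvme_bdf above boils down to one pipeline: gen_nvme.sh emits an SPDK bdev configuration as JSON and jq extracts each controller's PCI address from it. Condensed from the trace, with the workspace paths of this run:

    # Discover NVMe PCI addresses the way common/autotest_common.sh does.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # the helper bails out when no NVMe is found
    echo "${bdfs[0]}"                 # -> 0000:00:10.0 on this VM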
00:39:10.150 00:23:05 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=121618
00:39:10.150 00:23:05 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:39:10.150 00:23:05 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:39:10.150 00:23:05 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 121618
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 121618 ']'
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:39:10.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:05 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:39:10.150 00:23:05 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:39:10.417 [2024-07-25 00:23:06.046009] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
00:39:10.417 [2024-07-25 00:23:06.046197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121618 ]
00:39:10.417 [2024-07-25 00:23:06.220903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:39:10.675 [2024-07-25 00:23:06.456980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:39:10.675 [2024-07-25 00:23:06.456990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:39:11.242 00:23:07 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:39:11.242 00:23:07 nvme_rpc -- common/autotest_common.sh@864 -- # return 0
00:39:11.242 00:23:07 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:39:11.500 Nvme0n1
00:39:11.500 00:23:07 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:39:11.500 00:23:07 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:39:11.758 request:
00:39:11.758 {
00:39:11.758   "bdev_name": "Nvme0n1",
00:39:11.758   "filename": "non_existing_file",
00:39:11.758   "method": "bdev_nvme_apply_firmware",
00:39:11.758   "req_id": 1
00:39:11.758 }
00:39:11.758 Got JSON-RPC error response
00:39:11.758 response:
00:39:11.758 {
00:39:11.758   "code": -32603,
00:39:11.758   "message": "open file failed."
00:39:11.758 }
00:39:11.758 00:23:07 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
00:39:11.758 00:23:07 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:39:11.758 00:23:07 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:39:12.016 00:23:07 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:39:12.016 00:23:07 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 121618
00:39:12.016 00:23:07 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 121618 ']'
00:39:12.016 00:23:07 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 121618
00:39:12.016 00:23:07 nvme_rpc -- common/autotest_common.sh@955 -- # uname
00:39:12.016 00:23:07 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:39:12.016 00:23:07 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121618
00:39:12.016 killing process with pid 121618
00:39:12.016 00:23:07 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:39:12.016 00:23:07 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:39:12.016 00:23:07 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121618'
00:39:12.016 00:23:07 nvme_rpc -- common/autotest_common.sh@969 -- # kill 121618
00:39:12.016 00:23:07 nvme_rpc -- common/autotest_common.sh@974 -- # wait 121618
00:39:13.917 ************************************
00:39:13.917 END TEST nvme_rpc
00:39:13.917 ************************************
00:39:13.917
00:39:13.917 real 0m3.589s
00:39:13.917 user 0m6.565s
00:39:13.917 sys 0m0.597s
00:39:13.917 00:23:09 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:13.917 00:23:09 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
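The nvme_rpc body that just closed is a negative test: bdev_nvme_apply_firmware is pointed at a file that does not exist, and the expected outcome is the -32603 "open file failed." JSON-RPC error (rv=1), not a crash. A minimal reproduction against a running spdk_tgt, using the same commands as the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # prints Nvme0n1
    if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "got the expected 'open file failed.' error"
    fi
    $rpc bdev_nvme_detach_controller Nvme0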
00:39:13.917 00:23:09 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:39:13.917 00:23:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:39:13.917 00:23:09 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:39:13.917 00:23:09 -- common/autotest_common.sh@10 -- # set +x
00:39:13.917 ************************************
00:39:13.917 START TEST nvme_rpc_timeouts
00:39:13.917 ************************************
00:39:13.917 00:23:09 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:39:13.917 * Looking for test storage...
00:39:13.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:39:13.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_121678
00:23:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_121678
00:23:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=121702
00:23:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:23:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:23:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 121702
00:23:09 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 121702 ']'
00:23:09 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:09 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:09 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:09 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:09 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:39:13.918 [2024-07-25 00:23:09.617074] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization...
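waitforlisten (common/autotest_common.sh) is the stock start-up barrier seen in both of these tests: launch spdk_tgt, then poll its UNIX-domain RPC socket until the app answers, up to max_retries=100. A simplified sketch of that loop; rpc_get_methods is used here as a cheap liveness probe, and the retry cadence is an assumption, since the real helper also re-checks that the pid is still alive between attempts:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!
    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done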
00:39:13.918 [2024-07-25 00:23:09.617541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121702 ]
00:39:14.176 [2024-07-25 00:23:09.792148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:39:14.176 [2024-07-25 00:23:09.946876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:39:14.176 [2024-07-25 00:23:09.946893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:39:14.743 00:23:10 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:39:14.743 00:23:10 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0
00:39:14.743 00:23:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:39:14.743 Checking default timeout settings:
00:39:14.743 00:23:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:39:15.309 Making settings changes with rpc:
00:39:15.309 00:23:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:39:15.309 00:23:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:39:15.309 Check default vs. modified settings:
00:39:15.309 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:39:15.309 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:39:15.568 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:39:15.568 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:39:15.568 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_121678
00:39:15.568 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:39:15.568 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:39:15.568 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:39:15.568 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_121678
00:39:15.568 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:39:15.568 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:39:15.827 Setting action_on_timeout is changed as expected.
00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
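The grep/awk/sed passes around this point dig the three fields out of the save_config dumps textually. save_config emits the live configuration as JSON, so the same extraction can be done structurally; the JSON shape below (subsystems[].config[] entries carrying method and params) is an assumption about SPDK's save_config output, not something shown in this log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config \
        | jq '.subsystems[].config[]? | select(.method == "bdev_nvme_set_options")
              | .params | {action_on_timeout, timeout_us, timeout_admin_us}'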
00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_121678 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_121678 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:39:15.827 Setting timeout_us is changed as expected. 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_121678 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_121678 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:39:15.827 Setting timeout_admin_us is changed as expected. 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
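The three checks above share one pattern: save the target's configuration before and after the RPC, pull a single field out of each snapshot with grep/awk/sed, and pass only if the value changed. A standalone sketch of that flow, assuming a running spdk_tgt and the rpc.py path used in this run (the temp file names here are illustrative, not the ones the test generates):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default                    # snapshot the defaults
  $rpc bdev_nvme_set_options --timeout-us=12000000 \
       --timeout-admin-us=24000000 --action-on-timeout=abort  # apply the new timeouts
  $rpc save_config > /tmp/settings_modified                   # snapshot the new state
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting"  /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
  done

The sed pass strips the JSON punctuation around each value, which is why the trace shows none/abort and 0/12000000 as bare tokens.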
00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_121678 /tmp/settings_modified_121678 00:39:15.827 00:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 121702 00:39:15.827 00:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 121702 ']' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 121702 00:39:15.827 00:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:39:15.827 00:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121702 00:39:15.827 killing process with pid 121702 00:39:15.827 00:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:15.827 00:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121702' 00:39:15.827 00:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 121702 00:39:15.827 00:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 121702 00:39:17.726 RPC TIMEOUT SETTING TEST PASSED. 00:39:17.727 00:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:39:17.727 ************************************ 00:39:17.727 END TEST nvme_rpc_timeouts 00:39:17.727 ************************************ 00:39:17.727 00:39:17.727 real 0m3.784s 00:39:17.727 user 0m7.259s 00:39:17.727 sys 0m0.596s 00:39:17.727 00:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:17.727 00:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:39:17.727 00:23:13 -- spdk/autotest.sh@247 -- # uname -s 00:39:17.727 00:23:13 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:39:17.727 00:23:13 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:39:17.727 00:23:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:17.727 00:23:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:17.727 00:23:13 -- common/autotest_common.sh@10 -- # set +x 00:39:17.727 ************************************ 00:39:17.727 START TEST sw_hotplug 00:39:17.727 ************************************ 00:39:17.727 00:23:13 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:39:17.727 * Looking for test storage... 
00:39:17.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:39:17.727 00:23:13 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:17.984 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:39:17.984 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:39:18.550 00:23:14 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:39:18.550 00:23:14 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:39:18.550 00:23:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:39:18.550 00:23:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@230 -- # local class 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:18.550 00:23:14 sw_hotplug -- 
scripts/common.sh@325 -- # (( 1 )) 00:39:18.550 00:23:14 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:39:18.550 00:23:14 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1 00:39:18.550 00:23:14 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:39:18.550 00:23:14 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:18.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:39:18.809 Waiting for block devices as requested 00:39:18.809 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:19.067 00:23:14 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0 00:39:19.067 00:23:14 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:19.326 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:39:19.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:39:19.326 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:39:20.263 00:23:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=122212 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:39:20.263 00:23:15 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:39:20.263 00:23:15 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:39:20.263 00:23:15 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:39:20.263 00:23:15 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:39:20.263 00:23:15 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:39:20.263 00:23:15 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:39:20.263 Initializing NVMe Controllers 00:39:20.263 Attaching to 0000:00:10.0 00:39:20.263 Attached to 0000:00:10.0 00:39:20.263 Initialization complete. Starting I/O... 
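The nvme_in_userspace scan traced above selects controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express), hence the cc="0108" match plus the -p02 filter. Condensed into a single pipeline (a sketch, assuming only that lspci is available):

  # Print one BDF per NVMe controller; this run finds exactly 0000:00:10.0.
  lspci -mm -n -D | grep -i -- -p02 | \
      awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'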
00:39:20.263 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:39:20.263 00:39:21.640 QEMU NVMe Ctrl (12340 ): 1967 I/Os completed (+1967) 00:39:21.640 00:39:22.573 QEMU NVMe Ctrl (12340 ): 4463 I/Os completed (+2496) 00:39:22.573 00:39:23.506 QEMU NVMe Ctrl (12340 ): 7367 I/Os completed (+2904) 00:39:23.506 00:39:24.440 QEMU NVMe Ctrl (12340 ): 10343 I/Os completed (+2976) 00:39:24.440 00:39:25.377 QEMU NVMe Ctrl (12340 ): 13239 I/Os completed (+2896) 00:39:25.377 00:39:26.313 00:23:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:39:26.313 00:23:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:39:26.313 00:23:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:39:26.313 [2024-07-25 00:23:21.875824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:39:26.313 Controller removed: QEMU NVMe Ctrl (12340 ) 00:39:26.313 [2024-07-25 00:23:21.877440] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:26.313 [2024-07-25 00:23:21.877543] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:26.313 [2024-07-25 00:23:21.877572] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:26.313 [2024-07-25 00:23:21.877597] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:26.313 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:39:26.313 [2024-07-25 00:23:21.883903] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:26.313 [2024-07-25 00:23:21.883972] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:26.313 [2024-07-25 00:23:21.883998] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:26.313 [2024-07-25 00:23:21.884020] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:26.313 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:39:26.313 EAL: Scan for (pci) bus failed. 
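Each hotplug event in this run is driven by plain sysfs writes: the echo 1 at sw_hotplug.sh@40 yanks the controller out from under the driver (producing the failed-state and EAL scan errors above), and the echoes at @56 through @62 that follow bring it back and rebind it. xtrace does not record redirection targets, so the sysfs paths in this sketch are assumptions based on the standard Linux PCI interface, not lines captured from the script itself:

  bdf=0000:00:10.0
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"             # surprise-remove the controller
  echo 1 > /sys/bus/pci/rescan                            # rediscover it on the bus
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe                # rebind to the userspace driver
  echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override again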
00:39:26.313 00:23:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:39:26.313 00:23:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:39:26.313 00:23:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:39:26.313 00:23:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:39:26.313 00:23:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:39:26.313 00:23:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:39:26.313 00:23:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:39:26.313 00:23:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:39:26.313 Attaching to 0000:00:10.0 00:39:26.313 Attached to 0000:00:10.0 00:39:26.313 QEMU NVMe Ctrl (12340 ): 8 I/Os completed (+8) 00:39:26.313 00:39:27.288 QEMU NVMe Ctrl (12340 ): 2968 I/Os completed (+2960) 00:39:27.288 00:39:28.662 QEMU NVMe Ctrl (12340 ): 5868 I/Os completed (+2900) 00:39:28.662 00:39:29.227 QEMU NVMe Ctrl (12340 ): 8816 I/Os completed (+2948) 00:39:29.227 00:39:30.602 QEMU NVMe Ctrl (12340 ): 11808 I/Os completed (+2992) 00:39:30.602 00:39:31.538 QEMU NVMe Ctrl (12340 ): 14716 I/Os completed (+2908) 00:39:31.538 00:39:32.473 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:39:32.473 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:39:32.473 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:39:32.473 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:39:32.473 [2024-07-25 00:23:28.091320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:39:32.473 Controller removed: QEMU NVMe Ctrl (12340 ) 00:39:32.473 [2024-07-25 00:23:28.092912] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:32.473 [2024-07-25 00:23:28.092971] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:32.473 [2024-07-25 00:23:28.093003] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:32.473 [2024-07-25 00:23:28.093026] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:32.473 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:39:32.473 [2024-07-25 00:23:28.099066] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:32.473 [2024-07-25 00:23:28.099120] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:32.473 [2024-07-25 00:23:28.099146] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:32.473 [2024-07-25 00:23:28.099166] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:32.473 00:39:32.473 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:39:32.473 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:39:32.473 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:39:32.473 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:39:32.473 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:39:32.473 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:39:32.474 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:39:32.474 00:23:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:39:32.474 Attaching to 0000:00:10.0 00:39:32.474 Attached to 0000:00:10.0 00:39:33.408 QEMU NVMe Ctrl (12340 ): 2424 I/Os completed 
(+2424) 00:39:33.408 00:39:34.341 QEMU NVMe Ctrl (12340 ): 5296 I/Os completed (+2872) 00:39:34.341 00:39:35.276 QEMU NVMe Ctrl (12340 ): 8176 I/Os completed (+2880) 00:39:35.276 00:39:36.650 QEMU NVMe Ctrl (12340 ): 11072 I/Os completed (+2896) 00:39:36.650 00:39:37.594 QEMU NVMe Ctrl (12340 ): 13957 I/Os completed (+2885) 00:39:37.594 00:39:38.531 QEMU NVMe Ctrl (12340 ): 16861 I/Os completed (+2904) 00:39:38.531 00:39:38.531 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:39:38.531 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:39:38.531 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:39:38.531 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:39:38.531 [2024-07-25 00:23:34.280500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:39:38.531 Controller removed: QEMU NVMe Ctrl (12340 ) 00:39:38.531 [2024-07-25 00:23:34.281988] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:38.531 [2024-07-25 00:23:34.282048] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:38.531 [2024-07-25 00:23:34.282074] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:38.531 [2024-07-25 00:23:34.282097] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:38.531 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:39:38.531 [2024-07-25 00:23:34.287942] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:38.531 [2024-07-25 00:23:34.288004] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:38.531 [2024-07-25 00:23:34.288029] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:38.531 [2024-07-25 00:23:34.288052] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:38.531 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:39:38.531 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:39:38.531 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:39:38.531 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:39:38.531 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:39:38.790 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:39:38.790 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:39:38.790 00:23:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:39:38.790 Attaching to 0000:00:10.0 00:39:38.790 Attached to 0000:00:10.0 00:39:38.790 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:39:38.790 [2024-07-25 00:23:34.488059] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:39:45.358 00:23:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:39:45.358 00:23:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:39:45.358 00:23:40 sw_hotplug -- common/autotest_common.sh@717 -- # time=24.61 00:39:45.358 00:23:40 sw_hotplug -- common/autotest_common.sh@718 -- # echo 24.61 00:39:45.358 00:23:40 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:39:45.358 00:23:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.61 00:39:45.358 00:23:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete 
(handling %u nvme drive(s))' 24.61 1 00:39:45.358 remove_attach_helper took 24.61s to complete (handling 1 nvme drive(s)) 00:23:40 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:39:50.642 00:23:46 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 122212 00:39:50.642 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (122212) - No such process 00:39:50.642 00:23:46 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 122212 00:39:50.642 00:23:46 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:39:50.642 00:23:46 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:39:50.642 00:23:46 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:39:50.642 00:23:46 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=122523 00:39:50.642 00:23:46 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:50.642 00:23:46 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:39:50.642 00:23:46 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 122523 00:39:50.642 00:23:46 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 122523 ']' 00:39:50.642 00:23:46 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:50.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:50.642 00:23:46 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:50.642 00:23:46 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:50.642 00:23:46 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:50.642 00:23:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:39:50.901 [2024-07-25 00:23:46.577378] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:39:50.901 [2024-07-25 00:23:46.577571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122523 ] 00:39:50.901 [2024-07-25 00:23:46.753663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.160 [2024-07-25 00:23:46.967769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.729 00:23:47 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:51.729 00:23:47 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:39:51.729 00:23:47 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:39:51.729 00:23:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.729 00:23:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:39:51.729 00:23:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.729 00:23:47 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:39:51.729 00:23:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:39:51.729 00:23:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:39:51.729 00:23:47 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:39:51.729 00:23:47 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:39:51.729 00:23:47 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:39:51.729 00:23:47 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:39:51.729 00:23:47 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:39:51.729 00:23:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:39:51.729 00:23:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:39:51.729 00:23:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:39:51.729 00:23:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:39:51.729 00:23:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:39:58.294 00:23:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:39:58.294 00:23:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:39:58.294 00:23:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:39:58.294 00:23:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:39:58.294 00:23:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:39:58.294 00:23:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:39:58.294 00:23:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:39:58.294 00:23:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:39:58.294 00:23:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:39:58.294 00:23:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:58.294 00:23:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:39:58.294 00:23:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:58.294 00:23:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:39:58.294 00:23:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:39:58.294 [2024-07-25 00:23:53.648750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
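With use_bdev=true the test stops probing sysfs and instead asks the target which controllers it can still see: the bdev_bdfs helper above lists the PCI address behind every NVMe bdev, and the @50 check loops until that list is empty. The same query as a standalone command (a sketch, assuming rpc.py can reach the target's default RPC socket):

  # PCI address of every NVMe bdev known to the target, deduplicated.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | \
      jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
  # Non-empty output means the drive is still attached; the 'Still waiting
  # for ... to be gone' messages below are printed while that holds.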
00:39:58.294 [2024-07-25 00:23:53.651134] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:58.294 [2024-07-25 00:23:53.651221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:39:58.294 [2024-07-25 00:23:53.651243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:58.294 [2024-07-25 00:23:53.651266] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:58.294 [2024-07-25 00:23:53.651282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:39:58.294 [2024-07-25 00:23:53.651294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:58.294 [2024-07-25 00:23:53.651309] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:58.294 [2024-07-25 00:23:53.651320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:39:58.294 [2024-07-25 00:23:53.651336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:58.294 [2024-07-25 00:23:53.651349] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:58.294 [2024-07-25 00:23:53.651364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:39:58.294 [2024-07-25 00:23:53.651376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:58.294 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:39:58.294 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:39:58.294 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:39:58.294 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:39:58.294 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:39:58.294 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:39:58.294 00:23:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:58.294 00:23:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:39:58.294 00:23:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:58.294 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:39:58.294 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:39:58.553 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:39:58.553 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:39:58.553 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:39:58.553 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:39:58.553 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:39:58.553 00:23:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:40:05.114 00:24:00 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:05.114 00:24:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.114 00:24:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:05.114 00:24:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:05.114 00:24:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.114 00:24:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:05.114 00:24:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:40:05.114 [2024-07-25 00:24:00.448782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:40:05.114 [2024-07-25 00:24:00.450915] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:05.114 [2024-07-25 00:24:00.450967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:05.114 [2024-07-25 00:24:00.450986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.114 [2024-07-25 00:24:00.451025] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:05.114 [2024-07-25 00:24:00.451039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:05.114 [2024-07-25 00:24:00.451055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.114 [2024-07-25 00:24:00.451069] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:05.114 [2024-07-25 00:24:00.451084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:05.114 [2024-07-25 00:24:00.451112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.114 [2024-07-25 00:24:00.451159] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:05.114 [2024-07-25 00:24:00.451187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:05.114 [2024-07-25 00:24:00.451217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:05.114 00:24:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.114 00:24:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:05.114 00:24:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:40:05.114 00:24:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:40:05.372 00:24:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:05.372 00:24:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:05.372 00:24:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:40:05.372 00:24:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:40:05.372 00:24:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:05.372 00:24:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:40:11.932 00:24:07 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:11.932 00:24:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:11.932 00:24:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:11.932 00:24:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:11.932 00:24:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:11.932 00:24:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:11.932 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:11.932 [2024-07-25 00:24:07.248867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:40:11.932 [2024-07-25 00:24:07.250781] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:11.932 [2024-07-25 00:24:07.250870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:11.932 [2024-07-25 00:24:07.250892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:11.932 [2024-07-25 00:24:07.250916] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:11.932 [2024-07-25 00:24:07.250932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:11.932 [2024-07-25 00:24:07.250945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:11.932 [2024-07-25 00:24:07.250961] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:11.932 [2024-07-25 00:24:07.250973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:11.932 [2024-07-25 00:24:07.250987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:11.932 [2024-07-25 00:24:07.251000] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:11.932 [2024-07-25 00:24:07.251013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:11.933 [2024-07-25 00:24:07.251024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:11.933 00:24:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:11.933 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:40:11.933 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:40:11.933 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:40:11.933 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:11.933 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:11.933 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:11.933 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:11.933 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:11.933 00:24:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:11.933 00:24:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:11.933 00:24:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:11.933 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:40:11.933 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:40:12.191 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:12.191 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:12.191 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:40:12.191 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:40:12.191 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:12.191 00:24:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:40:18.753 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:40:18.753 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:40:18.753 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:40:18.753 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:18.753 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:18.753 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:18.753 00:24:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.753 00:24:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:18.753 00:24:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.753 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:40:18.753 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:18.753 00:24:13 sw_hotplug -- common/autotest_common.sh@717 -- # time=26.42 00:40:18.753 00:24:13 sw_hotplug -- common/autotest_common.sh@718 -- # echo 26.42 00:40:18.753 00:24:13 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:40:18.753 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=26.42 00:40:18.753 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 26.42 1 00:40:18.753 remove_attach_helper took 26.42s to complete (handling 1 nvme drive(s)) 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:40:18.753 00:24:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.753 00:24:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:18.753 00:24:13 sw_hotplug 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.753 00:24:13 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:40:18.753 00:24:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.753 00:24:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:18.753 00:24:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.753 00:24:14 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:40:18.753 00:24:14 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:40:18.753 00:24:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:40:18.753 00:24:14 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:40:18.753 00:24:14 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:40:18.753 00:24:14 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:40:18.753 00:24:14 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:40:18.753 00:24:14 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:40:18.753 00:24:14 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:40:18.753 00:24:14 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:40:18.753 00:24:14 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:40:18.753 00:24:14 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:40:18.753 00:24:14 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:25.386 00:24:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.386 00:24:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:25.386 00:24:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:40:25.386 [2024-07-25 00:24:20.099101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:40:25.386 [2024-07-25 00:24:20.101339] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:25.386 [2024-07-25 00:24:20.101406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:25.386 [2024-07-25 00:24:20.101442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:25.386 [2024-07-25 00:24:20.101484] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:25.386 [2024-07-25 00:24:20.101498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:25.386 [2024-07-25 00:24:20.101512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:25.386 [2024-07-25 00:24:20.101526] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:25.386 [2024-07-25 00:24:20.101539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:25.386 [2024-07-25 00:24:20.101551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:25.386 [2024-07-25 00:24:20.101566] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:25.386 [2024-07-25 00:24:20.101577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:25.386 [2024-07-25 00:24:20.101606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:25.386 00:24:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.386 00:24:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:25.386 00:24:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:25.386 00:24:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:40:31.948 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:40:31.948 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:40:31.948 00:24:26 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:40:31.948 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:31.948 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:31.948 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:31.948 00:24:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:31.948 00:24:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:31.948 00:24:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:31.948 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:40:31.948 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:31.948 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:31.948 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:31.948 [2024-07-25 00:24:26.799191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:40:31.948 [2024-07-25 00:24:26.801692] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:31.948 [2024-07-25 00:24:26.801750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:31.948 [2024-07-25 00:24:26.801772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:31.948 [2024-07-25 00:24:26.801808] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:31.948 [2024-07-25 00:24:26.801868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:31.948 [2024-07-25 00:24:26.801886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:31.948 [2024-07-25 00:24:26.801908] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:31.948 [2024-07-25 00:24:26.801922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:31.948 [2024-07-25 00:24:26.801938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:31.949 [2024-07-25 00:24:26.801952] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:31.949 [2024-07-25 00:24:26.801967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:31.949 [2024-07-25 00:24:26.801980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:31.949 00:24:26 sw_hotplug -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:40:31.949 00:24:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:31.949 00:24:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:40:31.949 00:24:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:40:31.949 00:24:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:31.949 00:24:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:37.219 00:24:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.219 00:24:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:37.219 00:24:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:37.219 00:24:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:37.219 00:24:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:37.219 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:37.478 00:24:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.478 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:40:37.478 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:40:37.478 [2024-07-25 00:24:33.099250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:40:37.478 [2024-07-25 00:24:33.101405] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:37.478 [2024-07-25 00:24:33.101484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:37.478 [2024-07-25 00:24:33.101503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:37.478 [2024-07-25 00:24:33.101528] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:37.478 [2024-07-25 00:24:33.101542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:37.478 [2024-07-25 00:24:33.101556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:37.478 [2024-07-25 00:24:33.101570] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:37.478 [2024-07-25 00:24:33.101583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:37.478 [2024-07-25 00:24:33.101595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:37.478 [2024-07-25 00:24:33.101611] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:37.478 [2024-07-25 00:24:33.101653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:37.478 [2024-07-25 00:24:33.101683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:37.736 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:40:37.736 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:37.736 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:37.736 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:37.736 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:37.736 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:37.736 00:24:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.736 00:24:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:37.995 00:24:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.995 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:40:37.995 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:40:37.995 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:37.995 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:37.995 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:40:37.995 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:40:37.995 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:37.995 00:24:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:40:44.557 00:24:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:40:44.557 00:24:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:40:44.557 00:24:39 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:40:44.557 00:24:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:44.557 00:24:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:44.557 00:24:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.557 00:24:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:40:44.557 00:24:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@717 -- # time=25.80 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@718 -- # echo 25.80 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:40:44.557 00:24:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.80 00:40:44.557 00:24:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.80 1 00:40:44.557 remove_attach_helper took 25.80s to complete (handling 1 nvme drive(s)) 00:24:39 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:40:44.557 00:24:39 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 122523 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 122523 ']' 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 122523 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122523 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:44.557 killing process with pid 122523 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122523' 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@969 -- # kill 122523 00:40:44.557 00:24:39 sw_hotplug -- common/autotest_common.sh@974 -- # wait 122523 00:40:45.936 00:24:41 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:46.194 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:40:46.194 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:40:46.761 ************************************ 00:40:46.761 END TEST sw_hotplug 00:40:46.761 ************************************ 00:40:46.761 00:40:46.761 real 1m29.120s 00:40:46.761 user 1m4.747s 00:40:46.761 sys 0m14.521s 00:40:46.761 00:24:42 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:46.761 00:24:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:46.761 00:24:42 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:40:46.761 00:24:42 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@264 -- # timing_exit lib 00:40:46.761 00:24:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:46.761 00:24:42 -- common/autotest_common.sh@10 -- # set +x 00:40:46.761 00:24:42 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 
00:40:46.761 00:24:42 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:40:46.761 00:24:42 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:40:46.761 00:24:42 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:40:46.761 00:24:42 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:40:46.761 00:24:42 -- spdk/autotest.sh@379 -- # [[ 1 -eq 1 ]] 00:40:46.761 00:24:42 -- spdk/autotest.sh@380 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:40:46.761 00:24:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:46.761 00:24:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:46.761 00:24:42 -- common/autotest_common.sh@10 -- # set +x 00:40:46.761 ************************************ 00:40:46.761 START TEST blockdev_raid5f 00:40:46.761 ************************************ 00:40:46.761 00:24:42 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:40:46.761 * Looking for test storage... 
00:40:46.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:40:46.761 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:40:46.761 00:24:42 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:40:46.761 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:40:46.761 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=123351 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 123351 00:40:46.762 00:24:42 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:40:46.762 00:24:42 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 123351 ']' 00:40:46.762 00:24:42 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:46.762 00:24:42 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:46.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:46.762 00:24:42 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:46.762 00:24:42 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:46.762 00:24:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:47.020 [2024-07-25 00:24:42.662956] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:40:47.020 [2024-07-25 00:24:42.663101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123351 ] 00:40:47.020 [2024-07-25 00:24:42.826072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:47.279 [2024-07-25 00:24:43.017010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:47.847 00:24:43 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:47.847 00:24:43 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:40:47.847 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:40:47.847 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:40:47.847 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:40:47.847 00:24:43 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.847 00:24:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:47.847 Malloc0 00:40:48.106 Malloc1 00:40:48.106 Malloc2 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.106 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.106 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:40:48.106 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.106 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.106 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.106 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:40:48.106 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:48.106 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:40:48.106 00:24:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.106 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:40:48.107 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": 
[' ' "497f9b24-21fe-4547-ba32-191c83cc2a72"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "497f9b24-21fe-4547-ba32-191c83cc2a72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "497f9b24-21fe-4547-ba32-191c83cc2a72",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "7f43c88b-f92b-4c18-8e13-ac9d252b88f8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7b9776ab-bbb8-4480-a4c3-a3a02e70f510",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "24d42628-b6c9-4bee-ad2a-47bef753c58d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:40:48.107 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:40:48.107 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:40:48.107 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:40:48.107 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:40:48.107 00:24:43 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 123351 00:40:48.107 00:24:43 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 123351 ']' 00:40:48.107 00:24:43 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 123351 00:40:48.107 00:24:43 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:40:48.107 00:24:43 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:48.107 00:24:43 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123351 00:40:48.107 00:24:43 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:48.107 killing process with pid 123351 00:40:48.107 00:24:43 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:48.107 00:24:43 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123351' 00:40:48.107 00:24:43 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 123351 00:40:48.107 00:24:43 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 123351 00:40:50.033 00:24:45 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:50.033 00:24:45 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:40:50.033 00:24:45 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:40:50.033 00:24:45 blockdev_raid5f -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:40:50.033 00:24:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:50.033 ************************************ 00:40:50.033 START TEST bdev_hello_world 00:40:50.033 ************************************ 00:40:50.033 00:24:45 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:40:50.291 [2024-07-25 00:24:45.943916] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:40:50.292 [2024-07-25 00:24:45.944099] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123409 ] 00:40:50.292 [2024-07-25 00:24:46.114749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.550 [2024-07-25 00:24:46.267125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.809 [2024-07-25 00:24:46.647550] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:40:50.809 [2024-07-25 00:24:46.647604] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:40:50.809 [2024-07-25 00:24:46.647639] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:40:50.809 [2024-07-25 00:24:46.648195] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:40:50.809 [2024-07-25 00:24:46.648375] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:40:50.809 [2024-07-25 00:24:46.648417] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:40:50.809 [2024-07-25 00:24:46.648485] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:40:50.809 00:40:50.809 [2024-07-25 00:24:46.648511] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:40:52.183 00:40:52.183 real 0m1.871s 00:40:52.183 user 0m1.528s 00:40:52.183 sys 0m0.234s 00:40:52.183 00:24:47 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:52.183 00:24:47 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:40:52.183 ************************************ 00:40:52.183 END TEST bdev_hello_world 00:40:52.183 ************************************ 00:40:52.184 00:24:47 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:40:52.184 00:24:47 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:52.184 00:24:47 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:52.184 00:24:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:52.184 ************************************ 00:40:52.184 START TEST bdev_bounds 00:40:52.184 ************************************ 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=123447 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:40:52.184 Process bdevio pid: 123447 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 123447' 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 123447 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 123447 ']' 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:52.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:52.184 00:24:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:40:52.184 [2024-07-25 00:24:47.865490] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:40:52.184 [2024-07-25 00:24:47.865685] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123447 ] 00:40:52.184 [2024-07-25 00:24:48.035139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:52.442 [2024-07-25 00:24:48.187549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:52.442 [2024-07-25 00:24:48.187686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:52.442 [2024-07-25 00:24:48.187706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:53.009 00:24:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:53.009 00:24:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:40:53.009 00:24:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:40:53.267 I/O targets: 00:40:53.267 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:40:53.267 00:40:53.267 00:40:53.267 CUnit - A unit testing framework for C - Version 2.1-3 00:40:53.267 http://cunit.sourceforge.net/ 00:40:53.267 00:40:53.267 00:40:53.267 Suite: bdevio tests on: raid5f 00:40:53.267 Test: blockdev write read block ...passed 00:40:53.267 Test: blockdev write zeroes read block ...passed 00:40:53.267 Test: blockdev write zeroes read no split ...passed 00:40:53.267 Test: blockdev write zeroes read split ...passed 00:40:53.267 Test: blockdev write zeroes read split partial ...passed 00:40:53.267 Test: blockdev reset ...passed 00:40:53.267 Test: blockdev write read 8 blocks ...passed 00:40:53.267 Test: blockdev write read size > 128k ...passed 00:40:53.267 Test: blockdev write read invalid size ...passed 00:40:53.267 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:53.267 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:53.267 Test: blockdev write read max offset ...passed 00:40:53.267 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:53.267 Test: blockdev writev readv 8 blocks ...passed 00:40:53.267 Test: blockdev writev readv 30 x 1block ...passed 00:40:53.267 Test: blockdev writev readv block ...passed 00:40:53.267 Test: blockdev writev readv size > 128k ...passed 00:40:53.267 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:53.267 Test: blockdev comparev and writev ...passed 00:40:53.267 Test: blockdev nvme passthru rw ...passed 00:40:53.267 Test: blockdev nvme passthru vendor specific ...passed 00:40:53.267 Test: blockdev nvme admin passthru ...passed 00:40:53.267 Test: blockdev copy ...passed 00:40:53.267 00:40:53.267 Run Summary: Type Total Ran Passed Failed Inactive 00:40:53.267 suites 1 1 n/a 0 0 00:40:53.267 tests 23 23 23 0 0 00:40:53.267 asserts 130 130 130 0 n/a 00:40:53.267 00:40:53.267 Elapsed time = 0.459 seconds 00:40:53.267 0 00:40:53.267 00:24:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 123447 00:40:53.267 00:24:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 123447 ']' 00:40:53.267 00:24:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 123447 00:40:53.267 00:24:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:40:53.267 00:24:49 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:53.267 00:24:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123447 00:40:53.526 killing process with pid 123447 00:40:53.526 00:24:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:53.526 00:24:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:53.526 00:24:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123447' 00:40:53.526 00:24:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 123447 00:40:53.526 00:24:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 123447 00:40:54.460 00:24:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:40:54.460 00:40:54.460 real 0m2.514s 00:40:54.460 user 0m6.048s 00:40:54.460 sys 0m0.350s 00:40:54.460 00:24:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:54.460 00:24:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:40:54.460 ************************************ 00:40:54.460 END TEST bdev_bounds 00:40:54.460 ************************************ 00:40:54.719 00:24:50 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:40:54.719 00:24:50 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:40:54.719 00:24:50 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:54.719 00:24:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:54.719 ************************************ 00:40:54.719 START TEST bdev_nbd 00:40:54.719 ************************************ 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=123501 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 123501 /var/tmp/spdk-nbd.sock 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 123501 ']' 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:54.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:54.719 00:24:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:54.719 [2024-07-25 00:24:50.423590] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:40:54.719 [2024-07-25 00:24:50.423745] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:54.719 [2024-07-25 00:24:50.578133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:54.978 [2024-07-25 00:24:50.730354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:40:55.545 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:55.803 1+0 records in 00:40:55.803 1+0 records out 00:40:55.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020262 s, 20.2 MB/s 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:40:55.803 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:56.061 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:40:56.061 { 00:40:56.061 "nbd_device": "/dev/nbd0", 00:40:56.061 "bdev_name": "raid5f" 00:40:56.061 } 00:40:56.061 ]' 00:40:56.061 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:40:56.061 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:40:56.061 { 00:40:56.061 "nbd_device": "/dev/nbd0", 00:40:56.061 "bdev_name": "raid5f" 00:40:56.061 } 00:40:56.061 ]' 00:40:56.061 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:40:56.061 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:56.061 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:56.061 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:56.061 
00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:56.061 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:56.061 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:56.061 00:24:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:56.319 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:56.319 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:56.319 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:56.319 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:56.319 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:56.319 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:56.319 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:56.319 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:56.319 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:56.319 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:56.319 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:56.578 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:56.578 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:56.578 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:56.578 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:56.578 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:56.578 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:56.579 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:40:56.838 /dev/nbd0 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:56.838 1+0 records in 00:40:56.838 1+0 records out 00:40:56.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301151 s, 13.6 MB/s 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:56.838 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:40:57.097 { 00:40:57.097 "nbd_device": "/dev/nbd0", 00:40:57.097 "bdev_name": "raid5f" 00:40:57.097 } 00:40:57.097 ]' 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:40:57.097 { 00:40:57.097 "nbd_device": "/dev/nbd0", 00:40:57.097 "bdev_name": "raid5f" 00:40:57.097 } 00:40:57.097 ]' 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:40:57.097 256+0 records in 00:40:57.097 256+0 records out 00:40:57.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00752932 s, 139 MB/s 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:40:57.097 256+0 records in 00:40:57.097 256+0 records out 00:40:57.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.040214 s, 26.1 MB/s 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:57.097 00:24:52 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:57.097 00:24:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:40:57.671 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_malloc_create -b malloc_lvol_verify 16 512 00:40:57.934 malloc_lvol_verify 00:40:58.192 00:24:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:40:58.192 dd5bf4d7-21e3-497b-9ac4-f73ae61cbd8f 00:40:58.192 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:40:58.450 0f436822-7d2a-4009-97ae-0a3292305056 00:40:58.450 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:40:58.709 /dev/nbd0 00:40:58.709 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:40:58.709 mke2fs 1.47.0 (5-Feb-2023) 00:40:58.709 00:40:58.709 Filesystem too small for a journal 00:40:58.709 Discarding device blocks: 0/1024 done 00:40:58.709 Creating filesystem with 1024 4k blocks and 1024 inodes 00:40:58.709 00:40:58.709 Allocating group tables: 0/1 done 00:40:58.709 Writing inode tables: 0/1 done 00:40:58.709 Writing superblocks and filesystem accounting information: 0/1 done 00:40:58.709 00:40:58.709 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:40:58.709 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:58.709 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:58.709 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:58.709 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:58.709 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:58.709 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:58.709 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 123501 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 123501 ']' 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 123501 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123501 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:58.968 killing process with pid 123501 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123501' 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 123501 00:40:58.968 00:24:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@974 -- # wait 123501 00:41:00.342 00:24:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:41:00.342 00:41:00.342 real 0m5.523s 00:41:00.342 user 0m8.033s 00:41:00.342 sys 0m1.091s 00:41:00.342 00:24:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:00.342 00:24:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:41:00.342 ************************************ 00:41:00.342 END TEST bdev_nbd 00:41:00.342 ************************************ 00:41:00.342 00:24:55 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:41:00.342 00:24:55 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:41:00.342 00:24:55 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:41:00.342 00:24:55 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:41:00.342 00:24:55 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:00.342 00:24:55 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:00.342 00:24:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:41:00.342 ************************************ 00:41:00.342 START TEST bdev_fio 00:41:00.342 ************************************ 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:41:00.343 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:00.343 00:24:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:41:00.343 ************************************ 00:41:00.343 START TEST bdev_fio_rw_verify 00:41:00.343 ************************************ 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:41:00.343 00:24:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:41:00.343 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:00.343 fio-3.35 00:41:00.343 Starting 1 thread 00:41:12.646 00:41:12.646 job_raid5f: (groupid=0, jobs=1): err= 0: pid=123710: Thu Jul 25 00:25:06 2024 00:41:12.646 read: IOPS=10.7k, BW=41.6MiB/s (43.7MB/s)(416MiB/10001msec) 00:41:12.646 slat (nsec): min=20100, max=92970, avg=22818.11, stdev=4597.96 00:41:12.646 clat (usec): min=11, max=407, avg=148.60, stdev=55.36 00:41:12.646 lat (usec): min=34, max=461, avg=171.41, stdev=56.37 00:41:12.646 clat percentiles (usec): 00:41:12.646 | 50.000th=[ 147], 99.000th=[ 273], 99.900th=[ 318], 99.990th=[ 355], 00:41:12.646 | 99.999th=[ 388] 00:41:12.646 write: IOPS=11.2k, BW=43.8MiB/s (46.0MB/s)(433MiB/9880msec); 0 zone resets 00:41:12.646 slat (usec): min=9, max=273, avg=19.15, stdev= 5.02 00:41:12.646 clat (usec): min=62, max=1231, avg=340.33, stdev=52.64 00:41:12.646 lat (usec): min=78, max=1504, avg=359.49, stdev=54.33 00:41:12.646 clat percentiles (usec): 00:41:12.646 | 50.000th=[ 343], 99.000th=[ 486], 99.900th=[ 586], 99.990th=[ 1037], 00:41:12.646 | 99.999th=[ 1156] 00:41:12.646 bw ( KiB/s): min=42128, max=47384, per=98.78%, avg=44340.63, stdev=1638.53, samples=19 00:41:12.646 iops : min=10532, max=11846, avg=11085.16, stdev=409.63, samples=19 00:41:12.646 lat 
(usec) : 20=0.01%, 50=0.01%, 100=11.28%, 250=38.24%, 500=50.12% 00:41:12.646 lat (usec) : 750=0.34%, 1000=0.01% 00:41:12.646 lat (msec) : 2=0.01% 00:41:12.646 cpu : usr=99.53%, sys=0.46%, ctx=69, majf=0, minf=8950 00:41:12.646 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:12.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.646 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.646 issued rwts: total=106602,110875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.646 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:12.646 00:41:12.646 Run status group 0 (all jobs): 00:41:12.646 READ: bw=41.6MiB/s (43.7MB/s), 41.6MiB/s-41.6MiB/s (43.7MB/s-43.7MB/s), io=416MiB (437MB), run=10001-10001msec 00:41:12.646 WRITE: bw=43.8MiB/s (46.0MB/s), 43.8MiB/s-43.8MiB/s (46.0MB/s-46.0MB/s), io=433MiB (454MB), run=9880-9880msec 00:41:12.646 ----------------------------------------------------- 00:41:12.646 Suppressions used: 00:41:12.646 count bytes template 00:41:12.646 1 7 /usr/src/fio/parse.c 00:41:12.646 946 90816 /usr/src/fio/iolog.c 00:41:12.646 1 904 libcrypto.so 00:41:12.646 ----------------------------------------------------- 00:41:12.647 00:41:12.647 00:41:12.647 real 0m12.221s 00:41:12.647 user 0m13.027s 00:41:12.647 sys 0m0.596s 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:12.647 ************************************ 00:41:12.647 END TEST bdev_fio_rw_verify 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:41:12.647 ************************************ 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
rw=trimwrite 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "497f9b24-21fe-4547-ba32-191c83cc2a72"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "497f9b24-21fe-4547-ba32-191c83cc2a72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "497f9b24-21fe-4547-ba32-191c83cc2a72",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "7f43c88b-f92b-4c18-8e13-ac9d252b88f8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7b9776ab-bbb8-4480-a4c3-a3a02e70f510",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "24d42628-b6c9-4bee-ad2a-47bef753c58d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:41:12.647 /home/vagrant/spdk_repo/spdk 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:41:12.647 00:41:12.647 real 0m12.354s 00:41:12.647 user 0m13.070s 00:41:12.647 sys 0m0.683s 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:12.647 ************************************ 00:41:12.647 END TEST bdev_fio 00:41:12.647 ************************************ 00:41:12.647 00:25:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:41:12.647 00:25:08 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:41:12.647 00:25:08 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:41:12.647 00:25:08 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:41:12.647 00:25:08 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:12.647 00:25:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:41:12.647 ************************************ 00:41:12.647 START TEST bdev_verify 00:41:12.647 
************************************ 00:41:12.647 00:25:08 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:41:12.647 [2024-07-25 00:25:08.408614] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:41:12.647 [2024-07-25 00:25:08.408789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123867 ] 00:41:12.905 [2024-07-25 00:25:08.579434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:12.905 [2024-07-25 00:25:08.733505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:12.905 [2024-07-25 00:25:08.733528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:13.471 Running I/O for 5 seconds... 00:41:18.734 00:41:18.734 Latency(us) 00:41:18.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:18.734 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:18.734 Verification LBA range: start 0x0 length 0x2000 00:41:18.734 raid5f : 5.01 8273.75 32.32 0.00 0.00 23262.31 199.21 21805.61 00:41:18.734 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:18.734 Verification LBA range: start 0x2000 length 0x2000 00:41:18.734 raid5f : 5.01 8262.50 32.28 0.00 0.00 23224.46 207.59 21209.83 00:41:18.734 =================================================================================================================== 00:41:18.734 Total : 16536.25 64.59 0.00 0.00 23243.41 199.21 21805.61 00:41:19.668 00:41:19.669 real 0m6.928s 00:41:19.669 user 0m12.755s 00:41:19.669 sys 0m0.242s 00:41:19.669 00:25:15 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:19.669 ************************************ 00:41:19.669 END TEST bdev_verify 00:41:19.669 ************************************ 00:41:19.669 00:25:15 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:41:19.669 00:25:15 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:41:19.669 00:25:15 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:41:19.669 00:25:15 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:19.669 00:25:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:41:19.669 ************************************ 00:41:19.669 START TEST bdev_verify_big_io 00:41:19.669 ************************************ 00:41:19.669 00:25:15 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:41:19.669 [2024-07-25 00:25:15.392151] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 
00:41:19.669 [2024-07-25 00:25:15.392399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123948 ] 00:41:19.927 [2024-07-25 00:25:15.566336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:19.927 [2024-07-25 00:25:15.714156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.927 [2024-07-25 00:25:15.714175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:20.493 Running I/O for 5 seconds... 00:41:25.760 00:41:25.760 Latency(us) 00:41:25.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:25.760 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:41:25.760 Verification LBA range: start 0x0 length 0x200 00:41:25.760 raid5f : 5.26 482.77 30.17 0.00 0.00 6528494.53 279.27 287881.77 00:41:25.760 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:41:25.760 Verification LBA range: start 0x200 length 0x200 00:41:25.760 raid5f : 5.28 480.47 30.03 0.00 0.00 6596204.35 146.15 289788.28 00:41:25.760 =================================================================================================================== 00:41:25.760 Total : 963.24 60.20 0.00 0.00 6562349.44 146.15 289788.28 00:41:26.695 00:41:26.695 real 0m7.217s 00:41:26.695 user 0m13.344s 00:41:26.695 sys 0m0.240s 00:41:26.695 00:25:22 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:26.695 00:25:22 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:41:26.695 ************************************ 00:41:26.695 END TEST bdev_verify_big_io 00:41:26.695 ************************************ 00:41:26.954 00:25:22 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:26.954 00:25:22 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:41:26.954 00:25:22 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:26.954 00:25:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:41:26.954 ************************************ 00:41:26.954 START TEST bdev_write_zeroes 00:41:26.954 ************************************ 00:41:26.954 00:25:22 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:26.954 [2024-07-25 00:25:22.644384] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:41:26.954 [2024-07-25 00:25:22.644550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124042 ] 00:41:26.954 [2024-07-25 00:25:22.797780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.212 [2024-07-25 00:25:22.951071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:27.471 Running I/O for 1 seconds... 
00:41:28.844 00:41:28.844 Latency(us) 00:41:28.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:28.844 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:41:28.844 raid5f : 1.01 25568.36 99.88 0.00 0.00 4990.64 1459.67 7298.33 00:41:28.844 =================================================================================================================== 00:41:28.844 Total : 25568.36 99.88 0.00 0.00 4990.64 1459.67 7298.33 00:41:29.778 00:41:29.778 real 0m2.832s 00:41:29.778 user 0m2.522s 00:41:29.778 sys 0m0.200s 00:41:29.778 00:25:25 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:29.778 00:25:25 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:41:29.778 ************************************ 00:41:29.778 END TEST bdev_write_zeroes 00:41:29.778 ************************************ 00:41:29.778 00:25:25 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:29.778 00:25:25 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:41:29.778 00:25:25 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:29.778 00:25:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:41:29.778 ************************************ 00:41:29.778 START TEST bdev_json_nonenclosed 00:41:29.778 ************************************ 00:41:29.778 00:25:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:29.778 [2024-07-25 00:25:25.543203] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:41:29.778 [2024-07-25 00:25:25.543394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124081 ] 00:41:30.036 [2024-07-25 00:25:25.714000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:30.036 [2024-07-25 00:25:25.864768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.036 [2024-07-25 00:25:25.864900] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:41:30.036 [2024-07-25 00:25:25.864926] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:41:30.036 [2024-07-25 00:25:25.864941] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:30.603 00:41:30.603 real 0m0.721s 00:41:30.603 user 0m0.497s 00:41:30.603 sys 0m0.123s 00:41:30.603 00:25:26 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:30.603 00:25:26 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:41:30.603 ************************************ 00:41:30.603 END TEST bdev_json_nonenclosed 00:41:30.603 ************************************ 00:41:30.603 00:25:26 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:30.603 00:25:26 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:41:30.603 00:25:26 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:30.603 00:25:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:41:30.603 ************************************ 00:41:30.603 START TEST bdev_json_nonarray 00:41:30.603 ************************************ 00:41:30.603 00:25:26 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:30.603 [2024-07-25 00:25:26.313447] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.03.0 initialization... 00:41:30.603 [2024-07-25 00:25:26.313625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124111 ] 00:41:30.861 [2024-07-25 00:25:26.486100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:30.861 [2024-07-25 00:25:26.636404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.861 [2024-07-25 00:25:26.636521] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:41:30.861 [2024-07-25 00:25:26.636545] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:41:30.861 [2024-07-25 00:25:26.636560] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:31.119 00:41:31.119 real 0m0.727s 00:41:31.119 user 0m0.501s 00:41:31.119 sys 0m0.126s 00:41:31.119 00:25:26 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:31.119 00:25:26 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:41:31.119 ************************************ 00:41:31.119 END TEST bdev_json_nonarray 00:41:31.119 ************************************ 00:41:31.377 00:25:27 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:41:31.377 00:25:27 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:41:31.377 00:25:27 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:41:31.377 00:25:27 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:41:31.377 00:25:27 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:41:31.377 00:25:27 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:41:31.377 00:25:27 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:41:31.377 00:25:27 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:41:31.377 00:25:27 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:41:31.377 00:25:27 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:41:31.377 00:25:27 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:41:31.377 00:41:31.377 real 0m44.517s 00:41:31.377 user 1m1.848s 00:41:31.377 sys 0m4.105s 00:41:31.377 00:25:27 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:31.377 00:25:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:41:31.377 ************************************ 00:41:31.377 END TEST blockdev_raid5f 00:41:31.377 ************************************ 00:41:31.377 00:25:27 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:41:31.377 00:25:27 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:41:31.377 00:25:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:31.377 00:25:27 -- common/autotest_common.sh@10 -- # set +x 00:41:31.377 00:25:27 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:41:31.377 00:25:27 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:41:31.377 00:25:27 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:41:31.377 00:25:27 -- common/autotest_common.sh@10 -- # set +x 00:41:33.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:41:33.279 Waiting for block devices as requested 00:41:33.279 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:41:33.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:41:33.796 Cleaning 00:41:33.796 Removing: /var/run/dpdk/spdk0/config 00:41:33.796 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:33.796 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:33.796 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:33.796 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:33.796 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:33.796 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:33.796 Removing: 
/dev/shm/spdk_tgt_trace.pid68930 00:41:33.796 Removing: /var/run/dpdk/spdk0 00:41:33.796 Removing: /var/run/dpdk/spdk_pid100887 00:41:33.796 Removing: /var/run/dpdk/spdk_pid101394 00:41:33.796 Removing: /var/run/dpdk/spdk_pid103960 00:41:33.796 Removing: /var/run/dpdk/spdk_pid104612 00:41:33.796 Removing: /var/run/dpdk/spdk_pid105104 00:41:33.796 Removing: /var/run/dpdk/spdk_pid107879 00:41:33.796 Removing: /var/run/dpdk/spdk_pid108632 00:41:33.796 Removing: /var/run/dpdk/spdk_pid109208 00:41:33.796 Removing: /var/run/dpdk/spdk_pid110431 00:41:33.796 Removing: /var/run/dpdk/spdk_pid110891 00:41:33.796 Removing: /var/run/dpdk/spdk_pid112014 00:41:33.796 Removing: /var/run/dpdk/spdk_pid112479 00:41:33.796 Removing: /var/run/dpdk/spdk_pid113594 00:41:33.796 Removing: /var/run/dpdk/spdk_pid114057 00:41:33.796 Removing: /var/run/dpdk/spdk_pid114812 00:41:33.796 Removing: /var/run/dpdk/spdk_pid114853 00:41:33.796 Removing: /var/run/dpdk/spdk_pid114896 00:41:33.796 Removing: /var/run/dpdk/spdk_pid114950 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115071 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115213 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115422 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115680 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115693 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115735 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115760 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115780 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115812 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115831 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115851 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115881 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115906 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115926 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115951 00:41:33.796 Removing: /var/run/dpdk/spdk_pid115980 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116000 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116026 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116051 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116071 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116100 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116119 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116145 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116182 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116207 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116241 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116311 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116350 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116366 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116407 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116427 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116447 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116490 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116518 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116551 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116571 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116586 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116606 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116620 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116640 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116655 00:41:33.796 Removing: /var/run/dpdk/spdk_pid116675 00:41:34.054 Removing: /var/run/dpdk/spdk_pid116708 00:41:34.054 Removing: /var/run/dpdk/spdk_pid116747 00:41:34.054 Removing: /var/run/dpdk/spdk_pid116764 00:41:34.054 Removing: /var/run/dpdk/spdk_pid116808 00:41:34.054 Removing: /var/run/dpdk/spdk_pid116824 00:41:34.054 Removing: /var/run/dpdk/spdk_pid116844 00:41:34.054 Removing: 
/var/run/dpdk/spdk_pid116891 00:41:34.055 Removing: /var/run/dpdk/spdk_pid116915 00:41:34.055 Removing: /var/run/dpdk/spdk_pid116952 00:41:34.055 Removing: /var/run/dpdk/spdk_pid116973 00:41:34.055 Removing: /var/run/dpdk/spdk_pid116987 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117007 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117021 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117036 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117055 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117069 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117150 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117227 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117359 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117375 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117420 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117474 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117502 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117529 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117550 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117592 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117617 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117693 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117745 00:41:34.055 Removing: /var/run/dpdk/spdk_pid117784 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118018 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118131 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118165 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118251 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118321 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118359 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118584 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118671 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118750 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118798 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118828 00:41:34.055 Removing: /var/run/dpdk/spdk_pid118900 00:41:34.055 Removing: /var/run/dpdk/spdk_pid119284 00:41:34.055 Removing: /var/run/dpdk/spdk_pid119321 00:41:34.055 Removing: /var/run/dpdk/spdk_pid119603 00:41:34.055 Removing: /var/run/dpdk/spdk_pid119686 00:41:34.055 Removing: /var/run/dpdk/spdk_pid119787 00:41:34.055 Removing: /var/run/dpdk/spdk_pid119835 00:41:34.055 Removing: /var/run/dpdk/spdk_pid119859 00:41:34.055 Removing: /var/run/dpdk/spdk_pid119886 00:41:34.055 Removing: /var/run/dpdk/spdk_pid121036 00:41:34.055 Removing: /var/run/dpdk/spdk_pid121159 00:41:34.055 Removing: /var/run/dpdk/spdk_pid121163 00:41:34.055 Removing: /var/run/dpdk/spdk_pid121180 00:41:34.055 Removing: /var/run/dpdk/spdk_pid121618 00:41:34.055 Removing: /var/run/dpdk/spdk_pid121702 00:41:34.055 Removing: /var/run/dpdk/spdk_pid122523 00:41:34.055 Removing: /var/run/dpdk/spdk_pid123351 00:41:34.055 Removing: /var/run/dpdk/spdk_pid123409 00:41:34.055 Removing: /var/run/dpdk/spdk_pid123447 00:41:34.055 Removing: /var/run/dpdk/spdk_pid123705 00:41:34.055 Removing: /var/run/dpdk/spdk_pid123867 00:41:34.055 Removing: /var/run/dpdk/spdk_pid123948 00:41:34.055 Removing: /var/run/dpdk/spdk_pid124042 00:41:34.055 Removing: /var/run/dpdk/spdk_pid124081 00:41:34.055 Removing: /var/run/dpdk/spdk_pid124111 00:41:34.055 Removing: /var/run/dpdk/spdk_pid68730 00:41:34.055 Removing: /var/run/dpdk/spdk_pid68930 00:41:34.055 Removing: /var/run/dpdk/spdk_pid69140 00:41:34.055 Removing: /var/run/dpdk/spdk_pid69239 00:41:34.055 Removing: /var/run/dpdk/spdk_pid69284 00:41:34.055 Removing: /var/run/dpdk/spdk_pid69407 00:41:34.055 Removing: /var/run/dpdk/spdk_pid69425 00:41:34.055 Removing: /var/run/dpdk/spdk_pid69572 00:41:34.055 Removing: 
/var/run/dpdk/spdk_pid69818 00:41:34.055 Removing: /var/run/dpdk/spdk_pid69990 00:41:34.055 Removing: /var/run/dpdk/spdk_pid70077 00:41:34.055 Removing: /var/run/dpdk/spdk_pid70171 00:41:34.313 Removing: /var/run/dpdk/spdk_pid70274 00:41:34.313 Removing: /var/run/dpdk/spdk_pid70363 00:41:34.313 Removing: /var/run/dpdk/spdk_pid70408 00:41:34.313 Removing: /var/run/dpdk/spdk_pid70450 00:41:34.313 Removing: /var/run/dpdk/spdk_pid70513 00:41:34.313 Removing: /var/run/dpdk/spdk_pid70619 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71112 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71176 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71249 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71265 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71396 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71412 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71542 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71558 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71633 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71652 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71711 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71729 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71913 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71955 00:41:34.313 Removing: /var/run/dpdk/spdk_pid71991 00:41:34.313 Removing: /var/run/dpdk/spdk_pid72068 00:41:34.313 Removing: /var/run/dpdk/spdk_pid72233 00:41:34.313 Removing: /var/run/dpdk/spdk_pid72314 00:41:34.313 Removing: /var/run/dpdk/spdk_pid72362 00:41:34.314 Removing: /var/run/dpdk/spdk_pid73562 00:41:34.314 Removing: /var/run/dpdk/spdk_pid73756 00:41:34.314 Removing: /var/run/dpdk/spdk_pid73942 00:41:34.314 Removing: /var/run/dpdk/spdk_pid74057 00:41:34.314 Removing: /var/run/dpdk/spdk_pid74183 00:41:34.314 Removing: /var/run/dpdk/spdk_pid74247 00:41:34.314 Removing: /var/run/dpdk/spdk_pid74278 00:41:34.314 Removing: /var/run/dpdk/spdk_pid74309 00:41:34.314 Removing: /var/run/dpdk/spdk_pid74727 00:41:34.314 Removing: /var/run/dpdk/spdk_pid74810 00:41:34.314 Removing: /var/run/dpdk/spdk_pid74911 00:41:34.314 Removing: /var/run/dpdk/spdk_pid74969 00:41:34.314 Removing: /var/run/dpdk/spdk_pid76515 00:41:34.314 Removing: /var/run/dpdk/spdk_pid76843 00:41:34.314 Removing: /var/run/dpdk/spdk_pid77020 00:41:34.314 Removing: /var/run/dpdk/spdk_pid77864 00:41:34.314 Removing: /var/run/dpdk/spdk_pid78201 00:41:34.314 Removing: /var/run/dpdk/spdk_pid78372 00:41:34.314 Removing: /var/run/dpdk/spdk_pid79220 00:41:34.314 Removing: /var/run/dpdk/spdk_pid79707 00:41:34.314 Removing: /var/run/dpdk/spdk_pid79878 00:41:34.314 Removing: /var/run/dpdk/spdk_pid81834 00:41:34.314 Removing: /var/run/dpdk/spdk_pid82258 00:41:34.314 Removing: /var/run/dpdk/spdk_pid82443 00:41:34.314 Removing: /var/run/dpdk/spdk_pid84375 00:41:34.314 Removing: /var/run/dpdk/spdk_pid84826 00:41:34.314 Removing: /var/run/dpdk/spdk_pid85013 00:41:34.314 Removing: /var/run/dpdk/spdk_pid86962 00:41:34.314 Removing: /var/run/dpdk/spdk_pid87629 00:41:34.314 Removing: /var/run/dpdk/spdk_pid87819 00:41:34.314 Removing: /var/run/dpdk/spdk_pid90004 00:41:34.314 Removing: /var/run/dpdk/spdk_pid90501 00:41:34.314 Removing: /var/run/dpdk/spdk_pid90694 00:41:34.314 Removing: /var/run/dpdk/spdk_pid92852 00:41:34.314 Removing: /var/run/dpdk/spdk_pid93341 00:41:34.314 Removing: /var/run/dpdk/spdk_pid93531 00:41:34.314 Removing: /var/run/dpdk/spdk_pid95696 00:41:34.314 Removing: /var/run/dpdk/spdk_pid96464 00:41:34.314 Removing: /var/run/dpdk/spdk_pid96653 00:41:34.314 Removing: /var/run/dpdk/spdk_pid96840 00:41:34.314 Removing: /var/run/dpdk/spdk_pid97347 
00:41:34.314 Removing: /var/run/dpdk/spdk_pid98216 00:41:34.314 Removing: /var/run/dpdk/spdk_pid98658 00:41:34.314 Removing: /var/run/dpdk/spdk_pid99457 00:41:34.314 Removing: /var/run/dpdk/spdk_pid99990 00:41:34.314 Clean 00:41:34.572 00:25:30 -- common/autotest_common.sh@1451 -- # return 0 00:41:34.572 00:25:30 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:41:34.572 00:25:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:34.572 00:25:30 -- common/autotest_common.sh@10 -- # set +x 00:41:34.572 00:25:30 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:41:34.572 00:25:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:34.572 00:25:30 -- common/autotest_common.sh@10 -- # set +x 00:41:34.572 00:25:30 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:41:34.572 00:25:30 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:41:34.572 00:25:30 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:41:34.572 00:25:30 -- spdk/autotest.sh@395 -- # hash lcov 00:41:34.572 00:25:30 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:41:34.572 00:25:30 -- spdk/autotest.sh@397 -- # hostname 00:41:34.572 00:25:30 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:41:34.830 geninfo: WARNING: invalid characters removed from testname! 00:42:31.042 00:26:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:42:31.042 00:26:23 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:42:31.326 00:26:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:42:34.626 00:26:30 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:42:37.912 00:26:33 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:42:40.444 00:26:36 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:42:43.729 00:26:39 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:42:43.730 00:26:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:43.730 00:26:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:42:43.730 00:26:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:43.730 00:26:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:43.730 00:26:39 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:43.730 00:26:39 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:43.730 00:26:39 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:43.730 00:26:39 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:43.730 00:26:39 -- paths/export.sh@6 -- $ export PATH 00:42:43.730 00:26:39 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:43.730 00:26:39 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:42:43.730 00:26:39 -- common/autobuild_common.sh@447 -- $ date +%s 00:42:43.730 00:26:39 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721867199.XXXXXX 00:42:43.730 00:26:39 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721867199.8hULwu 00:42:43.730 00:26:39 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:42:43.730 00:26:39 -- common/autobuild_common.sh@453 -- $ '[' -n '' 
']' 00:42:43.730 00:26:39 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:42:43.730 00:26:39 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:42:43.730 00:26:39 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:42:43.730 00:26:39 -- common/autobuild_common.sh@463 -- $ get_config_params 00:42:43.730 00:26:39 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:42:43.730 00:26:39 -- common/autotest_common.sh@10 -- $ set +x 00:42:43.730 00:26:39 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:42:43.730 00:26:39 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:42:43.730 00:26:39 -- pm/common@17 -- $ local monitor 00:42:43.730 00:26:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:43.730 00:26:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:43.730 00:26:39 -- pm/common@25 -- $ sleep 1 00:42:43.730 00:26:39 -- pm/common@21 -- $ date +%s 00:42:43.730 00:26:39 -- pm/common@21 -- $ date +%s 00:42:43.730 00:26:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721867199 00:42:43.730 00:26:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721867199 00:42:43.730 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721867199_collect-vmstat.pm.log 00:42:43.730 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721867199_collect-cpu-load.pm.log 00:42:44.667 00:26:40 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:42:44.667 00:26:40 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:42:44.667 00:26:40 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:42:44.667 00:26:40 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:42:44.667 00:26:40 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:42:44.667 00:26:40 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:42:44.667 00:26:40 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:42:44.667 00:26:40 -- common/autotest_common.sh@724 -- $ xtrace_disable 00:42:44.667 00:26:40 -- common/autotest_common.sh@10 -- $ set +x 00:42:44.667 00:26:40 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:42:44.667 00:26:40 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:42:44.667 00:26:40 -- spdk/autopackage.sh@40 -- $ get_config_params 00:42:44.667 00:26:40 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:42:44.667 00:26:40 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:42:44.667 00:26:40 -- common/autotest_common.sh@10 -- $ set +x 00:42:44.667 00:26:40 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:42:44.667 00:26:40 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror 
--with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --enable-lto --disable-unit-tests 00:42:44.667 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:42:44.667 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:42:44.925 Using 'verbs' RDMA provider 00:42:58.066 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:43:10.275 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:43:10.275 Creating mk/config.mk...done. 00:43:10.275 Creating mk/cc.flags.mk...done. 00:43:10.275 Type 'make' to build. 00:43:10.275 00:27:04 -- spdk/autopackage.sh@43 -- $ make -j10 00:43:10.275 make[1]: Nothing to be done for 'all'. 00:43:14.472 The Meson build system 00:43:14.472 Version: 1.4.1 00:43:14.472 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:43:14.472 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:43:14.472 Build type: native build 00:43:14.472 Program cat found: YES (/usr/bin/cat) 00:43:14.472 Project name: DPDK 00:43:14.472 Project version: 24.03.0 00:43:14.472 C compiler for the host machine: cc (gcc 13.2.0 "cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0") 00:43:14.472 C linker for the host machine: cc ld.bfd 2.42 00:43:14.472 Host machine cpu family: x86_64 00:43:14.472 Host machine cpu: x86_64 00:43:14.472 Message: ## Building in Developer Mode ## 00:43:14.472 Program pkg-config found: YES (/usr/bin/pkg-config) 00:43:14.472 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:43:14.472 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:43:14.472 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3) 00:43:14.472 Program cat found: YES (/usr/bin/cat) 00:43:14.472 Compiler for C supports arguments -march=native: YES 00:43:14.472 Checking for size of "void *" : 8 00:43:14.472 Checking for size of "void *" : 8 (cached) 00:43:14.472 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:43:14.472 Library m found: YES 00:43:14.472 Library numa found: YES 00:43:14.472 Has header "numaif.h" : YES 00:43:14.472 Library fdt found: NO 00:43:14.472 Library execinfo found: NO 00:43:14.472 Has header "execinfo.h" : YES 00:43:14.472 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1 00:43:14.472 Run-time dependency libarchive found: NO (tried pkgconfig) 00:43:14.472 Run-time dependency libbsd found: NO (tried pkgconfig) 00:43:14.472 Run-time dependency jansson found: NO (tried pkgconfig) 00:43:14.472 Run-time dependency openssl found: YES 3.0.13 00:43:14.472 Run-time dependency libpcap found: NO (tried pkgconfig) 00:43:14.472 Library pcap found: NO 00:43:14.472 Compiler for C supports arguments -Wcast-qual: YES 00:43:14.472 Compiler for C supports arguments -Wdeprecated: YES 00:43:14.472 Compiler for C supports arguments -Wformat: YES 00:43:14.472 Compiler for C supports arguments -Wformat-nonliteral: YES 00:43:14.472 Compiler for C supports arguments -Wformat-security: YES 00:43:14.472 Compiler for C supports arguments -Wmissing-declarations: YES 00:43:14.472 Compiler for C supports arguments -Wmissing-prototypes: YES 00:43:14.472 Compiler for C supports arguments -Wnested-externs: YES 00:43:14.472 Compiler for C supports arguments -Wold-style-definition: YES 00:43:14.472 Compiler for C supports arguments -Wpointer-arith: YES 00:43:14.472 Compiler for C supports 
arguments -Wsign-compare: YES 00:43:14.472 Compiler for C supports arguments -Wstrict-prototypes: YES 00:43:14.472 Compiler for C supports arguments -Wundef: YES 00:43:14.472 Compiler for C supports arguments -Wwrite-strings: YES 00:43:14.472 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:43:14.472 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:43:14.472 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:43:14.472 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:43:14.472 Program objdump found: YES (/usr/bin/objdump) 00:43:14.472 Compiler for C supports arguments -mavx512f: YES 00:43:14.472 Checking if "AVX512 checking" compiles: YES 00:43:14.472 Fetching value of define "__SSE4_2__" : 1 00:43:14.472 Fetching value of define "__AES__" : 1 00:43:14.472 Fetching value of define "__AVX__" : 1 00:43:14.472 Fetching value of define "__AVX2__" : 1 00:43:14.472 Fetching value of define "__AVX512BW__" : (undefined) 00:43:14.472 Fetching value of define "__AVX512CD__" : (undefined) 00:43:14.472 Fetching value of define "__AVX512DQ__" : (undefined) 00:43:14.473 Fetching value of define "__AVX512F__" : (undefined) 00:43:14.473 Fetching value of define "__AVX512VL__" : (undefined) 00:43:14.473 Fetching value of define "__PCLMUL__" : 1 00:43:14.473 Fetching value of define "__RDRND__" : 1 00:43:14.473 Fetching value of define "__RDSEED__" : 1 00:43:14.473 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:43:14.473 Fetching value of define "__znver1__" : (undefined) 00:43:14.473 Fetching value of define "__znver2__" : (undefined) 00:43:14.473 Fetching value of define "__znver3__" : (undefined) 00:43:14.473 Fetching value of define "__znver4__" : (undefined) 00:43:14.473 Compiler for C supports arguments -ffat-lto-objects: YES 00:43:14.473 Library asan found: YES 00:43:14.473 Compiler for C supports arguments -Wno-format-truncation: YES 00:43:14.473 Message: lib/log: Defining dependency "log" 00:43:14.473 Message: lib/kvargs: Defining dependency "kvargs" 00:43:14.473 Message: lib/telemetry: Defining dependency "telemetry" 00:43:14.473 Library rt found: YES 00:43:14.473 Checking for function "getentropy" : NO 00:43:14.473 Message: lib/eal: Defining dependency "eal" 00:43:14.473 Message: lib/ring: Defining dependency "ring" 00:43:14.473 Message: lib/rcu: Defining dependency "rcu" 00:43:14.473 Message: lib/mempool: Defining dependency "mempool" 00:43:14.473 Message: lib/mbuf: Defining dependency "mbuf" 00:43:14.473 Fetching value of define "__PCLMUL__" : 1 (cached) 00:43:14.473 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:43:14.473 Compiler for C supports arguments -mpclmul: YES 00:43:14.473 Compiler for C supports arguments -maes: YES 00:43:14.473 Compiler for C supports arguments -mavx512f: YES (cached) 00:43:14.473 Compiler for C supports arguments -mavx512bw: YES 00:43:14.473 Compiler for C supports arguments -mavx512dq: YES 00:43:14.473 Compiler for C supports arguments -mavx512vl: YES 00:43:14.473 Compiler for C supports arguments -mvpclmulqdq: YES 00:43:14.473 Compiler for C supports arguments -mavx2: YES 00:43:14.473 Compiler for C supports arguments -mavx: YES 00:43:14.473 Message: lib/net: Defining dependency "net" 00:43:14.473 Message: lib/meter: Defining dependency "meter" 00:43:14.473 Message: lib/ethdev: Defining dependency "ethdev" 00:43:14.473 Message: lib/pci: Defining dependency "pci" 00:43:14.473 Message: lib/cmdline: Defining dependency "cmdline" 00:43:14.473 Message: 
lib/hash: Defining dependency "hash" 00:43:14.473 Message: lib/timer: Defining dependency "timer" 00:43:14.473 Message: lib/compressdev: Defining dependency "compressdev" 00:43:14.473 Message: lib/cryptodev: Defining dependency "cryptodev" 00:43:14.473 Message: lib/dmadev: Defining dependency "dmadev" 00:43:14.473 Compiler for C supports arguments -Wno-cast-qual: YES 00:43:14.473 Message: lib/power: Defining dependency "power" 00:43:14.473 Message: lib/reorder: Defining dependency "reorder" 00:43:14.473 Message: lib/security: Defining dependency "security" 00:43:14.473 Has header "linux/userfaultfd.h" : YES 00:43:14.473 Has header "linux/vduse.h" : YES 00:43:14.473 Message: lib/vhost: Defining dependency "vhost" 00:43:14.473 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:43:14.473 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:43:14.473 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:43:14.473 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:43:14.473 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:43:14.473 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:43:14.473 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:43:14.473 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:43:14.473 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:43:14.473 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:43:14.473 Program doxygen found: YES (/usr/bin/doxygen) 00:43:14.473 Configuring doxy-api-html.conf using configuration 00:43:14.473 Configuring doxy-api-man.conf using configuration 00:43:14.473 Program mandb found: YES (/usr/bin/mandb) 00:43:14.473 Program sphinx-build found: NO 00:43:14.473 Configuring rte_build_config.h using configuration 00:43:14.473 Message: 00:43:14.473 ================= 00:43:14.473 Applications Enabled 00:43:14.473 ================= 00:43:14.473 00:43:14.473 apps: 00:43:14.473 00:43:14.473 00:43:14.473 Message: 00:43:14.473 ================= 00:43:14.473 Libraries Enabled 00:43:14.473 ================= 00:43:14.473 00:43:14.473 libs: 00:43:14.473 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:43:14.473 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:43:14.473 cryptodev, dmadev, power, reorder, security, vhost, 00:43:14.473 00:43:14.473 Message: 00:43:14.473 =============== 00:43:14.473 Drivers Enabled 00:43:14.473 =============== 00:43:14.473 00:43:14.473 common: 00:43:14.473 00:43:14.473 bus: 00:43:14.473 pci, vdev, 00:43:14.473 mempool: 00:43:14.473 ring, 00:43:14.473 dma: 00:43:14.473 00:43:14.473 net: 00:43:14.473 00:43:14.473 crypto: 00:43:14.473 00:43:14.473 compress: 00:43:14.473 00:43:14.473 vdpa: 00:43:14.473 00:43:14.473 00:43:14.473 Message: 00:43:14.473 ================= 00:43:14.473 Content Skipped 00:43:14.473 ================= 00:43:14.473 00:43:14.473 apps: 00:43:14.473 dumpcap: explicitly disabled via build config 00:43:14.473 graph: explicitly disabled via build config 00:43:14.473 pdump: explicitly disabled via build config 00:43:14.473 proc-info: explicitly disabled via build config 00:43:14.473 test-acl: explicitly disabled via build config 00:43:14.473 test-bbdev: explicitly disabled via build config 00:43:14.473 test-cmdline: explicitly disabled via build config 00:43:14.473 test-compress-perf: explicitly disabled via build config 00:43:14.473 test-crypto-perf: explicitly disabled via build config 
00:43:14.473 test-dma-perf: explicitly disabled via build config 00:43:14.473 test-eventdev: explicitly disabled via build config 00:43:14.473 test-fib: explicitly disabled via build config 00:43:14.473 test-flow-perf: explicitly disabled via build config 00:43:14.473 test-gpudev: explicitly disabled via build config 00:43:14.473 test-mldev: explicitly disabled via build config 00:43:14.473 test-pipeline: explicitly disabled via build config 00:43:14.473 test-pmd: explicitly disabled via build config 00:43:14.473 test-regex: explicitly disabled via build config 00:43:14.473 test-sad: explicitly disabled via build config 00:43:14.473 test-security-perf: explicitly disabled via build config 00:43:14.473 00:43:14.473 libs: 00:43:14.473 argparse: explicitly disabled via build config 00:43:14.473 metrics: explicitly disabled via build config 00:43:14.473 acl: explicitly disabled via build config 00:43:14.473 bbdev: explicitly disabled via build config 00:43:14.473 bitratestats: explicitly disabled via build config 00:43:14.473 bpf: explicitly disabled via build config 00:43:14.473 cfgfile: explicitly disabled via build config 00:43:14.473 distributor: explicitly disabled via build config 00:43:14.473 efd: explicitly disabled via build config 00:43:14.473 eventdev: explicitly disabled via build config 00:43:14.473 dispatcher: explicitly disabled via build config 00:43:14.473 gpudev: explicitly disabled via build config 00:43:14.473 gro: explicitly disabled via build config 00:43:14.473 gso: explicitly disabled via build config 00:43:14.473 ip_frag: explicitly disabled via build config 00:43:14.473 jobstats: explicitly disabled via build config 00:43:14.473 latencystats: explicitly disabled via build config 00:43:14.473 lpm: explicitly disabled via build config 00:43:14.473 member: explicitly disabled via build config 00:43:14.473 pcapng: explicitly disabled via build config 00:43:14.473 rawdev: explicitly disabled via build config 00:43:14.473 regexdev: explicitly disabled via build config 00:43:14.473 mldev: explicitly disabled via build config 00:43:14.473 rib: explicitly disabled via build config 00:43:14.473 sched: explicitly disabled via build config 00:43:14.473 stack: explicitly disabled via build config 00:43:14.473 ipsec: explicitly disabled via build config 00:43:14.473 pdcp: explicitly disabled via build config 00:43:14.473 fib: explicitly disabled via build config 00:43:14.473 port: explicitly disabled via build config 00:43:14.474 pdump: explicitly disabled via build config 00:43:14.474 table: explicitly disabled via build config 00:43:14.474 pipeline: explicitly disabled via build config 00:43:14.474 graph: explicitly disabled via build config 00:43:14.474 node: explicitly disabled via build config 00:43:14.474 00:43:14.474 drivers: 00:43:14.474 common/cpt: not in enabled drivers build config 00:43:14.474 common/dpaax: not in enabled drivers build config 00:43:14.474 common/iavf: not in enabled drivers build config 00:43:14.474 common/idpf: not in enabled drivers build config 00:43:14.474 common/ionic: not in enabled drivers build config 00:43:14.474 common/mvep: not in enabled drivers build config 00:43:14.474 common/octeontx: not in enabled drivers build config 00:43:14.474 bus/auxiliary: not in enabled drivers build config 00:43:14.474 bus/cdx: not in enabled drivers build config 00:43:14.474 bus/dpaa: not in enabled drivers build config 00:43:14.474 bus/fslmc: not in enabled drivers build config 00:43:14.474 bus/ifpga: not in enabled drivers build config 00:43:14.474 
bus/platform: not in enabled drivers build config 00:43:14.474 bus/uacce: not in enabled drivers build config 00:43:14.474 bus/vmbus: not in enabled drivers build config 00:43:14.474 common/cnxk: not in enabled drivers build config 00:43:14.474 common/mlx5: not in enabled drivers build config 00:43:14.474 common/nfp: not in enabled drivers build config 00:43:14.474 common/nitrox: not in enabled drivers build config 00:43:14.474 common/qat: not in enabled drivers build config 00:43:14.474 common/sfc_efx: not in enabled drivers build config 00:43:14.474 mempool/bucket: not in enabled drivers build config 00:43:14.474 mempool/cnxk: not in enabled drivers build config 00:43:14.474 mempool/dpaa: not in enabled drivers build config 00:43:14.474 mempool/dpaa2: not in enabled drivers build config 00:43:14.474 mempool/octeontx: not in enabled drivers build config 00:43:14.474 mempool/stack: not in enabled drivers build config 00:43:14.474 dma/cnxk: not in enabled drivers build config 00:43:14.474 dma/dpaa: not in enabled drivers build config 00:43:14.474 dma/dpaa2: not in enabled drivers build config 00:43:14.474 dma/hisilicon: not in enabled drivers build config 00:43:14.474 dma/idxd: not in enabled drivers build config 00:43:14.474 dma/ioat: not in enabled drivers build config 00:43:14.474 dma/skeleton: not in enabled drivers build config 00:43:14.474 net/af_packet: not in enabled drivers build config 00:43:14.474 net/af_xdp: not in enabled drivers build config 00:43:14.474 net/ark: not in enabled drivers build config 00:43:14.474 net/atlantic: not in enabled drivers build config 00:43:14.474 net/avp: not in enabled drivers build config 00:43:14.474 net/axgbe: not in enabled drivers build config 00:43:14.474 net/bnx2x: not in enabled drivers build config 00:43:14.474 net/bnxt: not in enabled drivers build config 00:43:14.474 net/bonding: not in enabled drivers build config 00:43:14.474 net/cnxk: not in enabled drivers build config 00:43:14.474 net/cpfl: not in enabled drivers build config 00:43:14.474 net/cxgbe: not in enabled drivers build config 00:43:14.474 net/dpaa: not in enabled drivers build config 00:43:14.474 net/dpaa2: not in enabled drivers build config 00:43:14.474 net/e1000: not in enabled drivers build config 00:43:14.474 net/ena: not in enabled drivers build config 00:43:14.474 net/enetc: not in enabled drivers build config 00:43:14.474 net/enetfec: not in enabled drivers build config 00:43:14.474 net/enic: not in enabled drivers build config 00:43:14.474 net/failsafe: not in enabled drivers build config 00:43:14.474 net/fm10k: not in enabled drivers build config 00:43:14.474 net/gve: not in enabled drivers build config 00:43:14.474 net/hinic: not in enabled drivers build config 00:43:14.474 net/hns3: not in enabled drivers build config 00:43:14.474 net/i40e: not in enabled drivers build config 00:43:14.474 net/iavf: not in enabled drivers build config 00:43:14.474 net/ice: not in enabled drivers build config 00:43:14.474 net/idpf: not in enabled drivers build config 00:43:14.474 net/igc: not in enabled drivers build config 00:43:14.474 net/ionic: not in enabled drivers build config 00:43:14.474 net/ipn3ke: not in enabled drivers build config 00:43:14.474 net/ixgbe: not in enabled drivers build config 00:43:14.474 net/mana: not in enabled drivers build config 00:43:14.474 net/memif: not in enabled drivers build config 00:43:14.474 net/mlx4: not in enabled drivers build config 00:43:14.474 net/mlx5: not in enabled drivers build config 00:43:14.474 net/mvneta: not in enabled drivers 
build config 00:43:14.474 net/mvpp2: not in enabled drivers build config 00:43:14.474 net/netvsc: not in enabled drivers build config 00:43:14.474 net/nfb: not in enabled drivers build config 00:43:14.474 net/nfp: not in enabled drivers build config 00:43:14.474 net/ngbe: not in enabled drivers build config 00:43:14.474 net/null: not in enabled drivers build config 00:43:14.474 net/octeontx: not in enabled drivers build config 00:43:14.474 net/octeon_ep: not in enabled drivers build config 00:43:14.474 net/pcap: not in enabled drivers build config 00:43:14.474 net/pfe: not in enabled drivers build config 00:43:14.474 net/qede: not in enabled drivers build config 00:43:14.474 net/ring: not in enabled drivers build config 00:43:14.474 net/sfc: not in enabled drivers build config 00:43:14.474 net/softnic: not in enabled drivers build config 00:43:14.474 net/tap: not in enabled drivers build config 00:43:14.474 net/thunderx: not in enabled drivers build config 00:43:14.474 net/txgbe: not in enabled drivers build config 00:43:14.474 net/vdev_netvsc: not in enabled drivers build config 00:43:14.474 net/vhost: not in enabled drivers build config 00:43:14.474 net/virtio: not in enabled drivers build config 00:43:14.474 net/vmxnet3: not in enabled drivers build config 00:43:14.474 raw/*: missing internal dependency, "rawdev" 00:43:14.474 crypto/armv8: not in enabled drivers build config 00:43:14.474 crypto/bcmfs: not in enabled drivers build config 00:43:14.474 crypto/caam_jr: not in enabled drivers build config 00:43:14.474 crypto/ccp: not in enabled drivers build config 00:43:14.474 crypto/cnxk: not in enabled drivers build config 00:43:14.474 crypto/dpaa_sec: not in enabled drivers build config 00:43:14.474 crypto/dpaa2_sec: not in enabled drivers build config 00:43:14.474 crypto/ipsec_mb: not in enabled drivers build config 00:43:14.474 crypto/mlx5: not in enabled drivers build config 00:43:14.474 crypto/mvsam: not in enabled drivers build config 00:43:14.474 crypto/nitrox: not in enabled drivers build config 00:43:14.474 crypto/null: not in enabled drivers build config 00:43:14.474 crypto/octeontx: not in enabled drivers build config 00:43:14.474 crypto/openssl: not in enabled drivers build config 00:43:14.474 crypto/scheduler: not in enabled drivers build config 00:43:14.474 crypto/uadk: not in enabled drivers build config 00:43:14.474 crypto/virtio: not in enabled drivers build config 00:43:14.474 compress/isal: not in enabled drivers build config 00:43:14.474 compress/mlx5: not in enabled drivers build config 00:43:14.474 compress/nitrox: not in enabled drivers build config 00:43:14.474 compress/octeontx: not in enabled drivers build config 00:43:14.474 compress/zlib: not in enabled drivers build config 00:43:14.474 regex/*: missing internal dependency, "regexdev" 00:43:14.474 ml/*: missing internal dependency, "mldev" 00:43:14.474 vdpa/ifc: not in enabled drivers build config 00:43:14.474 vdpa/mlx5: not in enabled drivers build config 00:43:14.474 vdpa/nfp: not in enabled drivers build config 00:43:14.474 vdpa/sfc: not in enabled drivers build config 00:43:14.474 event/*: missing internal dependency, "eventdev" 00:43:14.474 baseband/*: missing internal dependency, "bbdev" 00:43:14.474 gpu/*: missing internal dependency, "gpudev" 00:43:14.474 00:43:14.474 00:43:15.043 Build targets in project: 85 00:43:15.043 00:43:15.043 DPDK 24.03.0 00:43:15.043 00:43:15.043 User defined options 00:43:15.043 default_library : static 00:43:15.043 libdir : lib 00:43:15.043 prefix : 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:43:15.043 b_lto : true 00:43:15.043 b_sanitize : address 00:43:15.043 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:43:15.043 c_link_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:43:15.043 cpu_instruction_set: native 00:43:15.043 disable_apps : test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump 00:43:15.043 disable_libs : mldev,jobstats,bpf,argparse,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec 00:43:15.043 enable_docs : false 00:43:15.043 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:43:15.043 enable_kmods : false 00:43:15.043 max_lcores : 128 00:43:15.043 tests : false 00:43:15.043 00:43:15.043 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja 00:43:15.611 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:43:15.611 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:43:15.611 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:43:15.611 [3/268] Linking static target lib/librte_kvargs.a 00:43:15.611 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:43:15.868 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:43:15.868 [6/268] Linking static target lib/librte_log.a 00:43:15.868 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:43:15.868 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:43:16.126 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:43:16.126 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:43:16.126 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:43:16.126 [12/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:43:16.126 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:43:16.383 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:43:16.383 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:43:16.642 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:43:16.642 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:43:16.642 [18/268] Linking target lib/librte_log.so.24.1 00:43:16.901 [19/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:43:16.901 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:43:16.901 [21/268] Linking target lib/librte_kvargs.so.24.1 00:43:17.160 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:43:17.160 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:43:17.160 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:43:17.160 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:43:17.160 [26/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:43:17.160 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:43:17.418 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:43:17.418 [29/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:43:17.418 [30/268] Linking static target lib/librte_telemetry.a 00:43:17.418 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:43:17.676 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:43:17.676 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:43:17.934 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:43:17.934 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:43:17.934 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:43:17.934 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:43:17.934 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:43:17.934 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:43:17.934 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:43:17.934 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:43:18.192 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:43:18.451 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:43:18.451 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:43:18.710 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:43:18.710 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:43:18.710 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:43:18.969 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:43:18.969 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:43:18.969 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:43:19.228 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:43:19.228 [52/268] Linking target lib/librte_telemetry.so.24.1 00:43:19.228 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:43:19.228 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:43:19.487 [55/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:43:19.487 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:43:19.487 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:43:19.487 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:43:19.746 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:43:19.746 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:43:19.746 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:43:19.746 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:43:19.746 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:43:20.005 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:43:20.005 [65/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:43:20.005 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:43:20.263 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:43:20.521 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:43:20.521 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:43:20.521 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:43:20.521 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:43:20.780 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:43:20.780 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:43:20.780 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:43:20.780 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:43:20.780 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:43:20.780 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:43:21.348 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:43:21.348 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:43:21.348 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:43:21.348 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:43:21.348 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:43:21.607 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:43:21.607 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:43:21.607 [85/268] Linking static target lib/librte_ring.a 00:43:21.866 [86/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:43:21.866 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:43:22.125 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:43:22.125 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:43:22.125 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:43:22.125 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:43:22.125 [92/268] Linking static target lib/librte_eal.a 00:43:22.125 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:43:22.383 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:43:22.383 [95/268] Linking static target lib/librte_mempool.a 00:43:22.642 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:43:22.642 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:43:22.642 [98/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:43:22.642 [99/268] Linking static target lib/librte_rcu.a 00:43:22.901 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:43:22.901 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:43:22.901 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:43:22.901 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:43:22.901 [104/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:43:23.160 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:43:23.160 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:43:23.419 
[107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:43:23.419 [108/268] Linking static target lib/librte_net.a 00:43:23.678 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:43:23.678 [110/268] Linking static target lib/librte_meter.a 00:43:23.678 [111/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:43:23.678 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:43:23.936 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:43:23.937 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:43:23.937 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:43:24.195 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:43:24.195 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:43:24.195 [118/268] Linking static target lib/librte_mbuf.a 00:43:24.762 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:43:24.762 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:43:25.021 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:43:25.021 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:43:25.278 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:43:25.278 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:43:25.536 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:43:25.536 [126/268] Linking static target lib/librte_pci.a 00:43:25.536 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:43:25.794 [128/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:43:25.794 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:43:25.794 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:43:25.794 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:43:25.794 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:43:26.053 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:43:26.053 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:43:26.053 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:43:26.053 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:43:26.346 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:43:26.346 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:43:26.346 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:43:26.346 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:43:26.346 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:43:26.346 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:43:26.346 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:43:26.604 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:43:26.604 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:43:26.863 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:43:26.863 
[147/268] Linking static target lib/librte_cmdline.a 00:43:27.121 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:43:27.121 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:43:27.699 [150/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:43:27.699 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:43:27.699 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:43:27.699 [153/268] Linking static target lib/librte_timer.a 00:43:27.957 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:43:27.957 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:43:27.957 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:43:27.957 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:43:27.957 [158/268] Linking static target lib/librte_compressdev.a 00:43:28.214 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:43:28.214 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:43:28.473 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:43:28.473 [162/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:28.473 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:43:28.731 [164/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:43:28.731 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:43:28.731 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:43:28.989 [167/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:43:29.554 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:43:29.554 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:43:29.554 [170/268] Linking static target lib/librte_dmadev.a 00:43:29.812 [171/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:43:29.812 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:43:30.742 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:43:30.742 [174/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:30.742 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:43:30.998 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:43:30.998 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:43:30.998 [178/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:43:30.998 [179/268] Linking static target lib/librte_ethdev.a 00:43:30.998 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:43:31.562 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:43:31.562 [182/268] Linking static target lib/librte_power.a 00:43:31.821 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:43:31.821 [184/268] Linking static target lib/librte_reorder.a 00:43:32.079 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:43:32.337 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:43:32.337 [187/268] Generating 
lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:43:32.595 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:43:32.595 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:43:32.595 [190/268] Linking static target lib/librte_security.a 00:43:32.595 [191/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:43:32.595 [192/268] Linking static target lib/librte_cryptodev.a 00:43:32.595 [193/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:43:32.854 [194/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:43:32.854 [195/268] Linking static target lib/librte_hash.a 00:43:32.854 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:43:33.788 [197/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:43:33.788 [198/268] Linking target lib/librte_eal.so.24.1 00:43:33.788 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:43:33.788 [200/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:43:33.788 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:33.788 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:43:34.045 [203/268] Linking target lib/librte_meter.so.24.1 00:43:34.046 [204/268] Linking target lib/librte_pci.so.24.1 00:43:34.046 [205/268] Linking target lib/librte_ring.so.24.1 00:43:34.046 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:43:34.303 [207/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:43:34.303 [208/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:43:34.303 [209/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:43:34.868 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:43:34.868 [211/268] Linking target lib/librte_timer.so.24.1 00:43:34.868 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:43:34.868 [213/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:43:35.126 [214/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:43:35.126 [215/268] Linking target lib/librte_dmadev.so.24.1 00:43:35.126 [216/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:43:35.383 [217/268] Linking target lib/librte_rcu.so.24.1 00:43:35.383 [218/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:43:35.383 [219/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:43:35.641 [220/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:43:35.899 [221/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:43:35.899 [222/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:43:35.899 [223/268] Linking target lib/librte_mempool.so.24.1 00:43:36.157 [224/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:43:36.157 [225/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:43:36.157 [226/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:43:36.415 [227/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 
00:43:36.415 [228/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:43:36.415 [229/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:43:36.415 [230/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:43:36.415 [231/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:43:36.415 [232/268] Linking static target drivers/librte_bus_vdev.a 00:43:36.674 [233/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:43:36.674 [234/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:43:36.674 [235/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:43:36.674 [236/268] Linking static target drivers/librte_bus_pci.a 00:43:36.674 [237/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:36.932 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:43:36.932 [239/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:43:36.932 [240/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:43:37.190 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:43:37.190 [242/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:43:37.190 [243/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:43:37.190 [244/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:43:37.190 [245/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:43:37.190 [246/268] Linking static target drivers/librte_mempool_ring.a 00:43:38.123 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:43:38.689 [248/268] Linking target lib/librte_mbuf.so.24.1 00:43:38.689 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:43:39.255 [250/268] Linking target drivers/librte_bus_pci.so.24.1 00:43:39.255 [251/268] Linking target lib/librte_reorder.so.24.1 00:43:39.513 [252/268] Linking target lib/librte_compressdev.so.24.1 00:43:40.079 [253/268] Linking target lib/librte_net.so.24.1 00:43:40.079 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:43:41.474 [255/268] Linking target lib/librte_cmdline.so.24.1 00:43:41.731 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:43:41.731 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:43:42.298 [258/268] Linking target lib/librte_security.so.24.1 00:43:44.831 [259/268] Linking target lib/librte_ethdev.so.24.1 00:43:44.831 [260/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:43:46.207 [261/268] Linking target lib/librte_hash.so.24.1 00:43:46.207 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:43:47.141 [263/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:43:48.075 [264/268] Linking target lib/librte_power.so.24.1 00:44:14.625 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:44:14.625 [266/268] Linking static target lib/librte_vhost.a 00:44:15.999 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:44:34.097 [268/268] Linking target 
lib/librte_vhost.so.24.1 00:44:34.097 INFO: autodetecting backend as ninja 00:44:34.097 INFO: calculating backend command to run: /var/spdk/dependencies/pip/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:44:34.097 CC lib/ut/ut.o 00:44:34.097 CC lib/ut_mock/mock.o 00:44:34.097 CC lib/log/log.o 00:44:34.097 CC lib/log/log_flags.o 00:44:34.097 CC lib/log/log_deprecated.o 00:44:34.097 LIB libspdk_ut_mock.a 00:44:34.097 LIB libspdk_log.a 00:44:34.097 LIB libspdk_ut.a 00:44:34.097 CC lib/util/base64.o 00:44:34.097 CC lib/util/bit_array.o 00:44:34.098 CC lib/util/cpuset.o 00:44:34.098 CC lib/util/crc16.o 00:44:34.098 CC lib/dma/dma.o 00:44:34.098 CC lib/ioat/ioat.o 00:44:34.098 CXX lib/trace_parser/trace.o 00:44:34.098 CC lib/util/crc32.o 00:44:34.098 CC lib/util/crc32c.o 00:44:34.098 CC lib/vfio_user/host/vfio_user_pci.o 00:44:34.098 CC lib/util/crc32_ieee.o 00:44:34.098 CC lib/vfio_user/host/vfio_user.o 00:44:34.098 CC lib/util/crc64.o 00:44:34.098 CC lib/util/dif.o 00:44:34.098 LIB libspdk_dma.a 00:44:34.098 CC lib/util/fd.o 00:44:34.098 CC lib/util/fd_group.o 00:44:34.098 CC lib/util/file.o 00:44:34.098 LIB libspdk_ioat.a 00:44:34.098 CC lib/util/hexlify.o 00:44:34.098 CC lib/util/iov.o 00:44:34.098 CC lib/util/math.o 00:44:34.098 CC lib/util/net.o 00:44:34.098 CC lib/util/pipe.o 00:44:34.098 LIB libspdk_vfio_user.a 00:44:34.098 CC lib/util/strerror_tls.o 00:44:34.098 CC lib/util/string.o 00:44:34.098 CC lib/util/uuid.o 00:44:34.098 CC lib/util/xor.o 00:44:34.098 CC lib/util/zipf.o 00:44:34.098 LIB libspdk_util.a 00:44:34.098 CC lib/json/json_parse.o 00:44:34.098 CC lib/conf/conf.o 00:44:34.098 CC lib/json/json_util.o 00:44:34.098 CC lib/json/json_write.o 00:44:34.098 CC lib/idxd/idxd.o 00:44:34.098 CC lib/rdma_provider/common.o 00:44:34.098 CC lib/env_dpdk/env.o 00:44:34.098 CC lib/rdma_utils/rdma_utils.o 00:44:34.098 CC lib/vmd/vmd.o 00:44:34.098 LIB libspdk_trace_parser.a 00:44:34.098 CC lib/vmd/led.o 00:44:34.098 CC lib/rdma_provider/rdma_provider_verbs.o 00:44:34.098 CC lib/idxd/idxd_user.o 00:44:34.098 LIB libspdk_conf.a 00:44:34.098 CC lib/idxd/idxd_kernel.o 00:44:34.098 CC lib/env_dpdk/memory.o 00:44:34.098 CC lib/env_dpdk/pci.o 00:44:34.098 LIB libspdk_json.a 00:44:34.098 LIB libspdk_rdma_utils.a 00:44:34.098 CC lib/env_dpdk/init.o 00:44:34.098 LIB libspdk_rdma_provider.a 00:44:34.098 CC lib/env_dpdk/threads.o 00:44:34.098 CC lib/env_dpdk/pci_ioat.o 00:44:34.098 CC lib/jsonrpc/jsonrpc_server.o 00:44:34.098 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:44:34.098 LIB libspdk_idxd.a 00:44:34.098 CC lib/jsonrpc/jsonrpc_client.o 00:44:34.098 LIB libspdk_vmd.a 00:44:34.098 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:44:34.098 CC lib/env_dpdk/pci_virtio.o 00:44:34.098 CC lib/env_dpdk/pci_vmd.o 00:44:34.098 CC lib/env_dpdk/pci_idxd.o 00:44:34.098 CC lib/env_dpdk/pci_event.o 00:44:34.098 CC lib/env_dpdk/sigbus_handler.o 00:44:34.098 CC lib/env_dpdk/pci_dpdk.o 00:44:34.098 CC lib/env_dpdk/pci_dpdk_2207.o 00:44:34.098 CC lib/env_dpdk/pci_dpdk_2211.o 00:44:34.098 LIB libspdk_jsonrpc.a 00:44:34.373 CC lib/rpc/rpc.o 00:44:34.644 LIB libspdk_rpc.a 00:44:34.644 CC lib/keyring/keyring.o 00:44:34.644 CC lib/keyring/keyring_rpc.o 00:44:34.644 CC lib/notify/notify.o 00:44:34.644 CC lib/notify/notify_rpc.o 00:44:34.644 CC lib/trace/trace_flags.o 00:44:34.904 CC lib/trace/trace_rpc.o 00:44:34.904 CC lib/trace/trace.o 00:44:34.904 LIB libspdk_notify.a 00:44:34.904 LIB libspdk_keyring.a 00:44:34.904 LIB libspdk_trace.a 00:44:35.162 LIB libspdk_env_dpdk.a 00:44:35.162 CC lib/thread/thread.o 
00:44:35.162 CC lib/thread/iobuf.o 00:44:35.162 CC lib/sock/sock.o 00:44:35.162 CC lib/sock/sock_rpc.o 00:44:35.729 LIB libspdk_sock.a 00:44:35.987 CC lib/nvme/nvme_ctrlr_cmd.o 00:44:35.987 CC lib/nvme/nvme_ctrlr.o 00:44:35.987 CC lib/nvme/nvme_fabric.o 00:44:35.987 CC lib/nvme/nvme_pcie_common.o 00:44:35.987 CC lib/nvme/nvme_ns_cmd.o 00:44:35.987 CC lib/nvme/nvme_ns.o 00:44:35.987 CC lib/nvme/nvme_pcie.o 00:44:35.987 CC lib/nvme/nvme_qpair.o 00:44:35.987 CC lib/nvme/nvme.o 00:44:36.246 LIB libspdk_thread.a 00:44:36.246 CC lib/nvme/nvme_quirks.o 00:44:36.813 CC lib/nvme/nvme_transport.o 00:44:36.813 CC lib/nvme/nvme_discovery.o 00:44:36.813 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:44:36.813 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:44:36.813 CC lib/nvme/nvme_tcp.o 00:44:36.813 CC lib/nvme/nvme_opal.o 00:44:36.813 CC lib/nvme/nvme_io_msg.o 00:44:37.072 CC lib/nvme/nvme_poll_group.o 00:44:37.072 CC lib/nvme/nvme_zns.o 00:44:37.330 CC lib/nvme/nvme_stubs.o 00:44:37.589 CC lib/nvme/nvme_auth.o 00:44:37.589 CC lib/nvme/nvme_cuse.o 00:44:37.589 CC lib/nvme/nvme_rdma.o 00:44:37.847 CC lib/accel/accel.o 00:44:37.847 CC lib/accel/accel_rpc.o 00:44:37.847 CC lib/accel/accel_sw.o 00:44:37.847 CC lib/blob/blobstore.o 00:44:37.847 CC lib/init/json_config.o 00:44:38.106 CC lib/init/subsystem.o 00:44:38.106 CC lib/init/subsystem_rpc.o 00:44:38.106 CC lib/blob/request.o 00:44:38.106 CC lib/virtio/virtio.o 00:44:38.106 CC lib/init/rpc.o 00:44:38.364 CC lib/blob/zeroes.o 00:44:38.364 CC lib/blob/blob_bs_dev.o 00:44:38.364 CC lib/virtio/virtio_vhost_user.o 00:44:38.364 CC lib/virtio/virtio_vfio_user.o 00:44:38.364 LIB libspdk_accel.a 00:44:38.364 LIB libspdk_init.a 00:44:38.364 CC lib/virtio/virtio_pci.o 00:44:38.622 CC lib/bdev/bdev.o 00:44:38.622 CC lib/bdev/bdev_rpc.o 00:44:38.622 CC lib/bdev/part.o 00:44:38.622 CC lib/bdev/bdev_zone.o 00:44:38.622 CC lib/event/app.o 00:44:38.622 CC lib/bdev/scsi_nvme.o 00:44:38.622 CC lib/event/reactor.o 00:44:38.622 LIB libspdk_virtio.a 00:44:38.622 CC lib/event/log_rpc.o 00:44:38.880 CC lib/event/app_rpc.o 00:44:38.880 CC lib/event/scheduler_static.o 00:44:38.880 LIB libspdk_nvme.a 00:44:39.138 LIB libspdk_event.a 00:44:39.706 LIB libspdk_blob.a 00:44:39.964 CC lib/blobfs/blobfs.o 00:44:39.964 CC lib/blobfs/tree.o 00:44:39.964 CC lib/lvol/lvol.o 00:44:40.222 LIB libspdk_bdev.a 00:44:40.480 CC lib/nbd/nbd.o 00:44:40.480 CC lib/nbd/nbd_rpc.o 00:44:40.480 CC lib/ublk/ublk.o 00:44:40.480 CC lib/ublk/ublk_rpc.o 00:44:40.480 CC lib/nvmf/ctrlr.o 00:44:40.480 CC lib/nvmf/ctrlr_discovery.o 00:44:40.480 CC lib/scsi/dev.o 00:44:40.480 CC lib/ftl/ftl_core.o 00:44:40.737 LIB libspdk_blobfs.a 00:44:40.737 CC lib/ftl/ftl_init.o 00:44:40.737 LIB libspdk_lvol.a 00:44:40.737 CC lib/ftl/ftl_layout.o 00:44:40.737 CC lib/nvmf/ctrlr_bdev.o 00:44:40.737 CC lib/nvmf/subsystem.o 00:44:40.737 CC lib/nvmf/nvmf.o 00:44:40.737 CC lib/ftl/ftl_debug.o 00:44:40.994 CC lib/scsi/lun.o 00:44:40.994 LIB libspdk_nbd.a 00:44:40.994 CC lib/scsi/port.o 00:44:40.994 CC lib/ftl/ftl_io.o 00:44:40.994 CC lib/ftl/ftl_sb.o 00:44:40.994 CC lib/ftl/ftl_l2p.o 00:44:40.994 CC lib/ftl/ftl_l2p_flat.o 00:44:40.994 LIB libspdk_ublk.a 00:44:40.994 CC lib/scsi/scsi.o 00:44:40.994 CC lib/ftl/ftl_nv_cache.o 00:44:41.251 CC lib/scsi/scsi_bdev.o 00:44:41.251 CC lib/ftl/ftl_band.o 00:44:41.252 CC lib/ftl/ftl_band_ops.o 00:44:41.252 CC lib/ftl/ftl_writer.o 00:44:41.252 CC lib/ftl/ftl_rq.o 00:44:41.252 CC lib/ftl/ftl_reloc.o 00:44:41.252 CC lib/nvmf/nvmf_rpc.o 00:44:41.509 CC lib/nvmf/transport.o 00:44:41.509 CC lib/nvmf/tcp.o 00:44:41.509 
CC lib/nvmf/stubs.o 00:44:41.509 CC lib/nvmf/mdns_server.o 00:44:41.509 CC lib/nvmf/rdma.o 00:44:41.509 CC lib/scsi/scsi_pr.o 00:44:41.509 CC lib/nvmf/auth.o 00:44:41.509 CC lib/scsi/scsi_rpc.o 00:44:41.766 CC lib/scsi/task.o 00:44:41.766 CC lib/ftl/ftl_l2p_cache.o 00:44:41.766 CC lib/ftl/ftl_p2l.o 00:44:41.766 CC lib/ftl/mngt/ftl_mngt.o 00:44:41.766 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:44:41.766 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:44:41.766 CC lib/ftl/mngt/ftl_mngt_startup.o 00:44:41.766 LIB libspdk_scsi.a 00:44:41.766 CC lib/ftl/mngt/ftl_mngt_md.o 00:44:42.024 CC lib/ftl/mngt/ftl_mngt_misc.o 00:44:42.024 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:44:42.024 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:44:42.024 CC lib/ftl/mngt/ftl_mngt_band.o 00:44:42.024 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:44:42.024 CC lib/iscsi/conn.o 00:44:42.282 CC lib/vhost/vhost.o 00:44:42.282 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:44:42.282 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:44:42.282 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:44:42.282 CC lib/vhost/vhost_rpc.o 00:44:42.282 CC lib/ftl/utils/ftl_conf.o 00:44:42.282 CC lib/ftl/utils/ftl_md.o 00:44:42.282 CC lib/iscsi/init_grp.o 00:44:42.539 CC lib/ftl/utils/ftl_mempool.o 00:44:42.539 CC lib/ftl/utils/ftl_bitmap.o 00:44:42.539 CC lib/ftl/utils/ftl_property.o 00:44:42.539 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:44:42.539 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:44:42.539 CC lib/iscsi/iscsi.o 00:44:42.539 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:44:42.539 CC lib/iscsi/md5.o 00:44:42.797 LIB libspdk_nvmf.a 00:44:42.797 CC lib/iscsi/param.o 00:44:42.797 CC lib/iscsi/portal_grp.o 00:44:42.797 CC lib/iscsi/tgt_node.o 00:44:42.797 CC lib/iscsi/iscsi_subsystem.o 00:44:42.797 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:44:42.797 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:44:42.797 CC lib/vhost/vhost_scsi.o 00:44:43.054 CC lib/iscsi/iscsi_rpc.o 00:44:43.054 CC lib/iscsi/task.o 00:44:43.054 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:44:43.054 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:44:43.055 CC lib/ftl/upgrade/ftl_sb_v3.o 00:44:43.055 CC lib/vhost/vhost_blk.o 00:44:43.055 CC lib/ftl/upgrade/ftl_sb_v5.o 00:44:43.055 CC lib/vhost/rte_vhost_user.o 00:44:43.312 CC lib/ftl/nvc/ftl_nvc_dev.o 00:44:43.312 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:44:43.312 CC lib/ftl/base/ftl_base_dev.o 00:44:43.312 CC lib/ftl/base/ftl_base_bdev.o 00:44:43.570 LIB libspdk_ftl.a 00:44:43.570 LIB libspdk_iscsi.a 00:44:44.502 LIB libspdk_vhost.a 00:44:44.502 CC module/env_dpdk/env_dpdk_rpc.o 00:44:44.759 CC module/accel/ioat/accel_ioat.o 00:44:44.759 CC module/keyring/file/keyring.o 00:44:44.759 CC module/accel/error/accel_error.o 00:44:44.759 CC module/accel/dsa/accel_dsa.o 00:44:44.759 CC module/accel/iaa/accel_iaa.o 00:44:44.759 CC module/scheduler/dynamic/scheduler_dynamic.o 00:44:44.759 CC module/blob/bdev/blob_bdev.o 00:44:44.759 CC module/keyring/linux/keyring.o 00:44:44.759 CC module/sock/posix/posix.o 00:44:44.759 LIB libspdk_env_dpdk_rpc.a 00:44:44.759 CC module/accel/error/accel_error_rpc.o 00:44:44.759 CC module/keyring/file/keyring_rpc.o 00:44:44.759 CC module/keyring/linux/keyring_rpc.o 00:44:44.759 CC module/accel/dsa/accel_dsa_rpc.o 00:44:44.759 CC module/accel/ioat/accel_ioat_rpc.o 00:44:44.759 LIB libspdk_scheduler_dynamic.a 00:44:45.017 CC module/accel/iaa/accel_iaa_rpc.o 00:44:45.017 LIB libspdk_accel_error.a 00:44:45.017 LIB libspdk_blob_bdev.a 00:44:45.017 LIB libspdk_keyring_file.a 00:44:45.017 LIB libspdk_keyring_linux.a 00:44:45.017 LIB libspdk_accel_ioat.a 00:44:45.017 LIB libspdk_accel_dsa.a 
00:44:45.017 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:44:45.017 LIB libspdk_accel_iaa.a 00:44:45.017 CC module/scheduler/gscheduler/gscheduler.o 00:44:45.273 CC module/bdev/delay/vbdev_delay.o 00:44:45.273 CC module/bdev/gpt/gpt.o 00:44:45.273 CC module/bdev/error/vbdev_error.o 00:44:45.273 CC module/bdev/malloc/bdev_malloc.o 00:44:45.273 CC module/bdev/lvol/vbdev_lvol.o 00:44:45.273 LIB libspdk_scheduler_dpdk_governor.a 00:44:45.273 CC module/blobfs/bdev/blobfs_bdev.o 00:44:45.273 CC module/bdev/malloc/bdev_malloc_rpc.o 00:44:45.273 CC module/bdev/null/bdev_null.o 00:44:45.273 LIB libspdk_scheduler_gscheduler.a 00:44:45.273 LIB libspdk_sock_posix.a 00:44:45.273 CC module/bdev/null/bdev_null_rpc.o 00:44:45.273 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:44:45.273 CC module/bdev/gpt/vbdev_gpt.o 00:44:45.273 CC module/bdev/error/vbdev_error_rpc.o 00:44:45.530 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:44:45.530 CC module/bdev/delay/vbdev_delay_rpc.o 00:44:45.530 LIB libspdk_bdev_malloc.a 00:44:45.530 LIB libspdk_bdev_null.a 00:44:45.530 LIB libspdk_bdev_error.a 00:44:45.530 LIB libspdk_bdev_gpt.a 00:44:45.530 LIB libspdk_blobfs_bdev.a 00:44:45.530 CC module/bdev/nvme/bdev_nvme.o 00:44:45.530 LIB libspdk_bdev_delay.a 00:44:45.530 CC module/bdev/nvme/bdev_nvme_rpc.o 00:44:45.530 CC module/bdev/nvme/nvme_rpc.o 00:44:45.530 LIB libspdk_bdev_lvol.a 00:44:45.530 CC module/bdev/passthru/vbdev_passthru.o 00:44:45.530 CC module/bdev/split/vbdev_split.o 00:44:45.787 CC module/bdev/raid/bdev_raid.o 00:44:45.787 CC module/bdev/ftl/bdev_ftl.o 00:44:45.787 CC module/bdev/zone_block/vbdev_zone_block.o 00:44:45.787 CC module/bdev/aio/bdev_aio.o 00:44:45.787 CC module/bdev/iscsi/bdev_iscsi.o 00:44:45.787 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:44:45.787 CC module/bdev/split/vbdev_split_rpc.o 00:44:46.045 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:44:46.045 CC module/bdev/nvme/bdev_mdns_client.o 00:44:46.045 CC module/bdev/ftl/bdev_ftl_rpc.o 00:44:46.045 LIB libspdk_bdev_split.a 00:44:46.045 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:44:46.045 CC module/bdev/aio/bdev_aio_rpc.o 00:44:46.045 LIB libspdk_bdev_passthru.a 00:44:46.045 CC module/bdev/nvme/vbdev_opal.o 00:44:46.045 LIB libspdk_bdev_iscsi.a 00:44:46.045 CC module/bdev/nvme/vbdev_opal_rpc.o 00:44:46.045 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:44:46.302 CC module/bdev/raid/bdev_raid_rpc.o 00:44:46.302 CC module/bdev/virtio/bdev_virtio_scsi.o 00:44:46.302 LIB libspdk_bdev_zone_block.a 00:44:46.302 LIB libspdk_bdev_ftl.a 00:44:46.302 LIB libspdk_bdev_aio.a 00:44:46.302 CC module/bdev/raid/bdev_raid_sb.o 00:44:46.302 CC module/bdev/raid/raid0.o 00:44:46.302 CC module/bdev/virtio/bdev_virtio_blk.o 00:44:46.302 CC module/bdev/raid/raid1.o 00:44:46.302 CC module/bdev/raid/concat.o 00:44:46.302 CC module/bdev/raid/raid5f.o 00:44:46.302 CC module/bdev/virtio/bdev_virtio_rpc.o 00:44:46.559 LIB libspdk_bdev_virtio.a 00:44:46.817 LIB libspdk_bdev_raid.a 00:44:47.075 LIB libspdk_bdev_nvme.a 00:44:47.647 CC module/event/subsystems/vmd/vmd.o 00:44:47.647 CC module/event/subsystems/vmd/vmd_rpc.o 00:44:47.647 CC module/event/subsystems/iobuf/iobuf.o 00:44:47.647 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:44:47.647 CC module/event/subsystems/scheduler/scheduler.o 00:44:47.647 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:44:47.647 CC module/event/subsystems/sock/sock.o 00:44:47.647 CC module/event/subsystems/keyring/keyring.o 00:44:47.647 LIB libspdk_event_keyring.a 00:44:47.647 LIB libspdk_event_scheduler.a 00:44:47.647 LIB 
libspdk_event_vhost_blk.a 00:44:47.647 LIB libspdk_event_sock.a 00:44:47.647 LIB libspdk_event_iobuf.a 00:44:47.647 LIB libspdk_event_vmd.a 00:44:47.904 CC module/event/subsystems/accel/accel.o 00:44:47.904 LIB libspdk_event_accel.a 00:44:48.161 CC module/event/subsystems/bdev/bdev.o 00:44:48.419 LIB libspdk_event_bdev.a 00:44:48.677 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:44:48.677 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:44:48.677 CC module/event/subsystems/ublk/ublk.o 00:44:48.677 CC module/event/subsystems/scsi/scsi.o 00:44:48.677 CC module/event/subsystems/nbd/nbd.o 00:44:48.677 LIB libspdk_event_ublk.a 00:44:48.677 LIB libspdk_event_nbd.a 00:44:48.677 LIB libspdk_event_scsi.a 00:44:48.935 LIB libspdk_event_nvmf.a 00:44:48.935 CC module/event/subsystems/iscsi/iscsi.o 00:44:48.935 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:44:49.193 LIB libspdk_event_vhost_scsi.a 00:44:49.193 LIB libspdk_event_iscsi.a 00:44:49.450 CXX app/trace/trace.o 00:44:49.450 CC app/trace_record/trace_record.o 00:44:49.450 CC app/spdk_lspci/spdk_lspci.o 00:44:49.450 CC app/iscsi_tgt/iscsi_tgt.o 00:44:49.450 CC examples/interrupt_tgt/interrupt_tgt.o 00:44:49.450 CC examples/ioat/perf/perf.o 00:44:49.450 CC app/nvmf_tgt/nvmf_main.o 00:44:49.450 CC examples/util/zipf/zipf.o 00:44:49.450 CC app/spdk_tgt/spdk_tgt.o 00:44:49.450 CC test/thread/poller_perf/poller_perf.o 00:44:49.707 LINK spdk_lspci 00:44:49.707 LINK zipf 00:44:49.707 LINK iscsi_tgt 00:44:49.707 LINK interrupt_tgt 00:44:49.707 LINK poller_perf 00:44:49.707 LINK spdk_trace_record 00:44:49.707 LINK nvmf_tgt 00:44:49.707 LINK ioat_perf 00:44:49.707 LINK spdk_tgt 00:44:49.965 LINK spdk_trace 00:44:59.929 CC app/spdk_nvme_perf/perf.o 00:45:02.458 LINK spdk_nvme_perf 00:45:12.429 CC app/spdk_nvme_identify/identify.o 00:45:14.961 CC examples/thread/thread/thread_ex.o 00:45:16.333 LINK thread 00:45:17.706 LINK spdk_nvme_identify 00:45:24.264 CC examples/ioat/verify/verify.o 00:45:24.521 LINK verify 00:45:26.418 CC test/thread/lock/spdk_lock.o 00:45:34.526 LINK spdk_lock 00:45:41.083 CC app/spdk_nvme_discover/discovery_aer.o 00:45:42.457 LINK spdk_nvme_discover 00:46:21.185 CC examples/sock/hello_world/hello_sock.o 00:46:21.185 LINK hello_sock 00:46:21.185 CC app/spdk_top/spdk_top.o 00:46:26.448 LINK spdk_top 00:46:34.566 CC test/dma/test_dma/test_dma.o 00:46:36.467 LINK test_dma 00:46:36.467 CC app/vhost/vhost.o 00:46:38.370 LINK vhost 00:46:53.251 CC examples/vmd/lsvmd/lsvmd.o 00:46:53.251 LINK lsvmd 00:46:56.535 CC app/spdk_dd/spdk_dd.o 00:46:57.922 LINK spdk_dd 00:46:58.489 CC app/fio/nvme/fio_plugin.o 00:47:01.016 LINK spdk_nvme 00:47:19.100 CC examples/idxd/perf/perf.o 00:47:20.476 LINK idxd_perf 00:48:16.748 CC examples/vmd/led/led.o 00:48:16.748 LINK led 00:48:16.748 CC app/fio/bdev/fio_plugin.o 00:48:17.689 CC test/app/bdev_svc/bdev_svc.o 00:48:18.263 LINK spdk_bdev 00:48:19.206 LINK bdev_svc 00:48:29.181 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:48:31.087 LINK nvme_fuzz 00:48:33.620 CC examples/nvme/hello_world/hello_world.o 00:48:35.523 LINK hello_world 00:49:02.063 CC test/app/histogram_perf/histogram_perf.o 00:49:02.063 LINK histogram_perf 00:49:28.608 CC test/app/jsoncat/jsoncat.o 00:49:28.608 LINK jsoncat 00:49:55.149 CC examples/accel/perf/accel_perf.o 00:49:55.149 CC examples/nvme/reconnect/reconnect.o 00:49:55.408 LINK reconnect 00:49:55.408 LINK accel_perf 00:49:57.958 TEST_HEADER include/spdk/config.h 00:49:57.958 CXX test/cpp_headers/accel.o 00:49:59.330 CXX test/cpp_headers/accel_module.o 00:50:00.265 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:50:00.265 CXX test/cpp_headers/assert.o
00:50:00.265 CC test/env/mem_callbacks/mem_callbacks.o
00:50:01.641 CXX test/cpp_headers/barrier.o
00:50:02.576 CXX test/cpp_headers/base64.o
00:50:03.951 CXX test/cpp_headers/bdev.o
00:50:04.518 LINK mem_callbacks
00:50:05.133 CXX test/cpp_headers/bdev_module.o
00:50:06.506 LINK iscsi_fuzz
00:50:06.763 CXX test/cpp_headers/bdev_zone.o
00:50:08.144 CXX test/cpp_headers/bit_array.o
00:50:09.519 CXX test/cpp_headers/bit_pool.o
00:50:10.453 CC test/env/vtophys/vtophys.o
00:50:10.453 CXX test/cpp_headers/blob.o
00:50:11.388 LINK vtophys
00:50:11.646 CXX test/cpp_headers/blob_bdev.o
00:50:13.021 CXX test/cpp_headers/blobfs.o
00:50:14.394 CXX test/cpp_headers/blobfs_bdev.o
00:50:15.789 CXX test/cpp_headers/conf.o
00:50:16.047 CC test/event/event_perf/event_perf.o
00:50:17.421 LINK event_perf
00:50:17.421 CXX test/cpp_headers/config.o
00:50:17.682 CXX test/cpp_headers/cpuset.o
00:50:19.581 CXX test/cpp_headers/crc16.o
00:50:19.839 CXX test/cpp_headers/crc32.o
00:50:21.214 CXX test/cpp_headers/crc64.o
00:50:22.589 CXX test/cpp_headers/dif.o
00:50:24.518 CXX test/cpp_headers/dma.o
00:50:26.418 CXX test/cpp_headers/endian.o
00:50:27.792 CXX test/cpp_headers/env.o
00:50:29.691 CXX test/cpp_headers/env_dpdk.o
00:50:31.064 CXX test/cpp_headers/event.o
00:50:32.964 CXX test/cpp_headers/fd.o
00:50:34.336 CXX test/cpp_headers/fd_group.o
00:50:36.235 CXX test/cpp_headers/file.o
00:50:37.609 CXX test/cpp_headers/ftl.o
00:50:39.506 CXX test/cpp_headers/gpt_spec.o
00:50:40.880 CXX test/cpp_headers/hexlify.o
00:50:42.785 CXX test/cpp_headers/histogram_data.o
00:50:44.160 CXX test/cpp_headers/idxd.o
00:50:44.419 CC test/nvme/aer/aer.o
00:50:45.791 CXX test/cpp_headers/idxd_spec.o
00:50:46.724 LINK aer
00:50:47.290 CXX test/cpp_headers/init.o
00:50:48.664 CXX test/cpp_headers/ioat.o
00:50:48.664 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:50:49.599 LINK env_dpdk_post_init
00:50:49.599 CXX test/cpp_headers/ioat_spec.o
00:50:49.871 CC test/rpc_client/rpc_client_test.o
00:50:50.817 CXX test/cpp_headers/iscsi_spec.o
00:50:51.074 LINK rpc_client_test
00:50:51.639 CXX test/cpp_headers/json.o
00:50:52.572 CXX test/cpp_headers/jsonrpc.o
00:50:53.506 CXX test/cpp_headers/keyring.o
00:50:54.074 CC examples/nvme/nvme_manage/nvme_manage.o
00:50:54.641 CXX test/cpp_headers/keyring_module.o
00:50:56.015 CXX test/cpp_headers/likely.o
00:50:56.950 LINK nvme_manage
00:50:56.950 CXX test/cpp_headers/log.o
00:50:58.325 CXX test/cpp_headers/lvol.o
00:50:59.699 CXX test/cpp_headers/memory.o
00:51:01.074 CXX test/cpp_headers/mmio.o
00:51:02.008 CC test/event/reactor/reactor.o
00:51:02.008 CXX test/cpp_headers/nbd.o
00:51:02.266 CXX test/cpp_headers/net.o
00:51:02.833 LINK reactor
00:51:03.400 CXX test/cpp_headers/notify.o
00:51:04.776 CXX test/cpp_headers/nvme.o
00:51:06.150 CXX test/cpp_headers/nvme_intel.o
00:51:07.084 CXX test/cpp_headers/nvme_ocssd.o
00:51:08.983 CXX test/cpp_headers/nvme_ocssd_spec.o
00:51:09.260 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:51:10.205 CXX test/cpp_headers/nvme_spec.o
00:51:10.205 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:51:11.578 CXX test/cpp_headers/nvme_zns.o
00:51:12.513 LINK vhost_fuzz
00:51:13.080 CXX test/cpp_headers/nvmf.o
00:51:14.457 CXX test/cpp_headers/nvmf_cmd.o
00:51:15.393 CC test/event/reactor_perf/reactor_perf.o
00:51:15.651 CXX test/cpp_headers/nvmf_fc_spec.o
00:51:16.587 LINK reactor_perf
00:51:17.153 CXX test/cpp_headers/nvmf_spec.o
00:51:17.153 CC test/event/app_repeat/app_repeat.o
00:51:18.087 LINK app_repeat
00:51:18.346 CXX test/cpp_headers/nvmf_transport.o
00:51:20.900 CXX test/cpp_headers/opal.o
00:51:22.277 CXX test/cpp_headers/opal_spec.o
00:51:23.653 CXX test/cpp_headers/pci_ids.o
00:51:25.553 CXX test/cpp_headers/pipe.o
00:51:26.487 CXX test/cpp_headers/queue.o
00:51:26.745 CXX test/cpp_headers/reduce.o
00:51:28.119 CC test/accel/dif/dif.o
00:51:28.377 CXX test/cpp_headers/rpc.o
00:51:30.275 CXX test/cpp_headers/scheduler.o
00:51:31.208 LINK dif
00:51:31.774 CXX test/cpp_headers/scsi.o
00:51:34.304 CXX test/cpp_headers/scsi_spec.o
00:51:35.679 CXX test/cpp_headers/sock.o
00:51:37.054 CXX test/cpp_headers/stdinc.o
00:51:38.429 CXX test/cpp_headers/string.o
00:51:39.802 CXX test/cpp_headers/thread.o
00:51:40.736 CC test/nvme/reset/reset.o
00:51:40.994 CXX test/cpp_headers/trace.o
00:51:42.395 CXX test/cpp_headers/trace_parser.o
00:51:42.395 LINK reset
00:51:43.330 CXX test/cpp_headers/tree.o
00:51:43.588 CXX test/cpp_headers/ublk.o
00:51:44.964 CXX test/cpp_headers/util.o
00:51:46.338 CXX test/cpp_headers/uuid.o
00:51:46.338 CXX test/cpp_headers/version.o
00:51:46.905 CC examples/nvme/arbitration/arbitration.o
00:51:47.472 CXX test/cpp_headers/vfio_user_pci.o
00:51:48.038 CC test/env/memory/memory_ut.o
00:51:48.605 LINK arbitration
00:51:48.863 CXX test/cpp_headers/vfio_user_spec.o
00:51:49.429 CXX test/cpp_headers/vhost.o
00:51:50.364 CXX test/cpp_headers/vmd.o
00:51:51.299 CXX test/cpp_headers/xor.o
00:51:52.673 CXX test/cpp_headers/zipf.o
00:51:54.048 CC test/env/pci/pci_ut.o
00:51:54.048 LINK memory_ut
00:51:55.424 LINK pci_ut
00:51:56.800 CC examples/blob/hello_world/hello_blob.o
00:51:58.701 LINK hello_blob
00:51:58.701 CC test/app/stub/stub.o
00:52:00.074 LINK stub
00:52:01.972 CC test/event/scheduler/scheduler.o
00:52:03.344 LINK scheduler
00:52:04.717 CC test/blobfs/mkfs/mkfs.o
00:52:06.090 LINK mkfs
00:52:28.060 CC test/lvol/esnap/esnap.o
00:52:28.060 CC test/nvme/sgl/sgl.o
00:52:28.060 LINK sgl
00:52:32.244 CC examples/nvme/hotplug/hotplug.o
00:52:33.177 LINK hotplug
00:52:41.289 CC test/nvme/e2edp/nvme_dp.o
00:52:43.188 LINK nvme_dp
00:52:49.746 LINK esnap
00:52:52.292 CC examples/nvme/cmb_copy/cmb_copy.o
00:52:53.675 LINK cmb_copy
00:53:15.604 CC examples/nvme/abort/abort.o
00:53:18.136 LINK abort
00:53:44.683 CC test/nvme/overhead/overhead.o
00:53:44.683 CC test/nvme/err_injection/err_injection.o
00:53:44.683 LINK overhead
00:53:44.683 LINK err_injection
00:53:54.655 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:53:55.222 LINK pmr_persistence
00:53:59.410 CC test/nvme/startup/startup.o
00:54:01.314 LINK startup
00:54:09.424 CC test/nvme/reserve/reserve.o
00:54:10.359 LINK reserve
00:54:36.903 CC test/nvme/simple_copy/simple_copy.o
00:54:37.471 LINK simple_copy
00:54:40.000 CC examples/blob/cli/blobcli.o
00:54:44.189 LINK blobcli
00:54:59.069 CC test/nvme/connect_stress/connect_stress.o
00:55:00.972 LINK connect_stress
00:55:03.503 CC test/nvme/boot_partition/boot_partition.o
00:55:04.438 LINK boot_partition
00:55:08.660 CC test/nvme/compliance/nvme_compliance.o
00:55:09.593 LINK nvme_compliance
00:55:10.528 CC examples/bdev/hello_world/hello_bdev.o
00:55:11.095 CC examples/bdev/bdevperf/bdevperf.o
00:55:12.031 LINK hello_bdev
00:55:14.562 LINK bdevperf
00:55:15.524 CC test/bdev/bdevio/bdevio.o
00:55:17.427 CC test/nvme/fused_ordering/fused_ordering.o
00:55:17.427 LINK bdevio
00:55:18.367 LINK fused_ordering
00:55:18.626 CC test/nvme/doorbell_aers/doorbell_aers.o
00:55:19.561 LINK doorbell_aers
00:55:37.669 CC test/nvme/fdp/fdp.o
00:55:37.669 LINK fdp
00:56:04.209 CC test/nvme/cuse/cuse.o
00:56:14.215 LINK cuse
00:57:21.888 CC examples/nvmf/nvmf/nvmf.o
00:57:21.888 LINK nvmf
00:57:48.434 00:41:42 -- spdk/autopackage.sh@44 -- $ make -j10 clean
00:57:48.434 make[1]: Nothing to be done for 'clean'.
00:57:53.728 00:41:48 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:57:53.728 00:41:48 -- common/autotest_common.sh@730 -- $ xtrace_disable
00:57:53.728 00:41:48 -- common/autotest_common.sh@10 -- $ set +x
00:57:53.728 00:41:48 -- spdk/autopackage.sh@48 -- $ timing_finish
00:57:53.728 00:41:48 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:57:53.728 00:41:48 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:57:53.728 00:41:48 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:57:53.728 00:41:48 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:57:53.728 00:41:48 -- pm/common@29 -- $ signal_monitor_resources TERM
00:57:53.728 00:41:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:57:53.728 00:41:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:57:53.728 00:41:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:57:53.728 00:41:48 -- pm/common@44 -- $ pid=125679
00:57:53.728 00:41:48 -- pm/common@50 -- $ kill -TERM 125679
00:57:53.728 00:41:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:57:53.728 00:41:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:57:53.728 00:41:48 -- pm/common@44 -- $ pid=125681
00:57:53.728 00:41:48 -- pm/common@50 -- $ kill -TERM 125681
00:57:53.728 + [[ -n 2384 ]]
00:57:53.728 + sudo kill 2384
00:57:53.996 [Pipeline] }
00:57:54.016 [Pipeline] // timeout
00:57:54.022 [Pipeline] }
00:57:54.040 [Pipeline] // stage
00:57:54.046 [Pipeline] }
00:57:54.063 [Pipeline] // catchError
00:57:54.073 [Pipeline] stage
00:57:54.076 [Pipeline] { (Stop VM)
00:57:54.090 [Pipeline] sh
00:57:54.370 + vagrant halt
00:57:58.564 ==> default: Halting domain...
00:58:03.877 [Pipeline] sh
00:58:04.158 + vagrant destroy -f
00:58:07.443 ==> default: Removing domain...
00:58:08.045 [Pipeline] sh
00:58:08.327 + mv output /var/jenkins/workspace/ubuntu24-vg-autotest/output
00:58:08.341 [Pipeline] }
00:58:08.359 [Pipeline] // stage
00:58:08.365 [Pipeline] }
00:58:08.382 [Pipeline] // dir
00:58:08.388 [Pipeline] }
00:58:08.408 [Pipeline] // wrap
00:58:08.415 [Pipeline] }
00:58:08.430 [Pipeline] // catchError
00:58:08.440 [Pipeline] stage
00:58:08.444 [Pipeline] { (Epilogue)
00:58:08.458 [Pipeline] sh
00:58:08.741 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:58:30.678 [Pipeline] catchError
00:58:30.680 [Pipeline] {
00:58:30.695 [Pipeline] sh
00:58:31.006 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:58:31.006 Artifacts sizes are good
00:58:31.014 [Pipeline] }
00:58:31.032 [Pipeline] // catchError
00:58:31.044 [Pipeline] archiveArtifacts
00:58:31.052 Archiving artifacts
00:58:31.443 [Pipeline] cleanWs
00:58:31.454 [WS-CLEANUP] Deleting project workspace...
00:58:31.454 [WS-CLEANUP] Deferred wipeout is used...
00:58:31.461 [WS-CLEANUP] done
00:58:31.463 [Pipeline] }
00:58:31.481 [Pipeline] // stage
00:58:31.487 [Pipeline] }
00:58:31.505 [Pipeline] // node
00:58:31.512 [Pipeline] End of Pipeline
00:58:31.550 Finished: SUCCESS